Assembling the heterogeneous elements for (digital) learning

Month: January 2013

Exploring connected versus/and networked learning

On a very wet Australia Day long weekend I’m hoping to explore some of the differences, similarities and connections between networked and connected learning. This is all part of my attempt to participate in #etmooc, which is currently looking at “Connected Learning – Tools, Processes & Pedagogy”. Networked learning is the term I’m most familiar with and it appears to have a longer history. I’m wondering where connected learning has come from, why, and what it offers as a concept.

Another shot of the creek

#etmooc and Connected Learning

#etmooc’s connected learning introduction is hosted in a Google doc. It contains links to all of the resources, including the slides and a recording of the presentation. I’ve skimmed the slides and need to find the time to watch the presentation; Alec’s always worth a listen.

The “connected learning” term seems to derive from this infographic on connected learning and the folk who developed it. Their “What is Connected Learning” provides more of an overview, including principles (see the following table) and a research synthesis report.

Learning principles   | Design principles
--------------------- | ---------------------
Interest-powered      | Production-centered
Peer-supported        | Openly networked
Academically oriented | Shared purpose

My vague recollection is that this work comes from a newly announced collection of American academics and researchers. Searching around various websites (e.g. a hub and the Connected Learning site) reinforces that and suggests they are doing some interesting work.

Networked learning

I can’t shake the impression that their formulation and presentation of connected learning feels a little like a refinement/prettying up of “networked learning”. The Wikipedia page on Networked Learning offers this definition:

Networked learning is a process of developing and maintaining connections with people and information, and communicating in such a way so as to support one another’s learning.

McConnell et al (2012, p. 4) suggest:

The development of networked learning has largely been influenced by understanding of developments in technology to support learning alongside thinking stemming from the traditions of open learning and other radical pedagogies and humanistic educational ideas from the likes of Dewey, Freire, Giroux and Rogers

(the Wikipedia page adds Illich). McConnell et al (2012) is actually one of the editors’ contributions to a book arising from the 2012 Networked Learning Conference, a regular biennial European conference.

As part of their history of networked learning, McConnell et al (2012) offer a section titled “A pedagogic framework for networked learning” which includes “six broad areas of pedagogy that need to be addressed when designing networked learning courses” (p. 8). They are:

  1. Openness in the educational process.
  2. Self-determined learning.
  3. A real purpose in the cooperative process.
    Which includes the sentence “If learners have a real purpose in learning, they engage with the learning process in a qualitatively different way”.
  4. A supportive learning environment.
  5. Collaborative assessment of learning.
  6. Assessment and evaluation of the ongoing learning process.

I wonder if a table can show the overlap here. Take the connected learning principles from the table above and, ignoring the difference between learning and design principles, see how the “six broad areas of pedagogy” of networked learning fit (shown in the second column below).

Connected learning principle     | Networked learning pedagogy area
-------------------------------- | --------------------------------
Interest-powered (learning)      | Self-determined learning
Production-centered (design)     |
Peer-supported (learning)        | A supportive learning environment, described as “one where learners encourage and facilitate each other’s efforts”
Openly networked (design)        | Openness in the educational process
Academically oriented (learning) |
Shared purpose (design)          | A real purpose in the cooperative process, described as promoting “positive interdependence” (p. 9)

Not a perfect fit, but certainly some overlap. The chapter goes on to talk about work within Denmark and ends with a summary that includes:

The various scholars and practices associated with networked learning have an identifiable educational philosophy that has emerged out of those educational theories and approaches that can be linked to radical emancipatory and humanistic educational ideas and approaches. It can on the one hand be seen to emulate and reflect principles associated with areas of educational thinking, such as critical pedagogy (cf. Freire 1970; Giroux 1992; Negt 1975) and democratic and experiential learning (cf. Dewey 1916; Kolb et al. 1974). While on the other hand it is seen as an approach and pedagogy within the general field of technology mediated learning especially exploring the socio-cultural designs of learning as mediated by ICT and enacted by networked learning participants

The overview of the rest of the book provided by McConnell et al (2012) suggests that this community is interested in the increasingly prevalent alternate theoretical notions (e.g. connectivism) and is exploring what they mean for networked learning. Following the path of that exploration looks interesting; there are many interesting chapters in the book.

Back to connected learning and some blog prompts

The final chapter of the networked learning book introduced in the previous section, written by the editors of the book, considers four important questions, including “Is networked learning a theory, practice or pedagogy?”. It’s a question I also wonder about for the vision of connected learning, which is claimed to be “a model of learning that holds out the possibility of reimagining the experience of education in the information age”. But that is perhaps a bit of academic navel gazing for late on a wet Sunday night.

Let’s return to a couple of the blog prompts Alec had at the end of his presentation:

  • How important is connected learning? Why?
    It’s important. It’s important because it captures, for me at least, the current learning milieu. While I and others might argue about aspects of the particular definition of connected learning I point to above, it represents another perspective on the current learning milieu that is also being examined by the networked learning folk above and those in the connectivist camp (and a few others).

    It’s important because it captures how I’m currently learning and represents perhaps the richest environment/method in/through which I’ve ever been able to learn.

    It’s important, perhaps, because it represents a stepping stone in the path towards what learning will become. If you accept that

    We are living in a historical moment of transformation and realignment in the creation and sharing of knowledge, in social, political and economic life, and in global connectedness.

    then 30/40 years (at most) into this transformation, we can only be looking at the earliest possible blurry outlines of what learning will become.

  • Is it possible for our classrooms and institutions to support this kind of learning? If so, how?
    It would appear very unlikely that existing educational institutions could effectively support this kind of learning. Focusing just on the specifics of this particular connected learning definition, it is hard to see how the current grammar of school could possibly make sense of it. There are too many aspects of connected learning that would be seen “as nonsensical as an ungrammatical utterance”. “Interest-powered” in an era of national (e.g. standards-based) curricula is just one example.

    But then, I also think it will eventually change. I think it will change by bricolage, by accident, cultural and generational change, and unintended consequences. There won’t be one great vast top-down, strategic change driven by politicians and formal change processes. It will instead be the gradual accumulation of small changes that will lead to large slippages and change.

    But I could be wrong.

  • What skills and literacies are necessary for connected learning? How do we develop these?
    By taking “Connected learning 101”? Perhaps not. Don’t we learn by doing? By participation?

    I wonder if the desire to pre-identify the necessary skills and literacies doesn’t reveal an unquestioned tendency of educationalists to identify what we need to teach. I wonder if the skills and literacies required to engage in this “historical moment of transformation” (actually I find myself questioning the word “transformation”, which strikes me as so one-off; won’t the transformation be an ongoing process?) will emerge from participation and continue to emerge as those skills feed back into the transformative process. Does it make more sense to ask what appear to be useful skills and literacies for today?

  • What are limits of openness in regards to privacy & vulnerability? Are we creating or worsening a digital divide?
    An interesting question that I can’t answer just now, but one worth returning to.

    However, I do think that Ross (2012) may offer some interesting insights. This is another chapter from the same collection as McConnell et al (2012). The closing paragraph of the introduction to the chapter is:

    In what follows, I propose a set of (often conflicting) norms and expectations widely associated with blogging. These cluster around themes of authenticity, risk, pretence, othering, narcissism and commodification. I explore how these are reflected in the assumptions and practices of students and teachers, and go on to argue for greater attention to be given to the nature of online reflective writing, and a more explicit and critical engagement with the tensions it embodies.

References

McConnell, D., Hodgson, V., & Dirckinck-Holmfeld, L. (2012). Exploring the theory, pedagogy and practice of networked learning. In L. Dirckinck-Holmfeld, V. Hodgson, & D. McConnell (Eds.), Exploring the theory, pedagogy and practice of networked learning (pp. 3–24). New York, NY: Springer New York. doi:10.1007/978-1-4614-0496-5

Ross, J. (2012). Just what is being reflected in online reflection? New literacies for new media learning practices. In L. Dirckinck-Holmfeld, V. Hodgson, & D. McConnell (Eds.), Exploring the theory, pedagogy and practice of networked learning (pp. 191–207). New York, NY: Springer New York. doi:10.1007/978-1-4614-0496-5

1000 blog posts – a time to look back

According to the WordPress dashboard for this blog, this is the 1000th published post (I have 100-odd drafts that I never finished or thought better of posting). Given I’m about to mark my first year in a new job in a new institution in a new region, undergo my annual performance review, and commence a new academic year, it would seem time to reflect and think about the future.

But first, thanks to those folk who have read and contributed to the blog over the years. Much of the good from blogging has arisen from those connections.

Reflections on this process

A few reflections from writing the summary below.

The staying power of bad ideas

Many of the problems I see with institutional attempts to support quality learning and teaching remain. Mainly, I would propose, because it’s easier to accept the simple practice everyone else uses than to try to address the known problems. Examples include the strategic management of universities (perhaps the fundamental cause of the rest), attempts to make teaching conform to a standard (quality through consistency), and the reliance on end-of-semester student evaluations to determine the quality of teaching and teachers.

Purgatory was the best time to blog

2008 through 2010 was the best time for blogging. There was more engagement with and reflection upon the literature. Need to get back to that.

Perhaps the cause is the workload associated with life as a contemporary university academic: lots of centralised hoops, poor systems, etc.

I still think the quality of my blogging – in terms of the writing, the referencing, the depth, the contribution, the connections – could stand some improvement.

Workload implications

My blogging last year wasn’t great. Certainly less (in quantity and quality) than in 2008–2010. One reason for that is workload. The workload of a contemporary Australian academic in a regional university is high. In terms of the workload and the nature of the work there are certainly much worse jobs (though to some extent that evaluation is fairly subjective). But the problem with academia in this context is that the workload is increasingly being inflated by management mandates at the expense of research/thinking time. Consequently, academics are getting in trouble for not producing research outputs. To make it worse, some of that workload is being created by the poor quality of the institutional systems being put in place to help meet those mandates.

What to do

I’m not going to bother with grand plans. To some extent that’s wasted effort. You can’t predict what is going to happen; at best you can only create the capability to respond.

I need to blog more and to think about different approaches to improving the blogging. I need to focus more and stop retreating to some of the same topics. I need to start producing some work that moves some of these forward a bit. I need to connect and comment more, get out of my shell.

Perhaps I need to do more work to address the workload implications. Give myself some space by exploring solutions to the workload problem, especially around learning and teaching.

Origins

The first blog post was written in March 2006 when the blog was on a self-hosted version of WordPress (most of the links to blog posts within those early posts won’t work; they still point to the old host). So it’s taken almost seven years to clock up 1000 published posts.

At that stage I was thinking about using blogs in the teaching of C++ to first year Information Technology students. The first few posts are little more than experiments trying to find out how WordPress worked and what it felt like as an author/commenter.

It’s not until the 4th and 5th posts (in August 2006, suggesting the initial plan died) that I start storing away a bit of knowledge. The 5th post has a couple of Drucker quotes which still resonate strongly with me and perhaps should inform the “thinking about the future” part of this post, e.g.

“Planning” as the term is commonly understood is actually incompatible with an entrepreneurial society and economy…. innovation, almost by definition, has to be decentralized, ad hoc, autonomous, specific and microeconomic.

Not long after that is the first post about BAM (which I’ve just edited to point to the new home for BAM/BIM). It was actually being used at this stage by a Masters-level course in Information Systems; some of that story is told in this paper.

That experience led to the first mention of Web 2.0 course sites in September 2006. Given this 2013 post, this appears to be an issue that remains open. Somewhat surprisingly, a similar post from that time shows I was reading Steve Wheeler’s (@timbuckteeth) blog years ago.

It would seem that late 2006 saw me on quite the kick around this idea.

It’s also when I started work on “The missing Ps”, an attempt to develop a framework for thinking about technology adoption (and what was missing) in universities. This ended up with my first presentation of the idea, which is also my first Slideshare presentation. That eventually morphed into the Ps Framework and became the lens for the analysis done in the literature review of my thesis.

The move to middle management

In February 2007 I started a new job. For various reasons this was not a long-term role, nor was it, from a number of perspectives, a successful one. But it did commence my move away from being an Information Systems/Technology academic, which, given the shrinkage in those disciplines over recent years, is probably a positive.

The group did some interesting things. I got my chance to help implement a Web 2.0 course site and I blogged a bit more. Time in that job reinforced the silliness of much of what central L&T organisations/management do to encourage quality learning and teaching. I also participated in the ASCILITE mentoring scheme in 2007.

I did propose the idea of extreme learning during 2007. We also got involved in PLEs. Neither idea really went far at the institution.

We also engaged in a bit of an exploration of what students find useful.

Purgatory and the PhD

By the end of 2008 the writing was on the wall. It appeared likely I’d be made redundant, and for other reasons I decided it was time to move my website into a WordPress blog. The self-hosted website I’d started 14 years previously had to go away.

What perhaps irked me most about that move was the Google ranking. In 2008, the website was 7th in a Google search for “david jones”. Given the commonality of the name, this was quite nice. I wonder where it sits now? (I realise that Google’s search algorithm etc. has changed considerably over time.) After 16 pages of Google results I won’t go any further; the “David Jones” chain of stores consumes most of those. “david jones -store” removes most of those, but still no me. Of course, the blog is hosted in the US and its hostname is davidtjones etc. A Google search for “david jones blog” returns my blog as the 3rd result.

Late 2008 saw us purchase a new bull.

Wandilla Zanzibar - Big Z

By the end of 2008 I did get to visit Paris with my wife. She presented a paper and I got an award which had more to do with my esteemed PhD supervisor (as it happens I’ll meet up with her again today) than me.

Eiffel from Trocadero

During the Paris trip I found out that I was getting a redundancy, but it wasn’t entirely clear what I would be employed as.

It was during this time that I first posted about the silliness of L&T evaluations, academic staff development, and minimum standards for course websites. Things which, four or five years later, have changed little.

During this time I did finally start working fairly consistently on the PhD, which did eventually get finished. I also worked and thought more about BAM and BIM. Must get back to the idea of cooked feeds for BIM (a rough sketch of the idea is below). Other vague ideas and interests from that time include: reflective alignment, task corruption, the myth of rationality, the fad cycle and management fashions, the grammar of school, nudging, and the Chasm.
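
What might a “cooked” feed involve? A minimal sketch only, assuming “cooked” simply means a single processed feed merged from many student blog feeds. The feed URLs are hypothetical; the fetching and parsing uses the feedparser library.

    import time
    import feedparser

    # Hypothetical student blog feeds a BIM-like tool would be tracking.
    student_feeds = [
        "https://student-one.example.com/feed/",
        "https://student-two.example.com/feed/",
    ]

    # Fetch and parse every feed, pooling all the entries.
    entries = []
    for url in student_feeds:
        entries.extend(feedparser.parse(url).entries)

    # "Cook" the pool into one merged stream: newest first, tolerating
    # entries that carry no parsed publication date.
    entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
                 reverse=True)

    for e in entries[:10]:  # the ten most recent posts across all students
        print(e.get("published", "undated"), "|", e.get("title", "untitled"))

A real BIM-style tool would need caching, error handling and per-student feed registration on top, but the core of a cooked feed is just this merge.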

Early 2009 saw us become breeders of race horses!!

Malina - the new money burner

July 2009 also saw the commencement of work on BIM.

More importantly, it was this period that really saw the growth of my PLN, e.g. this post which mentions Mark Smithers (@marksmithers) and Claire Brookes (@clairebrooks).

It also saw the start of the Indicators project, our little foray into “learning analytics”.

By the end of July 2010 I was finally made redundant.

Student life

Initially I became a full-time PhD student and a Dad.

Kronosaurus corner

By the end of 2010 I’d just about finished the PhD and enrolled to become a high school teacher. By January 2011 the thesis was finished, by May it was accepted, and graduation was in July.

Dr Jones

While I kept blogging about educational technology stuff, most of the blog for 2011 was spent reflecting on what I was doing in the Graduate Diploma of Learning and Teaching. The experience as a student of institutional e-learning was interesting.

And then in November I had the opportunity to return to “life” as a University academic.

Academia redux

So almost a year ago today I started life as an education academic. Still feel like an immigrant, which is probably a good thing. A year of teaching other people’s courses will be redressed somewhat this year.

The blog posts last year focused on understanding the courses I was teaching, grappling with the current state of institutional e-learning, a bit of work on BIM, and sharing a bit of research thinking and writing.

Some statistics

  • 2012 – Visits: ~83,000; 92 posts
  • 2011 – Visits: ~70,000; 142 posts
  • 2010 – Visits: 59,880
  • 2009 – Visits: 48,482
  • 2008 – Visits: 3,181
    Moved to WordPress.com in October 2008.

Taking a look at the "Decoding Learning" report

Late last year Nesta – a UK-based charity – released the report Decoding Learning: The proof, promise and potential of digital education. Nesta commissioned the London Knowledge Lab and the Learning Sciences Research Institute at the University of Nottingham to “analyse how technology has been used in the UK education systems and lessons from around the world. Uniquely, we wanted this to be set within a clear framework for better understanding the impact on learning experiences”.

The following is a summary and some reflections on my reading of the report. I’m thinking of using it as a resource for the course I’ll be teaching soon.

If you’re after a shorter summary, the Nesta press release might provide what you’re looking for.

Reflections

While there appears to be some value in the learning themes, I thought there were some definite grey areas in terms of innovations being allocated to particular themes.

That said, the collection of examples of technology use divided into these themes provides what I see as a very useful resource for pre-service teachers. It gives them a taste of what is possible and what good uses of technology look like. This is something I think might be valuable in the early days of the course. Start with concrete examples, before getting into the theories and the planning.

There’s also the idea of using this list and the themes as the foundation for the co-construction of some sort of database or site of examples: a list students could add to through their explorations, perhaps later expanding on each of the examples by suggesting what learning theories, curriculum elements, year levels, etc. might be relevant.

There are also a few other points apparently useful for a pre-service teacher thinking about ICTs (i.e. they reflect some of the limitations in thinking about ICTs that I saw last year):

  • Starting with the learning theme, rather than the technology.
  • The point about linking learning activities across themes and experiences to reinforce learning and other plusses.
  • The importance placed on context. The ecology of resources model may be useful in scaffolding some thinking.

Chapter 1 – Introduction and scene setting

Key questions for education

  • Has the range of technologies helped improve learners’ experiences and the standards they achieve?
  • Is this investment just languishing as kit in the cupboard?
  • What more can decision makers, schools, teachers, parents and the technology industry do to ensure the full potential of innovative technology is exploited?

Digital technologies have a profound impact on the management of learning, but “evidence of digital technologies producing real transformation in learning and teaching remains elusive” (p. 8)

“Our starting point is that digital technologies do offer opportunities for innovation that can transform teaching and learning, and that our challenge is to identify the shape that these innovations take.” (p. 8)

There has been much research, and “synthesising reviews do find some evidence of positive impact”, but there are two complicating factors that limit these findings:

  1. the evidence is drawn from “a huge variety of learning contexts” (p. 9).
  2. “findings are invariably drawn from evidence about how technology supports existing teaching and learning practices, rather than transforming those practices” (p. 9)

Learning themes

Based on the learner’s actions and the way they are resourced and structured, the report is organised around eight learning themes:

  1. Learning from experts.
  2. Learning through inquiry.
  3. Learning with others.
  4. Learning through practising.
  5. Learning through making.
  6. Learning from assessment.
  7. Learning through exploring.
  8. Learning in and across settings.

Research process

The report used both research and “grey” (blogs etc.) literature:

  • A review of the last three years of academic sources – 1000 publications – from which 124 research-led example innovations were chosen. Relevant reviews and meta-reviews were included.
  • Informal literature identified 86 teacher-led innovations from a pool of 300.
  • The combined 210 cases form the basis for the report.
  • A comparative judgement method/tool (described in an appendix) was used to have 150 innovations ranked/compared by a group of experts (a rough sketch of how this style of ranking can work is below).

There is an Excel spreadsheet with the top 150 innovations, including URLs.
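
The comparative judgement tool itself is only described in an appendix, so purely as an illustration (not the report’s actual method): this style of ranking typically fits something like a Bradley-Terry model to the experts’ pairwise decisions and sorts by the estimated strengths. A minimal Python sketch with made-up innovation names:

    from collections import defaultdict

    def bradley_terry(comparisons, iterations=100):
        """Rank items from (winner, loser) pairs of expert judgements."""
        items = {item for pair in comparisons for item in pair}
        wins = defaultdict(int)         # times each item won a comparison
        pair_counts = defaultdict(int)  # times each unordered pair was judged
        for winner, loser in comparisons:
            wins[winner] += 1
            pair_counts[frozenset((winner, loser))] += 1
        strength = {item: 1.0 for item in items}
        for _ in range(iterations):     # standard iterative update
            new = {}
            for i in items:
                denom = sum(n / (strength[i] + strength[j])
                            for pair, n in pair_counts.items() if i in pair
                            for j in pair - {i})
                new[i] = wins[i] / denom if denom else strength[i]
            total = sum(new.values())   # normalise so strengths sum to one
            strength = {i: s / total for i, s in new.items()}
        return sorted(items, key=lambda i: strength[i], reverse=True)

    judgements = [("Innovation A", "Innovation B"),
                  ("Innovation B", "Innovation C"),
                  ("Innovation A", "Innovation C"),
                  ("Innovation C", "Innovation B")]
    print(bradley_terry(judgements))    # best-to-worst ranking

The appeal of the approach is that experts only ever make “which of these two is better?” decisions, which tend to be more reliable than absolute scoring.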

Chapter summary

  • Chapter 2 – discusses evidence of innovation in each of the learning themes.
  • Chapter 3 – how the 8 themes are related and how they can be linked by technology to produce a rich learning experience.
  • Chapter 4 – looks at how learning context shaped the impact of new technologies on learning.
  • Chapter 5 – identify what needs to be done if innovative and effective uses of technology in education are to happen.

Chapter 2 – Learning with technology

This chapter works through the learning themes, explaining each type of learning and then presenting examples from the 210 innovations with the greatest potential.

  1. Learning from experts.
    Highlights are

    • The increasing wealth of online resources offers great potential for both teachers and learners; but places great demands on both to evaluate and filter the information on offer.
      So YouTube videos, e-books etc fit here.
    • Innovations in Learning from Experts have tended to focus on the exposition of information rather than fostering dialogue between teachers and learners.
    • Digital technologies offer new ways of presenting information and ideas in a dynamic and interactive way. However learners may need the support of teachers to interpret those ideas and to convert that information into knowledge.
    • New forms of representation (e.g. augmented objects) offer the potential to enrich the dialogue about information between teachers and learners.

    The Mathematics Image Trainer (described in this paper, which offers some theoretical/pedagogical rationale for why this type of approach is important for learning mathematics) is an example. Luckin et al describe how it allows the teacher to focus on asking the student to explain what they think is happening. Hence the innovation is framed as “a powerful tool to enhance discussion between the teacher and the learner”.

    Aside: I wonder how hard this would be to implement using ARSpot. It might make it more widely available, since it would only require a Windows computer with a camera and the ARSpot print-outs, rather than a Wii or similar.

    Tutorial and exposition are the two kinds of interaction between learner and teacher. Mentions Bloom’s suggestion that one-to-one teaching is the most effective way to learn. These methods represent traditional approaches to teaching that many examples of technology build upon (e.g. Khan Academy).

    Mentions lots of different examples.

  2. Learning with others.
    Considerable enthusiasm, but academic research is not filtering into the classroom. Teachers’ awareness of tools needs to be raised. “Priority should be given to developing tools that allow teachers to organise and manage episodes of joint learning.”

    Identify four social dimensions

    1. collaborative – help learners develop mutual understanding.
    2. networked – help learners interact.
    3. participative – help learners develop a strong community of knowledge.
    4. performative – allow the outcomes of collaborative learning to be shared.

    Three promising areas for development:

    1. representational tools that enable activities taking place to be presented to other learners;
      e.g. technology-enhanced spaces for acting; tools for capturing and sharing on-going achievements…
    2. scaffolding tools that provide a structure for learning with others.
    3. communication tools that support learners working at a distance to collaborate.
  3. Learning through making.
    Making and sharing is “one of the best ways people can learn”. One example is the construction of an environmental sensor and linkage with a mobile phone app.

    Highlights of the section are:

    • Success rests on two principles: learners must construct their own understanding; and create something they can share with others.
    • Digital technology can bring it alive by making it possible to construct just about anything and share, discuss, reflect and learn.
    • The motivational aspect and benefits of producing real-world outcomes of learning through making are frequently cited in teacher-led innovations.
    • Depends on the appropriate use of digital tools in suitable environments.

    Mentions Papert and constructionism. Links to Logo and computer programming. Mentions Maker Faires etc.

    Most of the innovations are teacher-led, apparently little research.

    • Examples include helping learners construct notes and other material to improve their learning, electronic outlining tools, and learners developing presentations based on the information they collected during visits.
    • Scratch is mentioned, as are blogging and storytelling through Web 2.0 applications. ZooBurst creates 3D pop-up books with augmented reality features, a bit like ARSpot, but also a bit more than that.
    • 3D printing gets a mention.
    • The need for teacher support: teachers help students learn how to use technology critically, link multiple representations, and make the connections between individual learners’ constructions and whole-class understanding.
  4. Learning through exploring.
    Learners have always browsed, but information is now abundant and new skills/strategies are needed. Technology can help: 3D simulations, visualisations, technology-augmented spaces. The report found few examples of innovations in this theme and gives electronic blocks as one example.

    Includes work where learners search or browse information or engage in playful, game-like interactions. It can be opportunistic or more structured.

    • two principles: learners are given the freedom to act; they need to regulate their own actions (which is itself an important skill for learning).
    • Digital tools provide new ways to explore information and structure the environment to explore.
    • Limited research studies suggest it is underused and undervalued.
    • The few examples were of high quality suggesting potential.
  5. Learning through inquiry.
    Exploring the natural or material world by asking questions, making discoveries, and rigorously testing them. Technology may help organise inquiry or connect learners’ inquiries to real world scenarios.

    Enables learners to think critically and participate in evidence-based debates. More structured towards an end than learning through exploring. Seen to include: simulation, case-based learning, problem-focussed learning and scripted inquiry. The degree of structure varies.

  6. Learning through practising.
    Perhaps the most contentious application in some areas, but probably the most used: helping learners practise skills. Most effective when a variety of representations and interactions are used and it doesn’t “simply sugar-coat uninspiring or unchallenging activities”.

    Practice builds foundational knowledge to be used in other contexts. Use of tech in this sphere is rarely seen as innovative, but plusses include rich multimodal environments used to create challenging problems and appropriate feedback.

    Zombie Division is given as an example, which leads me to the “Serious Game Classification” site. Another example has kindergarten students using a digital dance mat to practise/compare number magnitude. Light-bot is nice; it appeals to the inner programmer in me.

    Hello programmed instruction.

  7. Learning from assessment.
    Being aware of what a learner understands is fundamental to increasing their understanding and knowledge. Technology can help: compile learning activities and enable both teachers and learners to reflect upon them; track the progress of learning and present that information in rich and interactive ways. There is little innovation in technology-supported assessment, and research innovation is modest. Most innovation focusses on self-assessment through reflection rather than being teacher-led, and is based on summative assessment of traditional subjects. More work on formative assessment and the assessment of other skills is required. Suggests learning analytics holds promise, as does e-assessment using social networks and other technologies that facilitate peer, collaborative and self-guided learning.

    Subtle Stone is given as a way to gain insight into students’ emotions.

  8. Learning in and across settings.
    Context of learning plays an important role in the quality of learning. Knowledge is deepened when applied across different locations, representations and activities. Technology provides a variety of devices to capture, store, compare and integrate material from a variety of settings.

    Key success factors

    • Understanding what parents really need in order to get them involved;
    • Recognising that activities designed for school are not necessarily transferable to the home (and vice versa);
    • Providing on-going support and ensuring use of technology at home is purposeful.

    Purple Mash is used as an example of transferring learning between home and school. Augmented reality for field trips gets a mention, as do uses of mobile devices to support field trips etc.

Chapter 3 – Bringing learning together

“To achieve a more rich, cohesive, and productive learning experience, we must consider the links that exist between different learning activities within and between themes.” This provides learners with a coherent episode, reinforces learning, and strengthens future learning.

Suggests the following

  • Learning themes are made up of
    • Learning activities (e.g. creating an animation), which are connected/embedded across different themes into
    • Learning episodes (e.g. lessons, projects, units) that are linked/sequenced to create a
    • broader learning experience at class, school, etc. levels.

Linking learning activities

57% of examples encompassed two or more forms of learning. Some had different learning activities within the one theme; others had learning activities across multiple themes. Often there would be a primary theme with another used in support.

Learning through making, learning with others and learning through exploring were most often used in a supporting role.

Chapter 4 – Context is important

Context is crucial for success with technology. Realising the potential of digital tools is contingent on how we use them and the context of learning.

Uses one of the authors’ models – Luckin’s Ecology of Resources – which essentially:

  • Has the learner surrounded by
    • Environment;
      Most examples from formal schooling – primary and secondary. The classroom may have specialist equipment/expertise that makes it easier. Digital tools tend to be usable in many environments.

      All learning environments have formal/informal rules for behaviour of teacher and student. This can limit technology use. Existing infrastructure may also limit it.

    • Knowledge and Skills;
      The way knowledge is organised shapes learning. e.g. separation in disciplines. Certain learning activities better suit some subjects. The whole question of what is knowledge is also a factor.
    • People;
      Teachers have a role to play in having innovations succeed; PD is an issue. Peer learners also impact on learners, and technology can help. Not to mention other people within the school – technical staff, teaching assistants, leadership etc. – and the broader community.
    • Tools.
      Breaks digital tools into hardware, applications, networks and platforms. Mentions infrastructure. Cloud computing. Thin clients (dead already). BYOD not mentioned. Lists three factors that can constrain wider adoption: cost, complexity, safety.
  • Between those and the learner are a set of filters.
  • Understanding these helps predict likely impact of technology and help roll them out.

Chapter 5 – Bringing research into reality

Understanding how technology can be used to improve learning is only part of the answer. Systemic challenges need to be addressed.

Learning from the evidence

Repeats the adage that technology alone won’t improve education: “we need to make better and more creative use of them” (p. 59).

The most compelling opportunities to improve learning through technology are:

  • Improve assessment.
    “too little innovative technology-supported practice in the critical area of Learning from assessment”. Don’t restrict it to the end of a learning episode and don’t make it “dull or dispiriting”. Learning analytics, adaptive assessment and the potential for instant statistics, knowledge maps, class data and badges. Also, how to assess knowledge and skills such as collaboration and leadership.
  • Learn by making.
    Lots of digital tools being used in making. Coding, robotics kits etc. But “careful consideration needs to be given to how the process of making leads to the desired learning outcome”.
  • Upgrade practising.
    The longest-standing and most popular application of technology. “But not all types of practice are equally beneficial”. It is most effective when it involves rich, challenging problems with appropriate feedback, rather than easy activities. The challenge here is in determining which ones are most effective, for whom and in what context.
  • Turn the world into a learning place.
    Most learning is in school and escaping the constraint of location is not simple. But digital tools enable this. It can “link learners with other learners, experiences and settings”. We need to stop thinking of learning taking place in isolation, in schools.
  • Make learning more social.
    Promote better teacher/student discussion and learner/learner discussion. Use technologies to create audiences for participatory or performance activities.
Key priorities for technology in learning

  • Link industry, research and practice.
    The gap between these groups is problematic, and there are advantages to all three if they are connected. There is a role for government and other stakeholders. Informal connections help, but formal connections are required.

    The role of context in research also needs to be reported to enable comparisons.

  • Make better use of what we’ve got.
    While access to technology is important, an emphasis on hardware limits examination of other opportunities.

    Teachers need to move to a “think and link” approach where tools are used in conjunction with other resources and a variety of learning activities. Teachers need to be able to digitally “stick and glue”, and need ways to share their uses of new technologies.

  • Connect learning technologies and activities.
    “Linking learning activities and using a variety of technologies and approaches” can lead to a richer experience. “Focusing on individual learning activities with single use technologies will not achieve the maximum impact”.

    But the tools aren’t there yet.

A sign of the limitations of institution-hosted e-portfolios? And cost as the ultimate driver

E-portfolios have increasingly been seen as a good thing: a welcome innovation in assessment practices that shows evidence of institutions improving their learning and teaching. At the same time, there’s been increasing questioning of whether a student’s e-portfolio should be hosted on an institutionally owned system. Various questions arise, including:

  • What happens when the student, as many often do, starts studying at another institution?
  • What happens when the institution decides not to support the e-portfolio system any longer?

Around this same time has been the growing question of just how much technology should be provided by the institution, given the increasingly widespread availability of technology. Back in the mid-1990s Australian universities were providing students with dial-up Internet access. They don’t do this any more. Email addresses? Mostly hosted by Google or other service providers.

How long can/will institutions provide e-portfolio systems?

A sign of the limitations of institutional provision of these systems is when you get an email from the folk supporting the institutional e-portfolio asking for details of assignments you’ve set that use the e-portfolio. This is so they can be aware of the peaks and be prepared for them.

I’m pretty sure WordPress.com doesn’t email its users asking for help in identifying peaks. Instead, they have the infrastructure and people in place to deal with the peaks. Can an institutional e-portfolio system ever hope to have the same capability? Or will the expense of doing so be what convinces the institution to allow students to use their own technology?

Cost as the ultimate driver

Central IT and support organisational units are loath to give up their systems and the subsequent control, even when there are better systems available externally. However, there appears to be a definite trend where cost becomes the ultimate driver/change agent. The reason they give up ownership is when it’s demonstrably cheaper to outsource than to provide an equivalent level of functionality.

That’s what happened with the provision of Internet access and student email accounts, and increasingly in the school sector it is the driver behind the adoption of bring your own device/technology (BYOD/BYOT).

What does it say about organisations – especially educational organisations – when technology choices are driven more by ownership, control and cost than by what is best for the organisation, its members and stakeholders (I can’t bring myself to use client/customer)?

Engaging with #etmooc – how and what perspective

#etmooc has commenced. The flood of introductory emails flowing from the #etmooc Google+ community is as sure a sign as any. The questions begin. How effective are all these introductions? How will I engage with the course? Why am I engaging?

Why engage?

The main reason is perhaps that this is the area I teach: educational technology, especially considerations around the application of educational technology within schools. #etmooc provides an opportunity to connect with a wide array of people in this area, including quite a few with some very interesting perspectives and insights.

Some other reasons I’m hoping to engage:

  1. To model the type of practice I’m hoping to see from the students in the course I’ll be teaching soon.
  2. To identify some activities, communities, and people that I can point my students toward.
  3. To connect with some new ideas and reflect on how that might change my practice.

How to engage

This is the $64K question. Will my participation crash and burn in this cMOOC, just like all those others? Will I make the time to engage?

Adding to this potential issue is the fact that I haven’t yet found the “syllabus”. What should I be doing? I can see the Topics & Schedule, which gives the overview, but what should I do, read, watch, listen to?

The guide for participants has some useful information. A lot of which is familiar. A sign that I’m a member of some level of this particular culture?

The guide for facilitators is also interesting. I like the quote from Herbert Simon

Learning results from what the student does and thinks and only from what the student does and thinks. The teacher can influence learning only by influencing what the student does to learn

But where is the list of tasks for me to engage with? Am I showing my traditional learner origins? Am I simply being thick and can’t find it?

Of course it would be on the etmooc blogs as the orientation week activity. Thanks to John Johnston’s introductory #etmooc video for the pointer.

My Introduction

I’ll take the easy/cop-out approach to the task and reuse one I’ve done before. This type of introductory task is something that Alec has used before and I liked it; I borrowed it and used it in my own teaching last year. As part of that task, I created a popplet introducing myself. That plus the above seems to fulfil the task requirements.

Do the introductions work?

One of the aims of the #etmooc introduction is to “help participants better relate and connect with you”. Based on the evidence in my inbox (lots of introductions) and the Google+ etmooc community (not many of those introductions have replies), there may not be that much connection. Perhaps there are lots of people lurking, viewing introductions and connecting via means other than the Google+ community?

The variety of tools being used is useful (e.g. this use of Animoto by Mairead), but are there connections being formed? Are there better ways of forming them?

For my course there will be a focus on reflection. An introduction like this – especially the “what to gain from the course” part – is intended to be used for reflection later in the course.

How do you help folk starting out in a course like this make connections?

The quality of the introductions is certainly one aspect. As is the ability to see some commonality (e.g. this example of a Brazilian connection). Would making the commonalities easier to see be a good thing? Would it close off the chance of diversity? (A toy sketch of surfacing commonalities is below.)
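
As a thought experiment on that last question, here is a toy sketch of one way commonalities could be surfaced: tag each introduction with interests and rank pairs of participants by the overlap (Jaccard similarity) of their tags. All names and tags below are made up; this illustrates the idea, it isn’t anything #etmooc actually does.

    from itertools import combinations

    # Hypothetical interest tags harvested from three introductions.
    intros = {
        "participant_a": {"primary", "ipads", "brazil", "photography"},
        "participant_b": {"highschool", "coding", "photography"},
        "participant_c": {"primary", "brazil", "literacy"},
    }

    def jaccard(a, b):
        """Overlap of two tag sets: |intersection| / |union|."""
        return len(a & b) / len(a | b)

    # Rank pairs of participants by shared interests, most similar first.
    pairs = sorted(combinations(intros, 2),
                   key=lambda p: jaccard(intros[p[0]], intros[p[1]]),
                   reverse=True)
    for x, y in pairs:
        print(x, "<->", y, "shared:", sorted(intros[x] & intros[y]))

Of course, surfacing only the most similar pairs is exactly the sort of thing that might close off diversity; the same calculation could just as easily suggest the least similar pairs.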

Making some "3100" thinking explicit

In around two months a couple of hundred pre-service teachers will be wanting/required to start engaging with the course EDC3100, ICTs and Pedagogy. For the last three or four months I’ve been reading a bit and generally mulling over what I’ll do and how far to go. Back in July I started off with this initial post. It’s now (past) time to make some of this explicit, make some design decisions and implement it. This is the start. Littered through the following will be questions and reminders to myself for further consideration.

If you have thoughts and criticisms, now would be a good time to contribute; any and all feedback is very much appreciated. Feel free to make suggestions on the questions I’ve left myself.

This post has been an on-going development process over a week or so. It’s a somewhat organised collection of thoughts, but it is time I stopped following leads, posted it and started seriously thinking about the implementation specifics. That will be next week’s task.

The rationale

Lisa Lane writes in this post about what she sees as the purpose of MOOCs

the opportunity to exploit the opportunities of the web, to form learning communities, to blow apart top-down teaching models, and to create something meaningful and valuable to participants.

While EDC3100 will not be a MOOC, Lisa’s description resonates strongly with what I’m trying to do and what’s influencing my current thinking. Here’s a bit more of the rationale.

Transformation

EDC3100 aims to help pre-service teachers figure out how to design learning and teaching in “ICT enriched environments”. Larkin et al (2012) suggest that how academics model the use of ICTs in learning and teaching is an important factor in developing the ICT competence of students. Our experiences shape us. The use of ICT in the courses taken by pre-service teachers is shaping their knowledge of ICTs and pedagogy. This raises the question of what types of experiences with ICTs for learning students gain from EDC3100, and whether they could be better.

How do you evaluate “better”? Well, one approach is to use something like the SAMR model (see the following image). There are similar frameworks/models, including the RAT Framework and the Computer Practice Framework, but the story is just about the same. Using SAMR, most of the use of ICTs in prior offerings of EDC3100 sits within the enhancement stage.

SAMR model image from Jenny Luca’s TPACK and SAMR blog post

Is it any wonder then that much of the use of ICTs we see when our students head out to teach also appears at that same level? Certainly what they see in 3100 is not the only, or even the major, reason the students struggle to be transformative in their use of ICT and pedagogy (3 weeks in a new school in someone else’s class working to their plans isn’t a great context for transformation). However, it would seem important for a course like 3100 to provide them with an opportunity to observe and participate in some examples of transformation.

Question: How do we provide students with the preparation and opportunity to demonstrate/experience the design of transformation with ICTs?

Question: A model like SAMR might solve a problem observed last year where pre-service teachers weren’t aware that there was more to ICTs and pedagogy than using an Interactive Whiteboard to show a YouTube video, or that having the students use Popplet to create a mindmap perhaps wasn’t a great advance over butcher’s paper and pens. Are there better alternatives to SAMR? The Computer Practice Framework adds a few extra considerations linked to common mistakes.

Building confidence and experience

Hammond et al (2009) identify personal experience of using ICT as an important contextual factor explaining why a pre-service teacher will use ICTs. This experience (Hammond et al, 2009, pp. 70–71):

gave student teachers the confidence to use ICT, allowed them to develop effective strategies for learning new skills and gave them an awareness of the value of ICT based on its application in their own learning

Given this, it would appear important for EDC3100 to provide a space for the pre-service teachers to expand their experience with ICTs and develop their confidence with them. To provide them with a foundation.

Hopefully we can help them develop their tech support skills and perhaps build on the advice from xkcd.

xkcd comic strip “Tech support cheat sheet”

Barton and Haydn (2006) found that the sheer volume of information about using ICT and pedagogy can overwhelm students. The volume of information, or its rate of increase, is not likely to have significantly reduced since 2006. Consequently the course has to help students develop the confidence and experience to deal with this volume. We have to start their journey towards becoming, as @palbion describes it, expert learners.

Question: Where are the good insights, models, theories etc around designing and scaffolding people’s development of these skills? Information literacy?

Reflection and feedback

Hammond et al (2009) quote a range of authors to identify structured reflection on the use of ICT as a powerful technique in developing pedagogic understanding. Similarly, “assignments which relate ICT to developing practice can be influential” (Hammond et al, 2009, p. 60). In the version of EDC3100 that I taught last year, there wasn’t a lot of opportunity for students to engage in reflection, and even less opportunity to receive feedback on those reflections. Hopefully in 2013 we can change this. Increasing the levels of reflection and feedback will hopefully improve learning. Feedback must be good, since Hattie identifies it as what works best for improving learning.

Question: What are the good insights/models/theories etc around designing activities that encourage reflection?

Learning to be

EDC3100 is a professional course. It’s in the 3rd year of a four-year degree program that is hopefully producing effective teaching professionals by the end. This is why I’m interested in how EDC3100 can help the pre-service teachers taking the course with “learning to be” a teacher. Seely Brown (2008, p. xii):

perhaps even more significant, aspect of social learning, involves not only “learning about” the subject matter but also “learning to be” a full participant in the field. This involves acquiring the practices and norms of established practitioners in that field or acculturating into a community of practice.

I’m hoping EDC3100 can encourage/enable the students to engage in social learning with other students and join in broader teacher networks. To really engage with the task of “learning to be” a teacher.

Diversity, personalisation and pedagogies

During 2013 there are likely to be 300+ students taking the course. They will be a very diverse cohort on a number of criteria. For example, in terms of technical skill last year’s cohort included everything from an ex-IT development professional through to technophobes. As the graph below shows, students’ ages ranged from 18 through 60, though over 50% of the students were traditional 3rd-year students straight from high school, aged about 20.

Age distribution

There was also significant diversity in the type of teacher they were preparing to be. EDC3100 last year included pre-service teachers training to teach everything from pre-school, primary school and high school through to vocational education, not to mention the different disciplines and knowledge areas these pre-service teachers will cover. Barton and Haydn (2006, p. 267) suggest that:

Training needs to be differentiated to take into account the differing ways in which ICT helps teachers of different subjects to improve teaching and learning

EDC3100 Sector breakdown

As it happens the university that employs me to teach EDC3100 has adopted “Personalised learning” as one of its four overarching themes for its 2022 Vision. Some of the institutional words around this theme include

We promise to partner with learners in the pursuit of their study objectives regardless of their background, location or stage in life.

Through innovation USQ harnesses emerging technologies to enable collaborative teaching and individualised learning for its students regardless of their location or lifestyle.

We understand our remarkably diverse and global student population. USQ seeks to accommodate individual learning styles and to provide students with personalised adaptive learning support. We are known for our capacity to research and anticipate technological advances and to capitalise on these.

Question: How does this type of sentiment match some of the broader ideas of personalised learning (e.g. this one)?
Question: How far can you take personalised learning within an institutional context with a significant focus on consistency and quality assurance (where consistency and quality are often equated)?

While perhaps not going quite as far as suggested by this high school student, there’s a lot to be said for aiming EDC3100 toward this goal:

Let’s bring learning back to the learners. Why are we disregarding the brilliant work of progressive educators like John Dewey, Maria Montessori, Jean Piaget, and Paulo Freire? We need to allow students to craft their own learning experiences through projects, apprenticeships, and hands-on engagement. … My advice: Let’s get over the fads and understand that learning is best done through doing, creating, and exploring.

The dead viola player?

In critiquing xMOOCs and Learning Analytics, Michael Feldstein identifies another reason driving the changes in EDC3100. Feldstein breaks the class experience into three parts and talks about how well they scale with technology. The parts are:

  1. Content transmission;
    Which scales well; Feldstein gives Khan Academy as an example. Given that content is free and abundant, I’m hoping the teaching staff in EDC3100 can avoid duplicative content creation, e.g. we won’t be giving lectures in the traditional sense.

    Questions: What (if any) content required for EDC3100 is not free and abundant?
    How do we scaffold students’ engagement with the abundant content?

  2. Assessment;
    Which xMOOCs are scaling with MCQs and perhaps somewhat with peer assessment. But which doesn’t necessarily scale well if you’re trying to assess in areas that don’t come with “cut-and-dry answers”.
  3. Remediation.
    i.e. responding helpfully to students when they are stuck. The ability to identify “not only what a student is getting wrong on an assessment but why she is getting it wrong”.

It’s remediation that Feldstein identifies as what both learning analytics and xMOOCs don’t do well. He also suggests it was dead long before xMOOCs:

My guess is that college professor productivity has risen in the last decade, if all you mean by “productivity” is number of butts in classroom seats per teacher. The cost has been less time to respond to individual student needs.

I’m hoping EDC3100 can revive the viola player a touch. There will be a limit to how much the EDC3100 teaching staff can do this, due to institutional workload and funding models and simply the sheer number of students. This is where Stephen Downes’ comment on Feldstein’s post comes in, talking about how the cMOOCs addressed the problem of the dead viola player:

You don’t need an expert for this – you just need someone who knows the answer to the problem. So we have attempted to scale by connecting people with many other students. Instructors are still there, for the tough and difficult problems. But students can help each other out, and are expected to do so.

Perhaps the crux of the problem for me with EDC3100 is framed by Stephen as

we need to structure learning in such a way as to make asking questions easier, and as necessary, to provide more incentives to people to answer them.

There are numerous barriers to this. Cousin and Deepwell (2005) identify the learner problem, in that they “arrive in the networked setting with ‘congealed practices’ from more didactic educational contexts”. Shifting the students out of their congealment can be hard and unsuccessful work. Not least because the institutional context has its own problems with congealment that can create a fair bit of cognitive dissonance. Not to mention that the teaching staff bring a level of congealment; when I am thinking about what I will be doing as this course is happening, I find myself slipping back into my own set of congealed practices. Finally, there is just the question of discovering and evaluating the practices that will effectively replace the congealed ones.

This is where I'm hoping to benefit from the work of others such as Kop, Fournier and Sui Fai Mak (2011) and Weller (2011).

Question: What are some of the other useful sources of design insight?
Question: When am I going to stop navel gazing and start actually designing?

In arguing that MOOCs fundamentally misperceive how teaching works Mark Guzdial suggests that “the main activity of a teacher is to orchestrate learning opportunities, to get students to do and think. A teacher does this most effectively by responding to the individuals”. How can we do this effectively in the context of EDC3100?

Synchronicity

As it happens there is currently a great deal of activity in this area. Other folk are putting together MOOCs and open courses and sharing the whys, wherefores and what. These offer some possibilities for learning/borrowings. Here’s an initial summary.

#etmooc

Alec Couros – one of the inspirations for much of what’s happening with EDC3100 – is involved with #etmooc – Educational Technology & Media. Given this connection and the topic of #etmooc being very closely related to the topic of EDC3100 I’m aiming to engage with it in a number of ways.

Alec, as is his wont, appears to have been very open in gathering input for the design of #etmooc. The evidence of this can be seen in posts from Lisa Lane, Alan Levine and others. Lisa’s post summarises some of the resources (including a Google community from which I’ve drawn a couple of ideas in one quick skim and the planning Google doc) and questions considered by the #etmooc designers. It’s interesting in terms of my lack of connections in certain areas to note that I’m only now becoming aware of some of Alec’s much earlier posts about #etmooc.

Unlike the typical weekly course (and cMOOC) schedule, Lisa and Alan suggest something more open: topics that have a launch date, a basic introduction and then a community development process where the aims and resources for the topic are developed by the community. There's much to like in this idea, but it probably falls too far outside the constraints of EDC3100. Not sure how the institution might take this approach, and I think the students may have some queries about exactly what my role is in the course. EDC3100 will need to keep a bit more of the traditional schedule, but how much can we open it up from there?

Question: How much can I push the content of the course out of the institutional Moodle instance and into more open technologies?

Some insights/ideas from all of the above #etmooc related resources

  • If a course is successful in getting students engaged with networks, then the course doesn’t have an end date.
  • Large synchronous presentations have value for “introducing/advancing ideas and for tool demonstrations”.
  • With a distributed approach the effective use of tagging is necessary for aggregating networked conversations.
  • Designing interactions is key or, as one of the comments put it, clustering opportunities. Helping people orient themselves by providing tasks/opportunities to find “people who you want to learn with”.
  • Entry to Twitter and this type of approach in general is difficult.
  • The idea of using git projects as a platform.
  • The influence of 23 things as a design aid or something like 100 ideas.
  • The importance of a central course space for orientation and perhaps community?
    It would also match students' expectations.
    Question: If I create a central course space in both the institutional Moodle instance and in a WordPress (or some other open site) blog with the same information, which would students use the most?

OLDS-MOOC

Will be interesting to compare the Learning Design for a 21st Century Curriculum MOOC with #etmooc. Both are starting about the same time.

Emerging Learning Technologies

Curtis Bonk has released the syllabus for his “R685, Emerging Learning Technologies” course which is also closely related to the purpose of EDC3100. (It is interesting to see the evolution of the course from the 2012 version, especially in the objectives. One of the assessment options was to evaluate changes in syllabi from the course since 1990.) But one difference is that EDC3100 tends to spend more time explicitly on pedagogy. Both #etmooc and R685 don't, at least as indicated by their course topics.

Some insights/ideas from R685

  • The grading scheme.
    Interesting in the flexibility that it gives the students. Six set tasks which each give points, up to a total of 360 points. Your grade is based on the number of points you achieve. Perhaps a step too far here.
  • Tidbit reflections.
    A large collection of small online articles is provided. Students are asked to read 3-4 a week. The assessment is to submit a list of the top 20 (and the top 2-3 videos) and a reflection on what was learned from them. The list of resources might be useful.
  • Discussion moderator.
    Students sign up to take on the role of discussion starter based on the week’s readings.
  • Participation is marked both for synchronous sessions and posts etc.
  • The qualitative criteria for posts.
    1. Diversity (some variety in ideas posted, and some breadth to exploration);
    2. Perspective taking (values other perspectives, ideas, cultures, etc.);
    3. Creativity (original, unique, and novel ideas);
    4. Insightful (makes interesting, astute, and sagacious observations);
    5. Relevancy (topics selected are connected to course content); and
    6. Learning Depth/Growth (shows some depth to thinking and elaboration of ideas).
  • Criteria for the reflective paper grading are also interesting
    1. Relevancy to class: meaningful examples, relationships drawn, interlinkages, connecting weekly ideas.
    2. Insightful, Interesting, Reflective, Emotional: honest, self-awareness, interesting observations
    3. Learning Depth/Growth: takes thoughts along to new heights, exploration, breadth & depth, growth.
    4. Completeness: thorough comments, detailed reflection, fulfills assignment, informative.
  • The collection of weekly resources.
    Useful because of the list of resources to pick over, but also as an example of the “MOOC-like” approach I’m thinking for EDC3100. Provide a collection of useful resources and have the students read and reflect. Perhaps search for some more. Interesting division into Readings and Tidbits.
    Question: How much and what learning activities should be set each week?

Should remember that R685 is a graduate course, 3100 is undergraduate. There is a large written component (as in formal report) in R685, will aim to avoid that.

Educators as Social Networked Learners

In Educators as Social Networked Learners Jackie Gerstein describes her graduate course – Social Networked Learning. Some thoughts/inspirations from this post include

  • Use of alternate media (e.g. Glog) for assessment/to capture learning.
    Certainly something I wish to explore in 3100. Part of the students learning by doing: they need to use the ICT tools we're talking about to show and share their learning.
  • The Twitter assignment has some suggestions for how 3100 might use Twitter, and also some of the other tools
  • The PLE assignment’s use of Mindmap might be another way to balance broadening awareness of different tools.

Social media literacies syllabus

Via Jackie Gerstein’s post I come to Howard Rheingold’s Syllabus for Social Media Literacies. Like this

Literate populations are becoming the driving force that shape new media, just as they were in the eras following the invention of the alphabet and printing press. What broad populations know now, and the ways they put that knowledge into action, will shape the ways people use and misuse social media for decades to come.

and the five social media literacies

  1. attention;
  2. crap detection;
  3. participation;
  4. collaboration; and,
  5. network awareness.

Reminder: revisit this work to explore its applicability to 3100 and observe what comes out of #etmooc

Importantly, the following succinctly captures important aspects of the “design principles” I’d like to use

chose texts that can offer analytic tools, explanatory frameworks, and competing perspectives — the basic building blocks for teachers and learners to use.

and he makes an important point about a likely problem with this type of approach, i.e.

College students have been strongly socialized to do the homework for each class the night before it is due — a method that doesn’t work when discourse, not a discrete product like a term paper, is the goal. The necessity for more frequent informal discourse through forums, blogs, comments, usually needs to be repeated and reinforced.

Some other important points/ideas etc.

  • The criteria and resources on this introduction to forums could be useful (though I’ve seen similar elsewhere)

    4 Points – The posting(s) integrates multiple viewpoints and weaves both class readings and other participants’ postings into their discussion of the subject.
    3 Points – The posting(s) builds upon the ideas of another participant or two, and digs deeper into the question(s) posed by the instructor.
    2 Points – A single posting that does not interact with or incorporate the ideas of other participants’ comments.
    1 Point – A simple “me too” comment that neither expands the conversation nor demonstrates any degree of reflection by the student.
    0 Points – No comment.

  • The idea of students pre-submitting a substantial question that they are prepared to address in a f-t-f session.
  • Teams of (2) students creating mindmaps from key readings for the week.
  • I wonder how well the instructor's introduction would match current requirements of my institution?

Introduction to openness

And then there is David Wiley's course “Introduction to Openness in Education” being run on Canvas. Interesting insights and ideas include

  • The use of badges rather than grades.
    It’s not a graded course, but interesting to see the badge implementation. It’s one of the considerations for 3100 and the use of blog posts to “submit” the evidence. The course itself has a module dedicated to open assessment and open badges which could prove useful perhaps as an example of transformation for the 3100 students.

    Question: How would the awarding of badges work with BIM and by extension Moodle?

  • Explicitly makes the point that the learning artifacts are the students' and are stored outside the LMS.
    Links this to a constructionist type quote from Terry Anderson.
  • The Anderson quote is from a post “Connectivying” your course. It draws on the work by Anderson and Dron in two papers, including this one
    Describes two defining characteristics of connectivism

    • Construction, annotation and maintenance of learning artifacts that are open and contribute to knowledge beyond the course.
    • Students be given the opportunity, incentive and support to form networks with others, including outside the course.
  • The technology requirements are likely to be very similar to 3100, though perhaps not the YouTube account? But perhaps that’s my textual prejudice showing through.
  • The structure of the course in Canvas looks very much like something you could do in Moodle.
    Though much of it seems to resemble the “ramble” approach from last year. i.e. it appears Wiley has provided much of the content. Not something I’d like to replicate.

A blog post on new pedagogy that includes pointers to lots of examples
http://www.contactnorth.ca/trends-directions/evolving-pedagogy-0/new-pedagogy-emergingand-online-learning-key-contributing

So what shape should this course take?

The aim should be to keep it open and flexible, engage the students in innovative applications of technology and with the world outside, and be reflective…??

Finding the pedagogy

From Rheingold comes a link to the Instructor's guide to process-oriented guided-inquiry learning. Much of this is known stuff, but I'm repeating it here to remind myself.

Near the start it quotes 5 key ideas about learning from current research in the cognitive sciences, that people learn by

  • constructing their own understanding based on their prior knowledge, experiences, skills, attitudes, and beliefs.
  • following a learning cycle of exploration, concept formation, and application.
  • connecting and visualizing concepts and multiple representations.
  • discussing and interacting with others.
  • reflecting on progress and assessing performance.

Ahh, that comes from Bransford et al (2000). That resonates with earlier thoughts. The POGIL document talks about a three stage learning cycle

  1. Exploration;
    Provide students a model (very broadly defined) to examine or a set of tasks to follow that embody what is to be learned. “The intent is to have the students encounter questions or complexities that they cannot resolve with their accustomed way of thinking” Guided by critical-thinking or key questions.

    Talks about three types of questions

    1. Directed questions – point to obvious discoveries about the model.
    2. Convergent questions – require synthesis of relationships from new discoveries and prior knowledge to develop new concepts or deeper conceptual understanding.
    3. Divergent questions – open-ended, without unique answers.
  2. Concept invention/formation;
    Also called introduction, the idea is that the learners develop insights into a construct that helps understanding develop.
  3. Application.
    i.e. do something with that new understanding.

Then come some implications for teaching

  • Organising knowledge in long-term memory
    Pattern recognition is enhanced by asking for comparisons and contrasts with problems in different contexts; identifying patterns in concepts/problems/solutions; classifying problems in terms of concepts.

    Have students identify relevant issues and concepts, discuss why relevant and plan solutions. Brainstorming.

    Give time to develop deep understanding.

  • Overcome limitations of working memory.
    Help chunk and develop knowledge schemata. Encourage them to draw diagrams.
  • Analysing problems and planning solutions.
    Use an explicit problem-solving methodology. Instruct them in how to use it. Ask them to explain what was done. Compare their approaches with that of the expert.
  • Benefiting from meta-cognition.
    Assessing the approaches of others and identifying strengths, areas for improvement and insights is useful.
  • Transfer knowledge for use in new contexts.
    Have students talk about the relevance and usefulness of what they have learned. Figure out when and where it can be used.

Four roles for the teacher

  1. Leader – creating the environment, explaining the lesson etc.
  2. Monitor/assessor – touches on the dead viola player/remediation section above.
  3. Facilitator – interventions, asking questions etc.
  4. Evaluator

And a table

Steps in the process, their 7E equivalents and the component of the activity:

  • Identify a need to learn (7E: Engage) – An issue that excites and interests is presented. An answer to the question Why? is given. Learning objectives and success criteria are defined.
  • Connect to prior understandings (7E: Elicit) – A question or issue is raised, and student explanations or predictions are sought. Pre-requisite material is identified.
  • Explore (7E: Explore) – A model or task is provided and resource material identified. Students explore the model or task in response to critical-thinking questions.
  • Concept invention, introduction and formation (7E: Explain) – Critical-thinking questions lead to the identification of concepts and understanding is developed.
  • Practice applying knowledge – Skill exercises involve straightforward application of the knowledge.
  • Apply knowledge in new contexts (7E: Elaborate and extend) – Problems and extended problems require synthesis and transference of concepts.
  • Reflect on the process (7E: Evaluate) – Problem solutions and answers to questions are validated and integrated with concepts. Learning and performance are assessed.

Question: What changes would be required to this table to better encourage the formation of the community and culture aspects mentioned by Gardner and the other features I’ve talked about above? Not to mention some other differences.

Casey and Evans (2011) cite/describe Nuthall’s (2007, p. 36) four premises for learning

  1. Students learn what they do, and what they are learning is what you see them doing: writing notes, coping with the boredom without complaining, and later, memorizing headings and details they only partially understand. What they do in the classroom, day after day, is what they become experts at.
  2. Social relationships determine learning. It’s very important to remember that much of what students do in the classroom is determined by their social relationships. Even in the teacher’s own territory, the classroom, the student’s primary audience is his or her peers. More communication goes on within the peer culture than within the school and classroom culture.
  3. Effective activities are built around big questions. If we want to design effective learning activities, we must carefully monitor what students are gaining as they engage in focused learning. We have to spend a considerable amount of time and resources monitoring what they are understanding and learning as well as designing and carrying out these activities. Taking the time and providing the resources needed to design effective learning activities means covering much less of the formal curriculum. To justify doing this, we must make sure that the outcomes of these learning activities are significant not only in the official curriculum but also in the lives and interests of the students.
  4. Effective activities are managed by the students themselves. The ideal learning activity, in line with the previous three premises, has the following characteristics:
    • It focuses on the solution of a major question or problem that is significant in both the discipline and the lives and culture of the students;
    • It engages the students continuously in intellectual work that is appropriate in the discipline;
    • It provides teachers with opportunities, as the class engages in solving the smaller problems, to monitor individual students’ evolving understanding of the content and procedures.

There’s some useful insights there.

Gardner Campbell’s Narrate, Curate, Share piece also captures some of the aim. The aim is for students to be engaging in telling and creating their stories of exploration and learning. Not to be blithely fulfilling requirements. This blog post from Gardner further explores his perspective on this which breaks down into (though he does express his fear of the dangers of being too analytical about this)

  1. Distinguish between a requirement and an assignment.
    Blogging is required, but not assigned. Specify how many blogs/comments are required (he talks about the dangers of and need for a number) but not talk about why or about what. This warns against some of my ideas about being quite specific about marking criteria and purpose. Which has the danger of becoming “new wine in old bottles”.
  2. Encourage/make visible the community-culture continuum, make it accessible to thought.
    In Gardner’s words

    So when I talk to my students about blogging, I try very hard to emphasize how they’re likely to experience both community (tighter bonds with their fellow learners in the course of study) and culture (participation in the greater blogosphere, with unpredictable and often lovely results).

    When I read this, I’m thinking about how I can encourage recognition of and the opportunity to engage in this for these students.

    The culture aspect connects with the “learning-to-be” point above.

  3. Be the change you wish/relate that change.
    Use your own personal experience and example of blogging.

Oh I like this from Gardner’s post

Blogs are hydroponic farms for heuristics, hypothesis-generation, metacognition that continually moves out to other metacognizers and back to one’s own reflection.

which is part of a response to the fear that student blog posts will show their ignorance

To do

read

Kirschner, P., Strijbos, J., Kreijns, K., & Beers, P. J. (2004). Designing electronic collaborative learning environments. Educational Technology Research and Development, 52(3), 47-66.

Doering, A., Miller, C., & Scharber, C. (2012). No such thing as failure, only feedback: Designing innovative opportunities for e-assessment and technology-mediated feedback. Journal of Interactive Learning Research, 21(1), 65–92.

References

Bransford, J., Brown, A., & Cocking, R. (2000). How people learn: brain, mind, experience, and school. Washington, D.C.: National Academy Press.

Larkin, K., Jamieson-Proctor, R., & Finger, G. (2012). TPACK and Pre-Service Teacher Mathematics Education: Defining a Signature Pedagogy for Mathematics Education Using ICT and Based on the Metaphor “Mathematics Is a Language”. Computers in the Schools, 29(1-2), 207–226. doi:10.1080/07380569.2012.651424

Translating Learning into Numbers: A Generic Framework for Learning Analytics

The following is a summary of and commentary on Greller and Drachsler (2012). I come to this via the JISC/CETIS report I summarised yesterday

Thoughts

I liked this paper because it serves a purpose for me. A purpose that I think may well be useful to a number of others. It gives a framework that seems to cover most of the factors to be considered when designing the use of learning analytics (LA). Though I will need some more reflection and experimentation to consider how complete it is. The paper mentions most of the important limitations or questions over LA that are often overlooked and provides recommendations for areas for future research. Importantly, the framework offers a foundation/lens through which to compare and contrast different approaches. All useful features for a new area that has a touch of the fad about it.

Abstract

The abstract of Greller and Drachsler is

With the increase in available educational data, it is expected that Learning Analytics will become a powerful means to inform and support learners, teachers and their institutions in better understanding and predicting personal learning needs and performance. However, the processes and requirements behind the beneficial application of Learning and Knowledge Analytics as well as the consequences for learning and teaching are still far from being understood. In this paper, we explore the key dimensions of Learning Analytics (LA), the critical problem zones, and some potential dangers to the beneficial exploitation of educational data. We propose and discuss a generic design framework that can act as a useful guide for setting up Learning Analytics services in support of educational practice and learner guidance, in quality assurance, curriculum development, and in improving teacher effectiveness and efficiency.

Furthermore, the presented article intends to inform about soft barriers and limitations of Learning Analytics. We identify the required skills and competences that make meaningful use of Learning Analytics data possible to overcome gaps in interpretation literacy among educational stakeholders. We also discuss privacy and ethical issues and suggest ways in which these issues can be addressed through policy guidelines and best practice examples.

Promises lots of relevant reading.

Introduction

Some general background on the growth of big data.

Argues that electronic data mining makes gathering information easy and is resulting in data that is comparable to observational data gathering. Quotes Savage and Burrows (2007) suggesting it will “enhance our understanding and highlight possible inconsistencies between user behaviour and user perception”.

LMS and other systems gather data, but observes (p. 43)

exploitation of the data for learning and teaching is still very limited. These educational datasets offer unused opportunities for the evaluation of learning theories, learner feedback and support, early warning systems, learning technology, and the development of future learning applications.

Critical dimensions of learning analytics

Questions for research from LA

  • Technically-focused
    • compatibility of educational data sets
    • comparability and adequacy of algorithmic and technological approaches
  • “softer” issues – defined as “challenges that depend on assumptions being made about humans or the society in general”
    • data ownership and openness
    • ethical use and dangers of abuse
    • demand for new key competences to interpret and act on LA results

Identifies six critical dimensions (soft and hard) as a descriptive framework, hoping to later develop it into a domain model or ontology. Deduced from discussions in the research community through a general morphological analysis approach. The framework is intended “to be a guide as much as a descriptor of the problem zones” and is thus labelled a design framework. The framework consists of six critical dimensions, with each dimension having a number of instantiations. The presented instantiations are not exhaustive. The framework is presented graphically in the paper; the following is a tabular representation.

These are critical dimensions since it is proposed that any fully formulated LA design should have at least one instantiation for each dimension. The paper goes on to discuss each dimension in more detail.

Each dimension with its instantiations:

  • Stakeholders – Institution; Teachers; Learners; Other (researchers, service providers and government agencies)
  • Internal limitations – Competences; Acceptance
  • External constraints – Conventions; Norms
  • Instruments – Technology; Algorithm; Theories; Other
  • Data – Open; Protected
  • Objectives – Reflection; Prediction

The authors point to a paper where they illustrate the use of the design framework by describing the SNAPP tool and its use. It is suggested that the framework can be used

  1. As a checklist when designing a purposeful LA process;
  2. As a shareable description framework to compare context parameters with similar approaches in other contexts, or for replication of the scientific environment.
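
To make the checklist use (point 1) concrete, here's a minimal sketch in PHP. The structure and the chosen instantiations are entirely illustrative, not something from the paper:

[sourcecode lang="php"]
<?php
// Hypothetical representation of a LA design against the six
// critical dimensions; the values are illustrative instantiations.
$design = array(
    'stakeholders'         => array('teachers', 'learners'),
    'objectives'           => array('reflection'),
    'data'                 => array('protected'),
    'instruments'          => array('technology', 'algorithm'),
    'external constraints' => array('conventions'),
    'internal limitations' => array('competences'),
);

// A "fully formulated" design should have at least one
// instantiation for every dimension.
foreach ($design as $dimension => $instantiations) {
    if (empty($instantiations)) {
        echo "Incomplete design: no instantiation for '$dimension'\n";
    }
}
[/sourcecode]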

Stakeholders

  • Data clients – beneficiaries of the LA process who are entitled and meant to act upon the outcome.
  • Data subjects – suppliers of the data through their browsing and interaction behaviour.

As shown in the above table, the identified stakeholders are learners, teachers, other and institutions.

I wonder if “institution” is too broad a category. Who is the institution? I think there are perhaps a range of different stakeholders here: the academic in charge of a program, the head of school/line manager of a group of academics, the Dean/Head of faculty, the Head of the central L&T division, the Head of IT etc. Each of these stakeholders will have slightly different requirements/purposes behind their use of analytics. e.g. an IT manager might want to use analytics to decide whether a particular piece of software should continue to be supported.

Presents a triangle diagram representing a hierarchy of stakeholders and the information flow between them. Emphasises the importance of self-reflection at each level and the possibility of research across the hierarchy. Does mention peer evaluation as another type of information flow (other than hierarchical).

Objectives

“The main opportunities for LA as a domain are to unveil and contextualise so far hidden information out of the educational data and prepare it for the different stakeholders”

  1. Reflection
    The data client reflecting on and evaluating their own data to obtain self-knowledge. Linked to the “quantified self”. Can also be reflection based on the datasets of others, e.g. a teacher reflecting on their teaching style. Interesting quote

    Greatest care should however be taken not to confuse objectives and stakeholders in the design of a LA process and not to let, e.g., economic and efficiency considerations on the institutional level dictate pedagogic strategies, as this would possibly lead to industrialisation rather than personalisation.

    and another one

    LA is a support technology for decision making processes. Therein also lies one of the greatest potential dangers. Using statistical analytic findings is a quantitative not a qualitative support agent to such decision making. We are aware that aligning and regulating performance and behaviour of individual teachers or learners against a statistical norm without investigating the reasons for their divergence may strongly stifle innovation, individuality, creativity and experimentation that are so important in driving learning and teaching developments and institutional growth.

  2. Prediction
    Modelling and predicting learner activities can lead to interventions and adaptive services/curricula. The source of much hope for efficiency gains “in terms of establishing acts of automatic decision making for learning paths, thus saving teacher time for more personal interventions”.

    I do wonder how much management see the efficiency gains as a way to reduce teacher time (full stop)?

    Raises the ethical problems – “judgements based on a limited set of parameters could potentially limit a learner's potential” – and the risk of re-confirming established prejudices. Mentions the problem of the diversity of learning making it difficult to make judgements between different learners.

    Analytics itself is seen as pedagogically neutral, but specific technologies are not.

Educational data

LA uses LMS and other data. Institutions already have a lot of this. Linking these can be useful. Most data in institutions is protected. Researchers are finding it hard to get access to data to test their methods.

Anonymisation is seen as one way to get “open data”. Verbert et al (in press) present a review of existing educational datasets. How open data should be requires wider debate.

Existing data initiatives include

  • dataTEL challenge – challenge research groups to release data
  • PSLC dataShop open data repository of educational data sets from intelligent tutoring systems.
  • Linked education – open platform to promote the use of data for educational purposes.

Suggests it’s somewhat bizarre that commercial companies automatically assume ownership of user data when you register on their site, but educational institutions operate on the default that everything is protected from virtually everyone. Not sure of this comparison, but it’s a consideration.

The question is which employees of the institution are included in the data contract between a learner and the institution. Suggests this is not yet resolved. Which is a constraint on inner-institutional research and wider institutional use. Other issues with LA datasets

  • Lack of common data formats.
  • Need for version control and a common reference system
  • Methods to anonymise and pre-process data for privacy and legal protection rights. (Drachsler et al, 2010)
  • Standardised documentation of datasets.
  • Data policies to regulate how data sets can be used.
  • The problem that data sets include data that is not context-free and free of error. (e.g. teachers setting up test student accounts).
  • Related to the last point is “enmeshed identities” e.g. where users are sharing a device for access.
  • Perhaps most importantly is the “on-going challenge to formulate indicators from the available datasets that bear relevance for the evaluation of the learning process”

Instruments

LA relies on information retrieval technologies including (refs in original): educational data mining, machine learning, statistical analysis techniques and other techniques such as social network analysis and natural language processing.

They also include conceptual instruments: theoretical constructs, algorithms or weightings. i.e. to some extent these provide ways to “translate” raw data into information

Quotes Hildebrandt (2010) that “invisible biases, based on … assumptions … are inevitably embodied in the algorithms that generate the patterns” and makes the related point “LA designers and developers need to be aware that any algorithm or method they apply is reductive by nature in that it simplifies reality to a manageable set of variables (cf. Verbert et al., 2011).”

External constraints

External constraints broken up into

  • Conventions – ethics, personal privacy and similar socially motivated limitations.
  • Norms – restrictions due to laws or specific mandated policies or standards.

They don't elaborate on the ethics aspects, leaving that to Bollier (2010) and Siemens (2012). But then they get into the following.

Apparently data gathered about a person (before it is anonymised) currently belongs to the owner of the data collection tool and also the data client and beneficiary. The problem is that increasingly information is being stored about individuals without their approval or awareness. The ethical principle of “informed consent” is under threat (AoIR, 2002)

Returns to the problem that institutions have lots of data collected about learners, but that it is managed by different groups within the institution and sharing between these groups is questionable.

Also mentions the ethical issues associated with post-analytic decision making, i.e. the decisions (which can be diverse) taken based on data have ethical implications. Also mentions this problem when LA is used by institutions to quality assure the performance of teaching staff. Similarly, the application of such algorithms can limit/hurt innovations that “diverge from the statistical mean”.

Internal limitations

Limitations of a more human origin

  • Competencies
    Survey of LA experts found that only 21% of 111 respondents felt learners would have the competency to interpret LA results and figure out what to do (Drachsler & Greller, 2012). I don’t imagine the percentage for teachers would be much greater?

    Therefore, the optimal exploitation of LA data requires some high level competences in this direction, but interpretative and critical evaluation skills are to-date not a standard competence for the stakeholders, whence it may remain unclear to them what to do as a consequence of a LA outcome or visualisation

    Makes the point that the trend toward visualisation of LA outcomes can “delude the data clients away from the full pedagogic reality”. “superficial digestion of data presentations can lead to the wrong conclusions”. Data not included in the LA approach can be equally, if not more, important than what is included. e.g. relying solely on LMS quantitative data.

  • Acceptance
    If there’s no acceptance, any insight can simply be rejected. Needs to be more focus on empirical evaluation methods of LA tools (Ali et al, 2012) and on advanced technology acceptance models (e.g. TAM). Suggests TAM “could be an interesting approach to evaluate the emergent analytic technologies for all stakeholders described in our framework and also the needed implementation requirements to guarantee successful exploitation”

The place of pedagogy in the learning analytics framework

LA promise is to offer new methods and tools to “diagnose learner needs and provide personalised instructions to better address these needs”. But whether it does this or simply clusters people into behaviouristic “learner models” is not yet clear.

More empirical evidence required for which pedagogical theory LA serves best. Pardo & Kloos (2011) give a critical reflection. LA still infers indirectly about active cognitive processes.

Pedagogy is not included in the framework; rather it is implicitly contained in the input datasets used. LA sees the pedagogy through the data. Pedagogy is also explicitly addressed in the goals and objectives of the LA designer.

Conclusion and outlook

All six dimensions need to be considered for LA designs. More work on ethics, data curation and ownership needs to happen at universities and in legislation to reduce the risks associated with the application of LA.

Use of the framework would allow scientific comparison, replication, external analysis, and alternative contextualisation.

The obvious next research step: evaluate a number of LA research descriptions and test for consistency in the descriptions.

Interesting

We therefore believe that it will be of critical importance for its acceptance that the development of LA takes a bottom-up approach focused on the interests of the learners as the main driving force.

The relationship between LA and theories of learning, teaching, cognition and knowledge remains open and requires more research.

References

AoIR (Association of Internet Researchers) Ethics Working Committee, & Ess, C. (2002). Ethical decision-making and Internet research: Recommendations from the AoIR Ethics Working Committee. Retrieved from AoIR website: www.aoir.org/reports/ethics.pdf

Drachsler, H., Bogers, T., Vuorikari, R., Verbert, K., Duval, E., Manouselis, …Wolpers, M. (2010). Issues and considerations regarding sharable data sets for recommender systems in technology enhanced learning. Elsevier Procedia Computer Science, 1(2), 2849–2858.

Greller, W., & Drachsler, H. (2012). Translating Learning into Numbers: A Generic Framework for Learning Analytics. Educational Technology & Society, 15(3), 42–57.

Pardo, A., & Kloos, C. D. (2011). Stepping out of the box: Towards analytics outside the learning management system. Proceedings of the 1st International Conference on Learning Analytics and Knowledge (pp. 163–167). New York, NY: ACM.

Siemens, G. et al. (2012). Learning analytics: Guidelines for ethical use. Shared effort of the learning analytics research community. Retrieved March 23, 2012, from http://bit.ly/wMDmLW

Analytics for Learning and Teaching

The following is a summary and some reflection upon a CETIS report (Harmelen and Workman, 2012) titled Analytics for Learning and Teaching. One of the reasons for reading this is the current consideration around reworking and resubmitting an unsuccessful OLT application around learning analytics.

This report is the third in a series of reports on analytics produced by JISC/CETIS, i.e. some UK folk who have been associated with e-learning/educational technology for some time. I'm only vaguely aware of the details of who and what they are, but they have made useful contributions in the past. Add to this a topic that is of current interest to me and my hopes are high for this.

Summary

I’m disappointed. I had thought this would engage with analytics for “learning and teaching”. Instead it’s a general overview of the institutional requirements/considerations for analytics. An overview that would have worked almost as well for a completely different industry sector. Sure there was a lot of university specific information, but most of it was at a management level. Very little actually dealt with analytics for learning and teaching.

In the conclusion section it’s stated

Most importantly, for learning analytics (as opposed to academic analytics) there is a strong link between pedagogy and analytics.

There’s almost nothing in the report that supports or links to this statement. Just after this quote it says

Any wise institution will examine where it wishes to move to pedagogically, and consider if any proposed learning analytics solution will hinder or enhance its pedagogic development.

A statement that scares me and seems to really represent the top-down, systems based approach taken in the report. Is there anything worse for learning and teaching than for a university to decide on an institutional pedagogical approach?

That said, I’ve got a couple of useful points and some additional literature references to follow up. So not wasted.

Executive summary

Starts by focusing on the use of analytics for learning and teaching and not for the “optimisation of activities around learning and teaching, for example, student recruitment”

Lists the following as exemplars

  • Identify at risk students so positive interventions can be taken.
    Which of course is something that appears to be the focus of analytics projects, but not at a learning and teaching level. It's the central divisions that are engaged with this.
  • Provide recommendations to students about activities.
  • Detect the need for and measure the results of pedagogic improvements.
  • Tailor course offerings.
    I wonder how much evidence they have of this?
  • Identify teachers who are performing well and those that need help.
    Mmm, wonder if this crosses a line or two.
  • Assist in the student recruitment process.
    Mmm, doesn’t this contradict the point above?

Conclusions the paper draws

  • UK higher ed under uses learning and academic analytics.
  • Risks to the adoption of analytics: measurements that aren’t useful; staff and students lack the knowledge to use analytics.
  • Risk to institutions in delaying introduction of analytics.
    Positioned as the “most compelling risk to amortise” which sounds a bit alarmist and faddish. A bit of FOMO?
  • Institutions vary in analytics readiness and maturity.
  • There are different scales of analytics projects.
  • Small limited-scope analytics projects are a good way to start – these enable institutions to develop staff skills and raise the profile of analytics.
    This perhaps could be an argument for the grant project?
  • There are commercial solutions that will work, but they should be evaluated.

At this stage, this is not necessarily sounding all that promising.

Overview: Analytics in education

Returns to a broader view of analytics “we examine the use of analytics in education with a bias towards providing information that may help decision makers in thinking about analytics in their institutions.”

Do emphasise that “analytics exists as part of a socio-technical system where human decision-making and consequent actions are as much a part of any successful analytics solution as the technical components”.

Varieties of analytics

Quotes two useful definitions

Analytics is “the use of data, statistical analysis, and explanatory and predictive models to gain insights and act on complex issues.” [Bichsel 2012].
Others have a broader view of analytics. Davenport et al [2010] characterise analytics as answering questions that generate both information and insight (table 1)

Table 1 from Davenport follows:

             Past                         Present                        Future
Information  What happened?               What is happening now?         What will happen?
Insight      How and why did it happen?   What's the next best action?   What's the best/worst that can happen?

The Bichsel (2012) quote is useful because it combines the range of data, analysis and models, i.e. the inclusion of data. In learning analytics – especially educational data mining – the interest is in the fancy, whiz-bang algorithms that make great predictions. Simply providing students and staff with access to the data seems to be ignored. This is a topic for another post.

Okay, back to learning analytics

Learning analytics is the application of analytic techniques to analyse educational data, including data about learner and teacher activities, to identify patterns of behaviour and provide actionable information to improve learning and learning-related activities.

Interesting this is back to analysing the data, it doesn’t include just providing the data.

Includes the Long and Siemens table. Repeats the exemplar list from above, including student recruitment.

Positions educational data mining as one of a range of techniques that can be applied to learning and academic analytics.

Has a Google trends image showing how learning analytics has taken off from 2007 on, while academic analytics and educational data mining have stayed constant. This would appear to provide good evidence for “learning analytics” becoming a fashion/fad, i.e. the trend for every university to talk about learning analytics as covering everything to do with data, but for researchers to limit it to just data used for learning and teaching. Some research examining the use of learning analytics over this time might reveal this.

First reference to analytics in education is a 1995 student retention and performance experiment (Sanjeev and Zytkow, 1995). The 2007 explosion in interest stems from EDUCAUSE.

Anticipated trajectory for analytics in the UK

  1. Some early implementation.
  2. Poised on the verge of rapid growth in late 2012.
  3. Mainstream adoption within two to five years.
  4. Use of LA will provide a differentiator in student learning services.
  5. Catchup by change-averse institutions.

Definitions

Apparently [van Barneveld et al 2012] is entirely on terminology.

Lists a wide variety of definitions of learning analytics. Some useful contrasts. I like the Diaz and Brown (2012) definition that is explained as 3 goals, the last of which is

Enables interventions and decision making about learning by instructors and students.

This strikes me as the most important. Ferguson’s (2012) segmentation approach based on the question the approach is trying to answer is somewhat similar.

Brown (2011) identifies the following component activities

  • Data collection.
  • Analysis.
  • Student learning
  • Audience.
  • Interventions.

Illustrative examples

Covers four examples

  • Purdue University’s course signals project.
    Over three years: 10% increase in A and B grades, 6.41% decrease in Ds, Fs and withdrawals.
  • Desire2Learn analytics.
    One LMS vendor's approach to something like Course Signals. However, the point is made here that a single model like that in Course Signals doesn't capture the full variety within an institution. So Desire2Learn use an ensemble technique with multiple predictive models. Attention is paid to visualisation and providing actionable information.
  • An enrolment prediction system (this is recruitment again).
    Baylor University’s use of SAS Enterprise Miner for “predictive modelling applied to admissions, retention and fundraising”.
  • Recommendation engine for library books.
    OU in the UK’s “Recommendations Improve the Search Experience”. A database of library users and the courses they are doing is used to provide recommendations using the library search facilities.

Uses

Examines various taxonomies/frameworks for understanding how analytics could be used. Most move beyond teaching and learning. Reports on some results of EDUCAUSE survey. Then moves onto look at some specific examples

Sensemaking

i.e. learning analytics as a way to understand connections, anticipate trajectories and act effectively. Draws on Klein et al (2006) and Siemens (2011). But doesn't really give examples.

Identifying students at risk

Some approaches (not sure this is a useful distinction).

  • Causal (i.e. correlation exists) models – that identify links between certain data attributes and student performance.
  • Predictive model – using SNA to measure connectedness.

Driving pedagogic change

Multiplicity of uncontrolled variables makes it unlikely that analytics can decide which is the best pedagogy. However, suggested that it can drive pedagogic change:

  • Decide the pedagogy you want to use.
  • Consider how learning analytics can support that pedagogy.
    Given that most e-learning systems are not designed with any particular pedagogy in mind, I wonder how possible this is.

Illustrates this with a table from Greller and Drachsler (2012). Must follow up on this.

Use of activity data

Mostly focuses on resource management. ???Learning and teaching???

Adoption

Describes various success factors and essentially boils down to committed leaders, skilled staff and flexible technology.

Investment

Quotes the RISE Project (£68,000) and surveys from the States (USD $400,000 p.a.)

Adoption process

A cycle from Siemens?

  • Collection of data guided by purpose.
  • Storage of data
  • Cleaning of data.
  • Integration of data into coherent data sets.
  • Analysis.
  • Reporting and visualisation.
  • Actions enabled.

Presents a general model for institutional analytics with linkages to institutional strategy.

However, this particular point was left out until another section

King et al [2012] describe how it was recognised that departmental acceptance of the early warning system would depend on departments being able to choose performance indicators suited to their needs. Indicators can be chosen by departmental staff; data choices are kept simple, where for each selectable indicator, staff members can see a title, a description and a settable severity level.

Analytics maturity

Reports on Davenport and Harris' (2007) 5 stages of organisational progress to full adoption of analytics, and Davenport et al's (2010) five assets and capabilities for analytics.

I’m getting a little tired of all these neat frameworks and stage models.

Though I do like this sentiment

Again, this is very much the position advanced in this paper, that localised analytics advances awareness and builds experience and readiness to move on to greater levels of analytic maturity.

Human, social and managerial aspects of adoption

Returns to Davenport again. Mentions the need for people with analytics skills at numerous levels within the organisation… including “there may also be a need to foster skills within evidence-based pedagogy and the use of analytics to design courses, student interventions and other aspects of learning.”

Build or buy

This old furphy. Does anyone think it’s one or the other these days? You will need to do both.

Includes a table from Campbell and Oblinger (2007) identifying potential data sources. Almost all of which are around a course. Few are actually within it.

Goes on to talk about various commercial systems.

Risks

Develops a list of “several overlapping categories”

  • Risks that inappropriate measurements are revealed through analytics.
  • Analytics specific technical risks and more general technical risks (e.g. system failure).
  • Social and managerial risks (e.g. they don’t support it, or they privilege it over other decision making).
  • Human resources risks, including not having enough a) statistical and analytic capacity, b) implementation resource, c) understanding of the role and applicability of analytics.
  • Risks of students/staff not being able to interpret analytics.
  • Risk of ineffectual interventions.
  • Legal risks (e.g. data protection and FOI)
  • Risk of big brother approach to education
  • Delayed introduction leading to missed opportunities.

References

Harmelen, M. Van, & Workman, D. (2012). Analytics for Learning and Teaching (Vol. 1, pp. 1–40). Bolton.

BIM 2.0 – cleaning up part 3

The third in a series of posts documenting necessary cleaning up of issues with BIM. Getting closer and closer to release of 2.0b

Issues being cleaned up here include

  • #56 – “add an activity” help text
  • #39 – some more consideration of help text for BIM 2.
  • #58 – getting a “do you want to leave this page” message on allocation.
  • #50 – checking the BIM code against the Moodle coding style.

Add an activity help text

In Moodle 2.4 (at least), when you select an activity to add, the chooser will display a collection of help text for the plugin you've just chosen. Need to put some text in for BIM. Question: where is this specified? Trawling the Moodle docs for this might be difficult. Will start with the code of some existing modules.

Mmm, a quick find and grep didn't reveal what I thought would be there. At least not with assignment; a search on forum reveals the modulename_help help string. Provided one of these for BIM; two problems

  1. The help text isn’t appearing.
    Viewing the source of the course page, it appears that the help text is added into this page. It’s not created/called after the add an activity link is clicked. Seems it is being done in modchooser. course/renderer::course_modchooser_module_types is the function that seems to extract it.

    This method is using a set of objects for each module. Those with a help text have a help field. Where is this being set? It’s passed into the original function. It appears to be extracted by course/lib/get_module_metadata. Which is in turn using get_string_manager to get an object that has a method string_exists which is used to see if modulename_help exists. And I can confirm it’s not working for bim.

    So obviously restarting the apache daemon wasn't sufficient; purging all caches in Moodle got it working. (A rough sketch of the lang file entry follows this list.)

  2. Where is the “more help” link coming from.
    The searching above suggests it is looking for an entry in the lang file for modulename_link. But this is only for modules with entries in the Moodle docs, i.e. not bim. What do other contrib modules do? For now will ignore this. The guidelines for Moodle docs offer some suggestions for contributed code.
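
For the record, a minimal sketch of the lang file entry, assuming the standard Moodle 2.x layout of mod/bim/lang/en/bim.php. The help wording here is illustrative only, not BIM's actual text:

[sourcecode lang="php"]
<?php
// mod/bim/lang/en/bim.php (assumed location).
$string['modulename'] = 'BIM';
// modulename_help is what the activity chooser displays.
$string['modulename_help'] = 'BIM mirrors and manages student blog posts, allowing posts to be allocated to questions and marked.';
// modulename_link would generate a "More help" link into the official
// Moodle docs - left out since bim has no page there (see above).
[/sourcecode]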

BIM help text

Will move this to a future improvement. Will need to look at setting up a public website (perhaps in Moodle docs) once I have BIM up and running.

Do you want to leave

When a post is allocated to a question, selecting the question generates a form submission. This is now resulting in the browser generating a “do you want to leave this page” message.

  1. Is this just chrome?
    No, Firefox does it as well.
  2. How is BIM actually doing “form submission” in this case?
    The moodle_form object for allocating posts creates a SELECT element with an onchange="this.form.submit()" attribute.

    It is done this way to avoid the complexity entailed with allowing a marker to allocate different questions at the one time. e.g. Javascript would have to prevent them from allocating more than one post to the one question (of course, this does raise the question of why BIM doesn't support allocating more than one post to a question; this would be more flexible and potentially useful).

  3. Is this a known problem/feature of Moodle 2.x?
    It appears this may be as a result of this issue, i.e. it's something they've explicitly added into Moodle 2.3 to prevent users from navigating away from pages with forms that have changed. Of course, the trouble is that in this case the code is actually doing a submit. Changes will be processed.
  4. Can this be done without generating the message?
    The text of the message is defined in the lang/en/moodle.php file with the string changesmadereallygoaway. This is used in a number of files, including lib/formslib.php, which has the following check in the startForm method:

    [sourcecode lang="php"]
    if ($form->is_form_change_checker_enabled()) {
        $PAGE->requires->yui_module('moodle-core-formchangechecker',
            'M.core_formchangechecker.init',
    [/sourcecode]

    Knowing some of the right terms to search for leads to this and a method to turn it off. And that does it. (A sketch of the turn-it-off method appears after this list.)

  5. Is there a better way to achieve the “form submission” on allocate?
    A suggestion from this tracker item suggests

    pure js alternatives (nested elements… actions at distance…).

    Appears this old hacker may need to update some knowledge and modify some code.
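
A minimal sketch of the turn-it-off approach from point 4, assuming the allocation form is a standard moodleform subclass. The class, field and string names here are hypothetical; disable_form_change_checker() is the moodleform method referred to above:

[sourcecode lang="php"]
<?php
require_once($CFG->libdir . '/formslib.php');

// Hypothetical allocation form - illustrates the fix, not BIM's actual code.
class bim_allocate_form extends moodleform {
    public function definition() {
        $mform = $this->_form;
        // The SELECT that submits the form as soon as a question is chosen.
        $mform->addElement('select', 'question', get_string('question', 'bim'),
            $this->_customdata['questions'],
            array('onchange' => 'this.form.submit()'));
        // Turn off the Moodle 2.3+ form change checker so the onchange
        // submit doesn't trigger the "do you want to leave" warning.
        $mform->disable_form_change_checker();
    }
}
[/sourcecode]

Disabling the checker for this one form seems safer than fighting it, given the onchange handler really does submit the changes.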

The Moodle coding style

Time to check the BIM code against the accepted Moodle code styles.

Automated code checkers

Code-checker is an install into Moodle that generates a web page with errors and warnings. Running it over the bim code finds quite a few, including:

  • use of tab characters;
  • lines with over 180 chars;
  • apparent requirement to start all files with appropriate licence/comments.
  • whitespace at the end of lines;
    A bit of RE magic "1,$s/^\s*$//g" and/or "1,$s/\s*$//"
    Would appear to be an artifact of copy and paste.
  • Ending inline comments with punctuation!!
  • Indentation
    in vim, go to start of file and do =G
  • spaces after and before the start/end brackets of a foreach.
  • NULL TRUE FALSE should be lower case.

Now error free.

And now onto moodlecheck which checks the documentation – which should reveal numerous errors/oversights in BIM.

Quite a few. Out-dated usage removed.

BIM 2.0 – Cleaning up issues – Part 2

Building on that last bit of issue cleanup, this aims to reduce the list of open BIM issues a bit more. The focus in this part will be:

  • #53 issue with question management message.
  • #35 add support for grademax in configure.
  • #40 complete a check on gradebook integration.
  • #37 figure out the missing status on manage marking.
  • #38 relook at how the help icons are produced (bring it into the modern Moodle age).
  • #57 hanging on use of print_table.

This progressed a little further afield than I had expected, but some progress was made.

Manage question message

The manage questions page displays a brief “summary” of what it does. Trouble is, when a change is made this message starts the new page and is followed by a “this is the change that happened” message. After a short time this page disappears, replaced by the traditional page. The summary message shouldn’t be displayed on the change. Remove it.

Done. Just moving a bit of code to the right place.

Add support for grade max

BIM’s current gradebook integration (inherited from BIM 1.0) is primitive. It simply adds up the marks recorded for each question and sticks the total in the gradebook. Moodle activities can offer significantly more flexible support for grading, e.g. the forum plugin supports:

  • the mean of all ratings;
  • the count of ratings;
  • the highest rating;
  • the minimum rating;
  • the sum of all ratings.

For some of these methods the “maximum grade” plays a role, e.g. for the sum of all ratings the total cannot exceed the maximum grade.

Currently BIM doesn’t recognise or support the maximum grade. This didn’t cause a problem in Moodle 1.9, but in 2.x it appears to be required. As a first step I need to add support for grade max. More advanced gradebook integration needs to be implemented in the future as part of issues #47 and #3.

As a first step, I looked at some of the other activities to find out how they handle it.

  • forum
    Uses a drop down list to choose the grade: every number from 1 to 100 as well as “No Grade” and what I believe is an entry for each available scale. Grade type can be NONE, VALUE or SCALE; grademax comes into it when the type is VALUE.
  • Assignment
    The interface uses a similar drop down box for Grade. An addition is “grading method”, which allows direct grading but also more advanced features such as rubrics and marking guides. There’s also a grade category, which was also present with the forum; category seems to refer to a category in the gradebook.

This seems to be the trend:

  1. Add a grade “groupitem” to the configure page for the activity.
  2. Allow a choice of how/if gradebook integration occurs.
  3. Allow a choice of grademax/scale.
  4. Possibly allow support for aggregation type.
  5. And other options – e.g. advanced grading.
  6. Ensure the choices are placed into the database appropriately.

One question I have is what happens when the maximum grade is exceeded? Who does what? Does the activity module code need to decide?

So how do we add these features to the configure page? Forum uses a Moodle-provided function, standard_grading_coursemodule_elements, which adds these elements to the form. This bit of documentation suggests that the grade features shown rely on the configuration in MOD_supports.
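To make that concrete, a minimal sketch of what this might look like for BIM. FEATURE_GRADE_HAS_GRADE and standard_grading_coursemodule_elements() are standard Moodle 2.x API; wiring BIM up this way is my assumption, not what the code currently does.

[sourcecode lang="php"]
// In mod/bim/lib.php: advertise that bim produces a grade, so the
// grading section is shown when configuring the activity.
function bim_supports($feature) {
    switch ($feature) {
        case FEATURE_GRADE_HAS_GRADE:
            return true;
        default:
            return null;
    }
}

// In mod/bim/mod_form.php: have Moodle add the standard grade elements
// (grade/scale drop down, grade category) to the configure form.
class mod_bim_mod_form extends moodleform_mod {
    public function definition() {
        // ... bim-specific form elements ...
        $this->standard_grading_coursemodule_elements();
        $this->standard_coursemodule_elements();
        $this->add_action_buttons();
    }
}
[/sourcecode]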

Tasks to do

  • Can the presence of SCALE be removed from the drop down box?
    Let’s check to see where this SCALE entry is coming from and whether I can add some more. Try the gradebook; ahh, view scales. Oops, an error from line 285 of bim’s lib.php – another record_exists call without an array. Fixed.

    So there is the scale I’ve been seeing in the drop down list. It’s configured as a standard scale and hence is available site wide. So if I turn that off, it should disappear from BIM? No, because it’s still associated with the course. Even if I delete it from the gradebook for a course, it still shows up. Is this the expected behaviour? Even if I remove it at the top level it seems to stick around.

    This is starting to suggest a need to either support the use of scales (which I really can’t see working without some major work, work I do need to do later) or remove them in some way.

    Removing doesn’t seem to be an option. Leave for now.

  • How to update the Grade form elements with the existing values (e.g. if the grade was set to 90, have 90 show up when re-configuring the activity)?
    It has to be saved in the bim database table, which means modifying the table using XMLDB and playing with upgrade.php. At least that’s what I did. It’s working now. (A sketch of the upgrade step appears after this list.)
  • Need to remove/modify the old BIM methods for turning on grading. – DONE
  • Need to update the display of BIM (e.g. the coordinators view) to report the current status of grading.
    Done as part of updating the main coordinator page.
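For the record, a minimal sketch of the upgrade.php step that adds such a field. The XMLDB calls are the standard Moodle 2.x API; the field name, default and version number are hypothetical.

[sourcecode lang="php"]
// In mod/bim/db/upgrade.php
function xmldb_bim_upgrade($oldversion) {
    global $DB;
    $dbman = $DB->get_manager();

    if ($oldversion < 2013012500) {   // hypothetical version number
        // Add a grade (grademax) column to the bim table.
        $table = new xmldb_table('bim');
        $field = new xmldb_field('grade', XMLDB_TYPE_INTEGER, '10', null,
                                 XMLDB_NOTNULL, null, '100');
        if (!$dbman->field_exists($table, $field)) {
            $dbman->add_field($table, $field);
        }
        upgrade_mod_savepoint(true, 2013012500, 'bim');
    }
    return true;
}
[/sourcecode]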

Updating the help

The current approach to generating help icons is old school. The following type of thing
[sourcecode lang="php"]
$help = helpbutton('config_student_reg', 'bim_name', 'bim', true, false, '', true);
[/sourcecode]

needs to be replaced with
[sourcecode lang="php"]
$title_cell = $OUTPUT->help_icon('config_student_reg', 'bim');
[/sourcecode]
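Worth noting: the Moodle 2.x help_icon() expects a matching pair of strings in the module’s language file – the identifier itself plus an _help suffix. A sketch assuming the identifier above (the string text is made up):

[sourcecode lang="php"]
// In mod/bim/lang/en/bim.php
$string['config_student_reg'] = 'Student registration';   // hypothetical text
$string['config_student_reg_help'] = 'Whether students can register their own feed for this activity.';
[/sourcecode]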

Need to check all the screens for all the other users:

  • Coordinator
    • Configure – DONE.
    • Manage questions. – DONE
    • Allocate markers. – nothing to do
    • Manage marking – apparently DONE
    • Find student – DONE
    • Your students – DONE
  • Marker
  • Student
    This is quite strange. The student page is mostly generated by the same code that is used in parts for the staff. However, while the help popups work for a staff member, they don’t work for a student. Changing to a different theme doesn’t affect it. Will leave that as is for now.

Figure out the missing status on manage marking

BIM places a feed item (e.g. a blog post) from a student into a number of different states, including: submitted, marked, suspended, released. The manage marking page gives the course coordinator an overview of the number of feed items in each state for each marker. One of the states it displays is Missing. When I was doing some development, I thought I’d identified a bug with the missing status. Need to check this out and be sure it’s working.

All done. It is working. No problem.

Do a complete check of gradebook integration

  • View BIM as a student
  • Release a post
  • View BIM as a student
    Have noticed that this page shows a total, i.e. how much the activity will be marked out of. This is currently calculated by adding up the maximum values for each of the questions, i.e. not using grademax. This disconnect needs to be fixed up. Will create an issue.
  • View the gradebook as a student.
    It shows the range, but no value or comments (another thing that will need to be fixed). Do I have to release something in the gradebook as the teacher? A mark is showing up (9.9), which is interesting given the student got a mark of 5. More things to be explored here.

    This will have to do with the aggregation method; for the course this is set to “Simple weighted mean of grades”.

In reading about the gradebook, some miscellaneous thoughts:

  • A BIM activity could be a grade category, with each post being a grade item within that category.
    This appears to allow the use of the gradebook to aggregate the results. It would lead to the removal of showing a progress result from the student view. May have implications for the marking of posts.

When will we no longer teach ICTs to pre-service teachers?

Earlier today I tweeted the following

https://twitter.com/djplaner/status/286266698397540352

It resonated with a few people so I thought I’d share the reference and ramble a bit about the local implications.

The origin

The broader context of the quote from Barton and Haydn (2006, p. 258) is

Kenneth Baker (1988) saw the development of a technologically enabled teaching force as straightforward, explaining to a conference of Education Officers in 1988 that from henceforth, such skills would be built into initial training courses: ‘the problem of getting teachers aware of IT will soon be phased out as all new entrants will soon have IT expertise’

It appears that Kenneth Baker is in fact Baron Baker of Dorking, a British Conservative MP who was the Secretary of State for Education and Science from 1986 to 1989. Obviously Baron Baker’s prognostications were a little wayward.

Especially given the Australian Government’s funding last year of the Teaching Teachers for the Future project with the aim of

enabling all pre-service teachers at early, middle and senior levels to become proficient in the use of ICT in education

. Not to mention the fact that I’m currently largely employed to teach a course that is meant to help achieve the same end.

The difference between IT and ICT in education

Though, without knowing the broader context of Baron Baker’s talk, it’s easy to draw too broad a conclusion and make a false comparison. @BenjaminHarwood responded to my original tweet with

https://twitter.com/BenjaminHarwood/status/286292977079447552

I wonder if Baron Baker was using IT to mean the ability to turn a computer on and other fundamental skills. 1988 saw Windows 2.10 released in May, so most people were still using MS-DOS. The TTF project is focused on the broader “ICT in education”, i.e. @BenjaminHarwood’s “pedagogical integration expertise”.

Will it ever go away?

I have to admit to making a claim somewhat similar to Baron Baker’s over the last year, generally wondering how much longer I’ll be employed to teach “ICT and Pedagogy” as a stand-alone course. The thought is that we don’t teach “Video and pedagogy” or “Written word and pedagogy” courses, so why are ICTs any different? Won’t the need for a separate course disappear once all the other courses are doing a wonderful job of integrating innovative examples of ICTs and pedagogy?

@palbion had a suggestion, which I think is one of the factors

https://twitter.com/palbion/status/286279318781444096

The on-going change of ICTs does appear to have created some illusion of having to continually re-learn, even though perhaps some of the fundamentals have stayed the same. But perhaps a large part of that is simply because much use of ICTs in pedagogy has never gotten beyond the substitution/augmentation level of the SAMR model.

SAMR model image from Jenny Luca’s TPACK and SAMR blog post

While there are many reasons for this lack of movement “up the scale”, much of it would seem to come back to the formal education system and the nature of the organisations that underpin it: a nature that does not really enable or encourage transformation of operation, especially not transformation driven by the teaching staff. This inertia is playing its part within both school systems and the institutions of higher education responsible for teaching the next generation of teachers.

@s_palm pointed to the broader “digital native” myth

https://twitter.com/s_palm/status/286271254342811649

So maybe the need will never go away, or perhaps at least not until I reach retirement age or decide to move on to greener pastures.

References

Barton, R., & Haydn, T. (2006). Trainee teachers’ views on what helps them to use information and communication technology effectively in their subject teaching. Journal of Computer Assisted Learning, 22(4), 257–272. doi:10.1111/j.1365-2729.2006.00175.x

BIM 2.0 – cleaning up issues – Part 1

While BIM 2.0 is largely working, there remains a list of 30 open issues to be addressed. 19 of these are “future” issues, i.e. changes that would be really nice but aren’t necessary for the immediate release of BIM 2.0. The following is the first part of working on the 11 that are of immediate interest.

Of initial interest will be:

  • Ensuring deleting a BIM activity removes all data from the bim tables issue #55
  • Error on releasing a marked post – issue #54

Deleting a BIM activity

The mod/bim/lib.php file has a function bim_delete_instance that is meant to do this. The error is a little obvious:

[sourcecode lang="php"]
if ( ! $DB->delete_records( 'bim_group_allocation', array('id' => $bim->id))) {
    $result = false;
}
[/sourcecode]

It shouldn’t be looking for the field id; the field that holds the BIM id in the related tables is bim. This is actually a problem that appears to extend back to BIM 1.
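For the record, the corrected call ends up looking something like this (and similarly for each of the related bim tables):

[sourcecode lang="php"]
// Delete on the bim foreign key, not the record id.
if ( ! $DB->delete_records('bim_group_allocation', array('bim' => $bim->id))) {
    $result = false;
}
[/sourcecode]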

Changed. Tested. Fixed.

Error releasing a post

This is a problem when gradebook integration is turned on (which is why it hasn’t shown up previously). The SQL statement used to extract marks to update the gradebook doesn’t work with Moodle 2.x. Update this and it should work.
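A minimal sketch of the Moodle 2.x shape of such a query – the bim_marking table is real, but the field names and status value here are my guesses:

[sourcecode lang="php"]
// Moodle 2.x: the $DB object, {table} placeholders and named parameters
// replace the 1.9-era get_records_sql() global function.
$sql = "SELECT userid, SUM(mark) AS rawgrade
          FROM {bim_marking}
         WHERE bim = :bimid AND status = :status
      GROUP BY userid";
$grades = $DB->get_records_sql($sql, array('bimid' => $bim->id, 'status' => 'Released'));
[/sourcecode]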

That seems to have worked. No more problem. The status has been updated. Time to check if the gradebook has been updated appropriately. Yep. Gradebook updated correctly. Can close this one off.

Adding restore to BIM

The following reports on the process of completing the backup/restore functionality for BIM 2.0. Backup is currently working; time to add the ability to restore those backups. It draws on the process described in this documentation – “Restore 2.0 for developers” – which doesn’t appear anywhere near as detailed/sequential as the backup 2.0 documentation.

This and some simple mistakes on my part meant the following was spread over a couple of days, rather than the hours I’d hoped. Restore is currently working, at least according to the tests I’ve done.

BIM 2.0 might officially be considered alpha code ready to play with. I still need to work through the issue list on GitHub before it’s truly ready for action.

Back to work tomorrow so BIM development will slow down a bit, though it does remain a priority. Most of the family is heading off on Friday/Saturday, so I might have a bit more time then.

The process

Two main files: restore_bim_stepslib.php and restore_bim_activity_task.class.php. Skeleton code is provided. Create those and see if I can figure out what modifications are required.

Confirmed these files are meant to go in the backup directory.

stepslib.php changes

  • replace choice with bim.
  • define_structure – replace choice components (option and answer) with bim components (allocation, question, feed, marking)
    [code lang="php"]
    $paths[] = new restore_path_element('bim', '/activity/bim');
    if ($userinfo) {
        $paths[] = new restore_path_element('bim_allocation',
            '/activity/bim/allocations/allocation');
    }
    $paths[] = new restore_path_element('bim_question', '/activity/bim/questions/question');
    if ($userinfo) {
        $paths[] = new restore_path_element('bim_student_feed', '/activity/bim/student_feeds/student_feed');
        $paths[] = new restore_path_element('bim_marking',
            '/activity/bim/markings/marking');
    }
    [/code]
  • replace the methods process_choice, process_choice_option, process_choice_answer with bim equivalents
    These are responsible for inserting the data extracted from the XML back into the database. There are three parts of this that need to be explored more:

    1. apply_activity_instance in process_choice would appear to be used to convert the choiceid in the data to the id of the new record inserted into the database. Apparently only called when adding the initial information about the activity.
    2. set_mapping appears to be used to convert the activity id for the subsequent elements inserted into the database.
    3. add_related_files – part of a broader area that needs exploration.

    Will need to put in code for the following bim methods; there would appear to be a link between the method names and the first parameter in the restore_path_element calls above. (A sketch of one such method follows this list.)

    • process_bim – done
    • process_bim_allocation – there’s also a need to “get_mappingid” for user and group information
    • process_bim_question – done
    • process_bim_student_feed – done
    • process_bim_marking – done
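To make the pattern concrete, a minimal sketch of process_bim_question, assuming the bim_questions table and the element names above (the full field list is elided):

[sourcecode lang="php"]
protected function process_bim_question($data) {
    global $DB;

    $data = (object)$data;
    $oldid = $data->id;
    // Point the question at the newly restored bim instance.
    $data->bim = $this->get_new_parentid('bim');

    $newitemid = $DB->insert_record('bim_questions', $data);
    // Save the old->new id mapping so child elements (e.g. bim_marking)
    // can look it up later with get_mappingid('bim_question', ...).
    $this->set_mapping('bim_question', $oldid, $newitemid);
}
[/sourcecode]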

restore_bim_activity_task.class.php changes

  • define_decode_contents – identify the elements that need to have link decoding run over them. Will stick with just the intro for BIM.
  • define_restore_log_rules – not sure.
  • define_restore_log_rules_for_course – the purpose of this is not really clear. Apparently choice specific? Leave it out for now.

Running it and failing

The code excerpt on the restore 2.0 page doesn’t seem to work. Tried a backup and restore via Moodle itself. The backup apparently worked and the restore successfully created the new course. However, the BIM activity was not there.

So there is a problem. The limited nature of the documentation makes this an “interesting” problem to solve.

Check the backup file to see if the bim information is there. Yes. It definitely appears that backup is working.

Time to debug the restore and try to identify where the failure is occurring. Would be nice if the command-line script was working; having to do this via the Moodle web interface. Will a Google search reveal anything?

The restore picks up the presence of the bim activity in the confirm stage, including the fact that it has userinfo saved. But on the settings page (i.e. as part of the preparation for restore) it doesn’t show the bim2 activity. This is done by the moodle/backup/restore.php script, which is implemented using a fairly complex OO structure (compared to 1.9); restore_ui.class.php does much of the work, drawing on other files.

The bim activity is showing up in some aspects at the start of the process, but there is something preventing it from being included when the sections are displayed.

The “Confirm” stage of the restore is showing the BIM activity as part of Section 0. This appears to be extracting information straight from the MBZ (the Moodle backup file) using “backup_general_helper::get_backup_information” and then a renderer to show it.

Idiot!

After much wasted time I have figured out the silly mistake I made and can perhaps start making some progress.

Debugging the completed restore

So it now restores and the BIM activity is present. Some problems though:

  • There are no questions brought across.
  • Nor marker allocations
  • Nor student feeds; all students are unregistered.

None of the data for the newly restored bim activity is getting restored into the database. There is a new entry in the main bim table, but beyond that nothing.

Essentially only the main configuration stuff has been done correctly. Will need to check:

  1. Is this information in the backup? – in short yes, but
    I’m not sure about the nesting of some of the elements in the backup. It looks wrong:
    [sourcecode lang="xml"]
    <feeds>
    </feeds>
    <feed id="1">
    <bim>1</bim>
    <userid>3</userid>
    <numentries>10</numentries>
    <lastpost>1355868586</lastpost>
    <blogurl>http://davidtjones.wordpress.com</blogurl>
    <feedurl>https://djon.es/blog/feed/</feedurl>
    </feed>
    [/sourcecode]
    Shouldn’t the feed elements be nested within the feeds element? Need to check this. Create some data in a forum and do a backup of it; as expected, its child elements are nested.

    The building of the tree in backup_bim_stepslib.php is incorrect. Fix that and do another test. Bingo. Now try a restore.

  2. Are there errors in my restore code causing the absence?
    Mmm, now there is a missing bim_grade_item_delete method in bim/lib.php. Mm, that was commented out; removed it, and now onto the next error.

    ‘itemid’ cannot be NULL on an insert into mdl_backup_ids_temp. Again, this comes down to either the backup not being done correctly or some error in the restore. It’s a problem with a bim_question; the backup looks fine.

    The problem is in restore. The individual “process_” methods for each component are not complete: they were not setting up the variable $oldid. Will also need to double check the mapping of each field in each of these. But first, does this fix the problem? Nope.

    Need to figure out where this is/isn’t happening. Ahh, add a bit of debugging code, run it again and it appears to complete without the error. A good sign, I hope.

    Check the database. Nope, nothing being added to the student_feed, questions or allocations tables. Clean up the database and try one more restore prior to re-checking the restore code.

    Okay, the problem was with how the bim id was being used in each of the child element methods, i.e. $data->bimid should have been $data->bim. With that fixed, much of it is now working. Data is being placed into each of the bim tables from the backup.

    However, the problem now is that some of the necessary connections between tables are not being maintained consistently. For example:

    • The question field in bim_marking is pointing to the question ids from the backed up bim activity and not the newly restored bim activity.
      So I need to figure out how to get the new id for a question after it has been inserted into the database and update the field in bim_marking on that basis.

      The process_bim_question method does call set_mapping with bim_question and the old and new ids. This would seem to be the mechanism for saving this information. The question now is how to retrieve and use it.

      The discussion forum restore code reveals the expected get_mappingid. And that works. (A sketch appears after this list.)

    • bim_student_feeds is not being updated.
      Due to a problem in the specification of the XML… fixed, and that’s working.
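For the record, a minimal sketch of the retrieval side in process_bim_marking – the question field is the one discussed above; the rest of the field handling is elided:

[sourcecode lang="php"]
protected function process_bim_marking($data) {
    global $DB;

    $data = (object)$data;
    $data->bim = $this->get_new_parentid('bim');
    // Translate the backed-up question id into the id of the newly
    // restored question, using the mapping saved by process_bim_question.
    $data->question = $this->get_mappingid('bim_question', $data->question);

    $DB->insert_record('bim_marking', $data);
}
[/sourcecode]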

Final testing

Due to the miscellaneous problems with backup and restore the database is not exactly clean. I need to clean it up and then retest the backup and restore process. This will also provide an opportunity to do another set of tests on the other components of the bim activity.

  1. Delete all the restored courses – DONE
  2. Delete the bim activity on the good course – DONE
  3. Check the bim database tables.
    Check: data left in bim_student_feeds, bim_marking, bim_questions and bim_group_allocation.
    This may simply be left overs from prior testing gone wrong. Will need to retest this below.
  4. Create a new bim activity in the good course. – DONE
  5. Allocate some marking. – DONE
  6. Create some questions. – DONE
  7. View the bim activity via a student account – DONE
  8. Register a blog for that student via the coordinator interface – DONE
  9. View the bim activity again as that student – DONE
  10. Register a blog as another student – DONE
  11. Allocate a question or two – DONE
  12. Mark two questions – DONE
  13. Release a question – DONE
    Generates an error to be checked. Added as an issue on GitHub.
  14. Do a backup – DONE
  15. Do a restore – DONE
  16. Check the bim database tables – DONE
    Restored bim has id 15. All three questions are there. The two feeds are there. Group allocation and marking also seem good.
  17. Compare the restored bim activity with the original. – DONE
  18. Repeat the first few steps to double check deletion from database. – DONE
    Check: Deleting a course does not remove all of the data in the other bim tables (question, group_allocation, feeds, marking)

Will add a few things to the GitHub issues list.
