Assembling the heterogeneous elements for (digital) learning

Month: June 2015

And the little one said, "roll over, roll over"

Roll over

It’s that time of year again. The time between semesters when one offering of the main course I teach is still drawing to a close and when I have to think about preparing for the next offering. It’s time – for better or worse – to roll over. The following is a description and some thoughts on how and what I do during roll over.

As it happens there is some discussion going on in the Moodle community about “roll over”, but at the institutional level. The following is very much focused on the course level.

The process

The high level steps I use are:

  1. Use the Moodle/institutional roll over process.
  2. Update my “macros”.
  3. Check for broken links.
  4. Other manual changes

Overall, the tools provided by the institution to roll over a course from one semester to the next do a reasonable job. However, if a course site has a fair amount of structure and content, there is a hole. The tools don’t yet know enough about the institutional context (dates, weeks), nor is that information provided in a way that helps teaching staff keep course content in step when it changes. The problem gets worse the more the content depends on the unique course context (e.g. names of weekly topics).

Moodle/institutional roll over

I log an IT request (using a god awful interface) and in a little while some magic happens and the content from Semester 1 (S1) is copied into the Semester 2 (S2) course. The image below shows the newly rolled over S2 course site. As it shows – for better or worse – the course has quite a collection of resources that make up each weekly learning path (originally called, and based on the idea of, a weekly ramble). The interconnections between the elements are quite convoluted, e.g. the “Introduce yourself” book links to the “Share your introductions” discussion forum and vice versa.

The Moodle/institutional roll over process does a really good job of updating the interconnections between elements. This is good.

But there’s a problem. Can you see the problem in the following image?

[Image: the newly rolled over S2 2015 course site]

Semester 2 runs from late July through to October/November. The dates in the above image are different – they are for Semester 1. The Moodle roll over process doesn’t know anything about USQ dates. This causes problems because dates are littered throughout the course site: the assessment overview page, each individual assignment page, the study schedule, and any of the learning path pages that mention dates.

The suggestion is that I need to manually search (remember there’s typically no search engine on a Moodle course site) and replace all occurrences of dates and any other information that may change from offering to offering. Information that might include:

  • the details of the teaching staff;
    e.g. the assessment information contains directions to “email the course examiner” with extension requests. Who the course examiner is may change from offering to offering.
  • names of weeks/topics; and
    In the above image, week 1 goes under the title “ICT, PLNs and You”. I may wish to change that topic name in Semester 2 (e.g. add in the missing comma “ICT, PLNs, and You”) and the course material uses the topic names throughout.
  • the weeks of the semester for each topic.
    For some historical reason, when it numbers the weeks of semester my institution includes weeks that are holidays in the count. Since holidays are never at the same time each semester, the week of semester changes for some topics. In Semester 1, the “Finishing your UoW” topic is in Week 8. In Semester 2, it will be Week 6.

“Macros” – my solution to this problem

Last semester I implemented a “macro” system. The initial implementation had it working just for the Semester 1 site. Time to update it to work with both sites. The JavaScript code will now need to (a rough sketch follows the lists below):

  1. Identify which course site (S1 or S2) the user is viewing.
  2. Replace all the “macro” variables with values appropriate to the course site.

What I need to do now is enter appropriate information for the new offering, including:

  • links to books within the course site;

    Assessment, study schedule etc.

  • Assignment due dates – course profile.
  • Professional experience dates – PE calendar.
  • Weeks and dates – academic calendar.
    This also involves moving the holidays and week numbers around.
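
For what it’s worth, the following is a rough sketch of the sort of script involved, not the production version. The course ids, the {{NAME}} placeholder syntax and the macro values shown are illustrative assumptions; the real script carries the full set of dates, weeks and links listed above.

```javascript
// A rough sketch of the "macro" idea (hypothetical course ids and placeholder syntax).
var OFFERINGS = {
  "1234": {                                 // hypothetical Moodle course id for S1
    WEEK_1_TOPIC: "ICT, PLNs and You",
    FINISHING_UOW_WEEK: "Week 8"
  },
  "5678": {                                 // hypothetical course id for S2
    WEEK_1_TOPIC: "ICT, PLNs, and You",
    FINISHING_UOW_WEEK: "Week 6"
  }
};

// 1. Identify which course site the user is viewing, e.g. from the id= parameter
//    in the Moodle course URL.
function currentCourseId() {
  var match = window.location.href.match(/id=(\d+)/);
  return match ? match[1] : null;
}

// 2. Replace every {{NAME}} placeholder in the page with the value for this offering.
//    Rewriting body.innerHTML is crude, but enough to show the idea.
function expandMacros() {
  var values = OFFERINGS[currentCourseId()];
  if (!values) {
    return;                                 // not one of the known course sites
  }
  document.body.innerHTML = document.body.innerHTML.replace(
    /\{\{(\w+)\}\}/g,
    function (whole, name) {
      return values.hasOwnProperty(name) ? values[name] : whole;
    }
  );
}

expandMacros();
```

The point of the approach is that the offering-specific information lives in one place, so the next roll over only requires updating that one dictionary rather than editing every page.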

With the script updated, viewing the S2 course site now reveals the following – dates and weeks updated.

[Image: the S2 2015 course site with dates and weeks updated]

Broken links

The institution has a “check course” tool that performs a range of checks. Problems it picked up in the newly rolled over course included

  • Cross-course references
  • Reference to missing activity

22 problems were identified, all to be fixed manually. Most were due to the tweaking of paths I did last semester which broke links.

Other manual changes

Teaching team details – need to remove the folk who aren’t teaching in S2. The institutional system for “teaching team” does this automatically, but it has some missing functionality, including

  • No space to add an “Advice on specific request” section.
    i.e. when students are looking for how to contact the teaching team, they often want to ask the same question. A section like this allows some additional advice and scaffolding.
  • No space for “personalisation”.
    The institutional version only offers the standard contact details and qualification information that is present in the institutional database. It doesn’t allow a staff member to personalise their part of the course site. e.g. to explain exactly what they do in the course and more importantly take a small step in creating a relationship with the students.

Week order for study schedule and “jump to” – the change in week numbers for specific topics needs to be manually adjusted. That’s done, but a problem remains.

Build the scaffolds

A number of the activities are based around discussion forums. Students have to complete an activity and share the end result in a forum. Many of these include example responses (often only seen after the student posts their response). Most of these come from other students in the offering, but a small number are provided by me. I need to manually copy these over from the last offering.

But it appears that the institutional SSO is playing silly buggers. Will have to complete that task later.

Dashboards suck: learning analytics' broken metaphor

I started playing around with what became learning analytics in 2007 or so. Since then every/any time “learning analytics” is mentioned in a university there’s almost an automatic mention of dashboards. So much so that I was led to tweet:

https://twitter.com/djplaner/status/610769509394182144

I’ve always thought dashboards suck. This morning when preparing the slides for this talk on learning analytics I came across an explanation which I think captures my discomfort around dashboards (I do wonder whether I’d heard it somewhere else previously).

What is a dashboard

In the context of an Australian university discussion about learning analytics the phrase “dashboard” is typically mentioned by the folk from the business intelligence unit. The folk responsible for the organisational data warehouse. It might also get a mention from the web guru who’s keen on Google Analytics. In this context a dashboard is typically a collection of colourful charts, often even doing a good job of representing important information.

So what’s not to like?

The broken metaphor

Obviously “analytics” dashboards are a metaphor referencing the type of dashboard we’re familiar with in cars. The problem is that many (most?) of the learning analytics dashboards are conceptualised and designed like the following dashboard.

The problem is that this conceptualisation of dashboards misses the bigger picture. Rather than being thought of like the above dashboard, learning analytics dashboards need to be thought of like the following dashboard.

Do you see the difference? (and it’s not the ugly, primitive nature of the graphical representation in the second dashboard).

Representation without Affordances and removed from the action

The second dashboard image includes: the accelerator, brake, and clutch pedals; the steering wheel; the indicators; the radio; air conditioning; and all of the other interface elements a driver requires to do something with the information presented in the dashboard. All of the affordances a driver requires to drive a car.

The first dashboard image – like many learning analytics dashboards – provides no affordances for action. The first vision of a dashboard doesn’t actually help you do anything.

What’s worse, the dashboards provided by most data warehouses aren’t even located within the learning environment. You have to enter into another system entirely, find the dashboard, interpret the information presented, translate that into some potential actions, exit the data warehouse, return to the learning environment, translate those potential actions into the affordances of the learning environment.

Picking up on the argument of Don Norman (see quote in image below), the difficulty of this process would seem likely to reduce the chances of any of those potential actions being taken. Especially if we’re talking about (casual) teaching staff working within a large course with limited training, support and tools.

[Image: Don Norman quote on affordances]

Affordances improve learning analytics

Hence, my argument is that the dashboard (Representation) isn’t sufficient. In designing your learning analytics application you need to include the pedals, steering wheel etc (Affordances) if you want to increase the likelihood of that application actually helping improve the quality of learning and teaching. Which tends to suggest that your learning analytics application should be integrated into the learning environment.

Revisiting the IRAC framework and looking for insights

The Moodlemoot’AU 2015 conference is running working groups, one of which is looking at assessment analytics. In essence, trying to think about what can be done in the Moodle LMS code to enhance assessment.

As it happens I’m giving a talk during the Moot titled “Four paths for learning analytics: Moving beyond a management fashion”. The aim of the talk is to provide some insights to help people think about the design and evaluation of learning analytics. The working group seems like a good opportunity to (at some level) “eat my own dogfood” and fits with my current task of developing the presentation.

As part of getting ready for the presentation, I need to revisit the IRAC framework. A bit of work from 2013 that we’ve neglected, but which (I’m surprised and happy to say) I think holds much more promise than I may have thought. The following explains IRAC and what insights might be drawn from it. A subsequent post will hopefully apply this more directly to the task of Moodle assessment analytics.

(Yes, Col and Damien, I have decided once again to drop the P and stick with IRAC).

The IRAC Framework

The IRAC framework was originally developed to “improve the analysis and design of learning analytics tools and interventions” and hopefully be “a tool to aid the mindful implementation of learning analytics” (Jones, Beer & Clark, 2013). The development of the framework drew upon “bodies of literature including Electronic Performance Support Systems (EPSS) (Gery, 1991), the design of cognitive artefacts (Norman, 1993), and Decision Support Systems (Arnott & Pervan, 2005)” (Jones et al, 2013).

This was largely driven by our observation that most of the learning analytics stuff wasn’t that much focused on whether or not it was actually adopted and used, especially by teachers. The EPSS literature was important because an EPSS is meant to embody a “perspective on designing systems that support learning and/or performing” (Hannafin, McCarthy, Hannafin, & Radtke, 2001, p. 658). EPSS are computer-based systems intended to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

Framework is probably not the right label.

IRAC was conceptualised as four questions to ask yourself about the learning analytics tool you were designing or evaluating. As outlined in Jones et al (2013)

The IRAC framework is intended to be applied with a particular context and a particular task in mind. A nuanced appreciation of context is at the heart of mindful innovation with Information Technology (Swanson & Ramiller, 2004). Olmos & Corrin (2012), amongst others, reinforce the importance for learning analytics to start with “a clear understanding of the questions to be answered” (p. 47) or the task to be achieved.

Once you’ve got your particular context and task in mind, then you can start thinking about these four questions:

  1. Is all the relevant Information and only the relevant information available?
  2. How does the Representation of the information aid the task being undertaken?
  3. What Affordances for interventions based on the information are provided?
  4. How will and who can Change the information, representation and the affordances?

Interestingly, not long after we’d submitted the paper for review, Siemens (2013) came out and that paper included the following Learning Analytics (LA) Model (LAM). The LAM was meant to help move LA from small scale “bottom-up” approaches into a more systemic and institutional approach. The “data team” was given significant emphasis in this.

[Image: Siemens (2013) Learning Analytics Model]

Hopefully you can see how Siemens’ LAM and the IRAC framework, at least on the surface, seem to cover much of the same ground. In case you can’t, the following image makes that connection explicit.

[Image: the IRAC framework mapped onto Siemens’ LAM]

Gathering insights from IRAC and LAM

The abstract for the Moot presentation promises insights, so let’s see what insights you might gain from IRAC. The following is an initial list of potential insights. Insights might be too strong a word; provocations or hypotheses might be better suited.

  1. An over-emphasis on Information.

    When overlaying IRAC onto the LAM the most obvious point for me is the large amount of space in the LAM dedicated to Information. This very large focus on the collection, acquisition, storage, cleaning, integration, and analysis of information is not all that surprising. After all, that is what big data and analytics bring to the table. The people who developed the field of learning analytics came to it with an interest in information and its analysis. It’s important stuff. But it’s not sufficient to achieve the ultimate goal of learning analytics, which is captured in the following broadly used definition (emphasis added)

    Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning, and the environments in which it occurs.

    The point of learning analytics is to find out more about learning and the learning environment and change it for the better. That requires action: action on the part of the learner, the teacher, or perhaps the institution or other actors. There’s a long list of literature that strongly argues that simply providing information to people is not sufficient for action.

  2. Most of the information currently available is of limited value.

    In not a few cases, “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (Bollier & Firestone, 2010, p. 14). There have been questions asked about how much the information that is currently captured by LMSes and other systems can actually “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563). Click streams reveal a lot about when and how people traverse e-learning environments, but not why and with what impacts. Beyond that is the problem raised by observations that most courses do not make particularly heavy or well-designed use of the e-learning environment.

  3. Don’t stop at a dashboard (Representation).

    It appears that most people think that if you’ve generated a report or (perhaps worse) a dashboard you have done your job when it comes to learning analytics. This fails on two counts.

    First, these are bad representations. Reports and many dashboards are often pretty crappy at helping people understand what is going on. Worse, these are typically presented outside of the space where the action happens. This breaks the goal of an information system/EPSS, i.e. to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

    Second, just providing data in a pretty form is not sufficient. You want people to do something with the information. Otherwise, what’s the point? That’s why you have to consider the affordances question.

  4. Change is never considered.

    At the moment, most “learning analytics” projects involve installing a system, be it standalone or part of the LMS etc. Once it’s installed it’s all just a matter of ensuring people are using it. There’s actually no capacity to change the system or the answers to the I, R, or A questions of IRAC that the system provides. This is a problem on so many levels.

    In the original IRAC paper we mentioned: how development through continuous action cycles involving significant user participation was a core of the theory of decision support systems (Arnott & Pervan, 2005), a precursor to learning analytics; Buckingham-Shum’s (2012) observation that most LA is based on data already being captured by systems and that analysis of that data will perpetuate existing dominant approaches to learning; and the problem of gaming once people learn what the system wants. Later we added the task artifact cycle.

    More recently, Macfadyen et al (2014) argue that one of the requirements of learning analytics tools is “an integrated and sustained overall refinement procedure allowing reflection” (p. 12).

  5. The more context sensitive the LA is, the more value it has.

    In talking about the use of the SNAPP tool to visualise connections in discussion forums, Lockyer et al (2013) explain that the “interpretation of visualizations also depends heavily on an understanding of the context in which the data were collected and the goals of the teacher regarding in-class interaction” (p. 1446). The more you know about the learning context, the better the insight you can draw from learning analytics. An observation that brings the reusability paradox into the picture. Most LA – especially those designed into an LMS – have to be designed to have the potential to be reused across all of the types of institutions that use the LMS. This removes the LMS (and its learning analytics) from the specifics of the context, which reduces its pedagogical value.

  6. Think hard about providing and enhancing affordances for intervention

    Underpinning the IRAC work is the work of Don Norman (1993), in particular the quote in the image of him below. If LA is all about optimising learning and the learning environment then the LA application has to make it easy for people to engage in activities designed to bring that goal about. If it’s hard, they won’t do it. Meaning all that wonderfully complex algorithmic magic is wasted.

    Macfadyen et al (2014) identify facilitating the deployment of interventions that lead to change to enhance learning as a requirement of learning analytics. Wise (2014) defines a learning analytics intervention as “the surrounding frame of activity through which analytics tools, data and reports are taken up and used”. This is an area of learning analytics that is relatively unexplored (Wise, 2014), and I’ll close with another quote from Wise (2014), which sums up the whole point of the IRAC framework and identifies what I think is the really challenging problem for LA:

    If learning analytics are to truly make an impact on teaching and learning and fulfill expectations of revolutionizing education, we need to consider and design for ways in which they will impact the larger activity patterns of instructors and students. (Wise, 2014, p. 203)

    (and I really do need to revisit the Wise paper).

[Image: Don Norman quote on affordances]

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute. Retrieved from http://india.emc.com/collateral/analyst-reports/10334-ar-promise-peril-of-big-data.pdf

Buckingham Shum, S. (2012). Learning Analytics. Moscow. Retrieved from http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663). Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=8792

Gery, G. J. (1991). Electronic Performance Support Systems: How and why to remake the workplace through the strategic adoption of technology. Tolland, MA: Gery Performance Press.

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks: Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ. Retrieved from http://www.ascilite2012.org/images/custom/lodge,_jason_-_pigeon_pecks.pdf

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Olmos, M., & Corrin, L. (2012). Learning analytics: a case study of the process of design of visualizations. Journal of Asynchronous Learning Networks, 16(3), 39–49. Retrieved from http://ro.uow.edu.au/medpapers/432/

Reiser, R. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57–67.

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1371–1379. doi:10.1177/0002764213498851

Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553–583.

Wise, A. F. (2014). Designing pedagogical interventions to support student use of learning analytics. In Proceedings of the Fourth International Conference on Learning Analytics And Knowledge – LAK ’14 (pp. 203–211). doi:10.1145/2567574.2567588

Exploring BIM + sentiment analysis – what might it say about student blog posts

The following documents some initial exploration into why, if, and how sentiment analysis might be added to the BIM module for Moodle. BIM is a tool that helps manage and mirror blog posts from individual student blogs. Sentiment analysis is an application of algorithms to identify the sentiment/emotions/polarity of a person/author through their writing and other artefacts. The theory is that sentiment analysis can alert a teacher if a student has written something that is deemed sad, worried, or confused; but also happy, confident etc.

Of course, the promise of analytics-based approaches like this may be oversold. There’s a suggestion that some approaches are wrong 4 out of 10 times. But I’ve seen other suggestions that human beings can be wrong at the same task 3 out of 10 times. So the questions are

  1. Just how hard is it (and what is required) to add some form of sentiment analysis to BIM?
  2. Is there any value in the output?

Some background on sentiment analysis

Sentiment analysis tends to assume a negative/positive orientation, i.e. good/bad, like/dislike (the polarity). There are various methods for performing the analysis/opinion mining, and there are challenges in analysing text (my focus) alone.

Lots of research going on in this sphere.

Of course there are also folk building, and some selling, tools in this space, e.g. Indico is one I’ve heard of recently. Of course, they all have their limitations and sweet spots; Indico’s sentiment analysis is apparently good for

Text sequences ranging from 1-10 sentences with clear polarity (reddit, Facebook, etc.)

That is perhaps starting to fall outside what might be expected of blog posts. But may fit with this collection of data. Worth a try in the time I’ve got left.

Quick test of indico

indico provides a REST based API that includes sentiment analysis. Get an API key and you can throw data at it and it will give you a number between 0 (negative) and 1 (positive).
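
As a rough sketch of what “throwing data at it” looks like, the following assumes an endpoint of the form https://apiv2.indico.io/sentiment that takes api_key and data parameters and returns a results score; the URL and parameter names are assumptions from memory of the documentation, so check indico’s docs for the real details.

```javascript
// Rough sketch of calling a sentiment analysis endpoint (endpoint URL and
// parameter names are assumptions; consult the indico documentation).
const API_KEY = process.env.INDICO_API_KEY;   // your indico API key

async function sentiment(text) {
  const response = await fetch("https://apiv2.indico.io/sentiment", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: API_KEY, data: text })
  });
  const result = await response.json();
  return result.results;   // assumed: a number between 0 (negative) and 1 (positive)
}

// e.g. the sort of quick test shown below
sentiment("tomorrow is my birthday. I am sad").then(score => console.log(score));
```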

You can even try it out manually. Some quick manual tests

  • “happy great day fantastic” generates the result 0.99998833
  • “terrible sad unhappy bad” generates 0.000027934347704855157
  • “tomorrow is my birthday. Am I sad or happy” generates 0.7929542492644698
  • “tomorrow is my birthday. I am sad” generates 0.2327375924840286
  • “tomorrow is my birthday. I am somewhat happy” 0.8837247819167975
  • “tomorrow is my birthday. I am very happy” 0.993121363266806

With that very ill-informed testing, there are at least some glimmers of hope.

Does it work on blog posts? Actually, not that bad. Certainly good enough to play around with some more and as a proof of concept in my constrained circumstances. Of course, indico is by no means the only tool available (e.g. meaningcloud).

But for the purpose of the talk I have to give in a couple of weeks, I should be able to use this to knock up something that works with the more student details script.

Types of e-learning projects and the problem of starvation

The last assignment for the course EDC3100, ICT and Pedagogy was due to be submitted yesterday. Right now the Moodle assignment activity (a version somewhat modified by my institution) is showing that 193 of 318 enrolled students have submitted assignments.

This is a story of the steps I have to take to respond to the story these figures (they’re not as scary as they seem) tell.

It’s also a story about the different types of development projects that are required when it comes to institutional e-learning and how the institutional approach to implementing e-learning means that certain types of these projects are inevitably starved of attention.

[Image: Moodle assignment overview showing 193 of 318 students submitted]

Don’t forget the extensions

193 out of 318 submitted suggests that almost 40% of the students in the course haven’t submitted the final assignment. What this doesn’t show is that a large number of extensions have been granted. Would be nice for that information to appear on the summary shown above. To actually identify the number of extensions that have been granted, I need to

  1. Click on the “View/grade all submissions” link (and wait for a bit).
  2. Select “Download grading worksheet” from a drop down box.
  3. Filter the rows in the worksheet for those rows containing “Extension granted” (sorting won’t work)

This identifies 78 extensions. Suggesting that just under 15% (48) of the students appear to have not submitted on time.

Getting in contact with the non-submits

I like to get in contact with these students to see if there’s any problem. If you want some support for this practice, the #1 principle of the “7 Principles of Good Practice for Undergraduate Education” is

1. Encourages Contact Between Students and Faculty
Frequent student-faculty contact in and out of classes is the most important factor in student motivation and involvement. Faculty concern helps students get through rough times and keep on working.


Since my students are spread across the world (see the image to the right) and the semester ended last week, a face-to-face chat isn’t going to happen. With 48 students to contact I’m not feeling up to playing phone tag with that number of students. I don’t have easy access to the mobile phone numbers of these students, nor do I have access to any way to send text messages to these students that doesn’t involve the use of my personal phone. An announcement on the course news forum doesn’t provide the type of individual contact I’d prefer and there’s a question about how many students would actually see such an announcement (semester ended last week).

This leaves email as the method I use. The next challenge is getting the email addresses of the students who haven’t submitted AND don’t have extensions.

The Moodle assignment activity provides a range of ways to filter the list of all the students. One of those filters is “Not submitted”. The problem with this filter is that there’s no way (I can see) to exclude those that have been given an extension. In this case, that means I get a list of 126 students. I need to ignore 78 of these and grab the email addresses of 48 of them.

Doing this manually would just be silly. Hence I save the web pages produced by the Moodle assignment activity onto my laptop and run a Perl script that I’ve written which parses the content and displays the names and email addresses of the students without extensions.

Another approach would have been to use the grading worksheet (a CSV file) I used above. But I’ve gone down the HTML parsing route because I’ve already got a collection of Perl scripts parsing Moodle HTML files due to a range of other screen scraping tasks I’ve been doing for other reasons.
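
For illustration, the following is a rough sketch of that worksheet-based alternative (not the Perl scripts I actually use). It assumes non-submissions can be spotted by the absence of a “Submitted for grading” status, that extensions are flagged by the “Extension granted” text mentioned above, and that each row includes the student’s email address; treat those details of the worksheet format as assumptions.

```javascript
// Rough sketch: list students who have not submitted and have no extension,
// working from the downloaded grading worksheet (CSV). The status strings are
// assumptions about the worksheet format.
const fs = require("fs");

const rows = fs.readFileSync("grading_worksheet.csv", "utf8")
  .split("\n")
  .slice(1)                                   // drop the header row
  .filter(row => row.trim());                 // drop empty lines

const toContact = rows.filter(row =>
  !/Submitted for grading/i.test(row) &&      // hasn't submitted ...
  !/Extension granted/i.test(row)             // ... and has no extension
);

// pull an email address out of each remaining row
toContact.forEach(row => {
  const email = (row.match(/[\w.+-]+@[\w.-]+/) || [])[0];
  console.log(email || row);
});
```

A proper version would use a real CSV parser (fields such as the extension text can contain commas and line breaks), but the filtering logic is the point.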

Excluding the won’t submits

I now have the list of students who haven’t submitted and don’t have extensions. But wait, there’s more. There are also some students I know who will, for a variety of reasons, never submit. If possible, I’d prefer not to annoy them by sending them an email about them not submitting Assignment 3.

This information is not in any database. It’s mostly a collection of email messages from various sources stored in the massive 2Gb of space the institution provides for email. I have to manually search through those to find the “won’t submits”.

Send the email

Now it’s time to send the email. In a perfect world I would like to send a personalised email message. A message that includes the student’s name and perhaps other details about their participation in the course. Moodle doesn’t appear to provide an email merge facility. In theory Office provides some functionality this way but I use a Mac and the Office stuff never seems to work easily on the Mac (and I’m biased against Office).

So I don’t send out a personalised email. Just the one email message to these specific students but with only generic content. Many still appear to appreciate the practice. For example, this is a response from one of the students who received one of these emails for a prior assignment (emphasis added)

Thank you for contacting me in regards to the submission. You’re the first staff member to ever do that so I appreciate this a lot.

Some questions

Which makes me wonder how many teaching staff do something like this? Why/why not?

Of the staff who don’t do this, is that because

  1. They don’t think it’s important or appropriate for their course?
  2. They’ve never thought of doing it?
  3. It’s too difficult to do?

And, if it were easier, would they do it? What impact might this have?

Moodle is an open source project used by huge numbers of institutions across the world. In addition, over the last year or so my institution has spent some time customising the Moodle assignment activity. I’m fairly certain that I’m not the first person using Moodle to have wanted to contact students who haven’t submitted.

So why all the limitations in the affordances of the Assignment activity?

Types of e-learning projects

In some discussions with @beerc and @damoclarky we’ve identified five separate types of e-learning projects that an institution faces.

  1. External/system driven projects.

    Projects that have to be done because of changes in the external environment. e.g. there’s a new version of Moodle and we need to roll that out or we’ll fall behind and be running a non-supported version.

  2. Strategic projects approved by the institution.

    The institution has decided that a project is important and should be funded and used by the entire institution. e.g. the decision my institution made to modify the Moodle assignment activity in order to transition from a locally built system and not lose functionality.

    Note: there’s a line here between these projects and those below. Typically projects above this line are those that will be used by all (or perhaps most) of the institution.

  3. Projects that might scale, but waiting for them to happen creates problems.

    This is where parts of the institution recognise that there is a problem/need (e.g. the story above) that might scale to the entire institution. But the problem/need has not yet made it across the line above into a strategic project. Meaning that there is a period of time when people know they want to do something, but can’t. They have to wait for the scarce resources of the institution to be allocated.

    In these situations, a few people don’t wait. They develop workarounds like the above. If the need is particularly important, everyone develops their workarounds. Leading to large inefficiencies as the solution is re-created in numerous different ways.

  4. Projects that will only ever be of interest to a program or a particular set of courses.

    For example, all the courses in the Bachelor of Education might benefit from a single page application lesson template that is integrated with the Australian Curriculum. This isn’t something that any other set of courses is going to desire. But it’s possibly of great importance to the courses that do.

  5. Course or pedagogical design specific projects.

    These are projects that are specific to a particular pedagogical design. Perhaps unique to a single course. e.g. the “more student details” Greasemonkey script (see more recent screenshot below) that I’ve implemented for EDC3100. The pedagogical design for this course makes use of both Moodle’s activity completion facility and the BIM module.

    I’m willing to bet large amounts of money that my course is currently the only course that uses this particular combination. This specific version of the tool is unlikely to be valuable to other people. It won’t scale (though the principles behind it might). There’s no point in trying to scale this tool, but it provides real benefit to me and the students in my course.

[Screenshot: the “more student details” Greasemonkey script]

The problem of starvation

If you ask any University IT Director they will complain about the fact that they don’t have sufficient resources to keep existing systems running and effectively deal with project types #1 and #2 from the list above. The news is even worse for project types 3, 4 and 5.

#5 projects never get implemented at the institutional level. They only ever get done by “Freds-in-the-shed” like me.

#4 projects might get implemented at the institutional level, but typically only if the group of courses the project is for has the funds. If your degree has small numbers, then you’re probably going to have to do it yourself.

#3 projects might get implemented at the institutional level. But that does depend on the institution becoming aware of and recognising the importance of the project. This can take a loooong time, if it happens at all. Especially if the problem requires changes to a system used by other institutions. If it’s a commercial system it may never happen. But even with an open source system (like Moodle) it can take years. For example, Costello (2014) says the following about a problem with the Quiz system in Moodle (p. 2)

Despite the community reporting the issue to the Moodle developers, giving multiple votes for its resolution and the proposal of a fix, the issue had nonetheless languished for years unfixed.

and (p. 1)

Applying this patch to DCU’s own copy of Moodle was not an option for us however, as the University operated a strict policy of not allowing modifications, or local customisations, to its Moodle installation. Tinkering with a critical piece of the institutional infrastructure, even to fix a problem, was not an option.

Ramifications

I suggest that there are at least four broad results of this starvation of project types 3, 4 and 5

  1. The quality of e-learning is constrained.

    Without the ability to implement projects specific to their context, people bumble along with systems that are inappropriate. The systems limit the quality of e-learning.

  2. Feral or shadow systems are developed.

    The small number of people who can, develop their own solutions to these projects.

  3. Wasted time.

    The people developing these solutions are typically completing tasks outside their normal responsibilities. They are wasting their time. In addition, because these systems are designed for their own particular contexts it is difficult to share them with other people who may wish to use them. Either because other people don’t know that they exist, or because they use a unique combination of technologies/practices no-one else uses. This is a particular problem for project types 3 and 4.

  4. Lost opportunity for innovation and strategic advantage.

    Some of these project types have the potential to be of strategic advantage, but due to their feral nature they are never known about or can’t be easily shared.

So what?

My argument is that if institutions want to radically improve the quality of their e-learning, then they have to find ways to increase the capacity of the organisation to support all five project types. It’s necessary to recognise that supporting all five project types can’t be done by using existing processes, technologies and organisational structures.

Responses

I sent the email to the students who hadn’t submitted Assignment 3 at 8:51 this morning. It’s just over two hours later. In that time, 8 of the 36 students have responded.

References

Costello, E. (2014). Participatory Practices in Open Source Educational Software : The Case of the Moodle Bug Tracker Community. University of Dublin. Retrieved from http://www.tara.tcd.ie/handle/2262/71751

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Import/export ePubs into the Moodle book module

One of the likely aims of the Moodle “open” book project that interested me was the ability to export Book modules to ePub format. A capability I think would be seen as valuable for students in my course, given the heavy use of the book module. In developing the Moodlemoot’AU 2015 session on the project I’ve been trawling the Moodle community forums for discussions about the Book module, and this type of function has certainly been mentioned as desirable by others.

Then last night I stumbled across this post from Saturday in which someone has done it. They have written some add-ons that will not only export to the ePub format, but also allow the import of ePubs. There are some discussions in the tracker about this that I may wish to look at again later. What follows is some initial experimentation with that code.

In summary, it looks like it works very well and is very promising. The biggest problem is that it is very finicky about the HTML, generating big red error boxes when the HTML (which works in the browser) doesn’t meet its standards. Could be very useful.

Wonder how difficult it would be to get it installed on the institutional Moodle?

Export

I’m going to start with the export tool as that’s my main interest. Instructions here

  1. Download the zip file.
  2. Install via Site Admin / Plugins…

    It is added as a “Tool” within the book. Note: perhaps a model for other functions?

    No problems.

  3. Test it with some of the EDC3100 books.

    Head into one of the EDC3100 books. And there is a new link under the “Book administration” menu – Download as ebook.

    Click on that link and an ePub file gets downloaded and opened in iBooks.

    Errors appear in the rendering. I’m assuming at the moment this is due to some of the HTML I’ve included.

    [Screenshot: ePub rendering error]

    Yep, the problem is a bit of poor HTML. It appears that the ePub export tool is quite a bit stricter about HTML than a browser. Suggestion: some sort of warning about HTML issues with book chapters might be useful.

A copy of the epub file produced during testing can be seen here, including a range of errors generated by less than stellar HTML.

Some points to ponder:

  • The reliance on strict HTML is going to be a barrier to many; more support from the tools is required to reduce this barrier.

    Some ideas of the type of support might include:

    • Some way for the book module to analyse and highlight (or fix) problem HTML prior to export to ePub (a rough sketch of this idea follows this list).
    • An option for the ePub export tool not to show the quite obvious errors.

      Some of the errors don’t appear to impact display. It’s the big red error box that impacts display. Being able to turn this off might be useful.

  • Some thought will need to be given to video and other non-book content.

    Page 28 of the ePub I produced above has a YouTube video to play when viewed in Moodle. In the ePub there is nothing (no link, no image, etc.) suggesting that the video should go there.

    The ePub does show links and other HTML, but not YouTube embeds. Automatically including something suggesting and linking to the video would be useful.

  • Missing ToC

    A feature of the Moodle book in both its online and print forms is that it produces a table of contents that helps navigation. The ePub export isn’t producing a ToC. One could be useful.

  • Bundling up multiple books into an ePub.

    As mentioned in the presentation abstract my course has 70+ books with 670+ chapters. One of the aims of the ePub export tool is to allow students to have offline copies of the books. At the moment the ePub export tool only works one book at a time. Meaning students would have to export each of the 70+ books individually.

    A tool that allows exporting collections of books to ePub (perhaps as a single ePub or as separate ePubs) would be useful, e.g. the ability to export all the books in a topic (or a course) into a single ePub.

    This is the same sort of tool that would be useful if a book search tool was implemented. The ability to search within a single book, all books in a topic, or all books in a course.

  • Confusion over “page” numbers.

    I can see the student questions now.

    I’m having problems understanding the last sentence on page 52 of the PKM and reflection book. Help!!!

    While the student has been reading the ePub (hence page 52), I’m more than likely to be only using the Moodle based versions of the book. Creating some confusion as we have to translate back and forth between page numbers.

  • Usability of the ePub format.

    I don’t have any real experience using the ePub format. So I wonder how useful it will be for students, i.e. how many have software (e.g. iBooks) that can read ePubs on the various devices they want to use?
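
Returning to the point above about strict HTML, the following is a rough sketch of one way a “check this chapter’s HTML” warning could work, assuming (as ePub/XHTML requires) that chapter content needs to be well-formed markup. It uses the browser’s DOMParser and is an illustration of the idea only, not part of the actual plugin.

```javascript
// Rough sketch: flag book chapter HTML that is not well-formed XML, which is
// the sort of thing the ePub export chokes on. Runs in the browser.
function checkChapterHtml(html) {
  const doc = new DOMParser().parseFromString(
    "<div>" + html + "</div>",        // wrap, since a chapter is an HTML fragment
    "application/xhtml+xml"           // strict XML parsing, roughly what ePub needs
  );
  const error = doc.querySelector("parsererror");
  return error ? error.textContent : null;   // null means it parsed cleanly
}

// e.g. warn before exporting
const problem = checkChapterHtml("<p>an unclosed paragraph");
if (problem) {
  console.warn("This chapter may not export cleanly: " + problem);
}
```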

Reading – Embracing Big Data in Complex Educational Systems: The Learning Analytics Imperative and the Policy Challenge

The following is a summary and ad hoc thoughts on Macfadyen et al (2014).

There’s much to like in the paper. But the basic premise I see in the paper is that the fix for the problems of the current inappropriate teleological processes used in institutional strategic planning and policy setting is an enhanced/adaptive teleological process. The impression I take from the paper is that it’s still missing the need for institutions to enable actors within them to make greater use of ateleological processes (see Clegg, 2002). Of course, Clegg goes on to do the obvious and develop a “dialectical approach to strategy” that merges the two extremes.

Is my characterisation of the adaptive models presented here appropriate?

I can see very strong connections between the arguments made in this paper about institutions and learning analytics and the reasons why I think e-learning is a bit like teenage sex.

But given the problems with “e-learning” (i.e. most of it isn’t much good in pedagogical terms) what does that say about the claim that we’re in an age of “big data” in education. If the pedagogy of most e-learning is questionable, is the data being gathered any use?

Conflating “piecemeal” and “implementation of new tools”

The abstract argues that there must be a shift “from assessment-for-accountability to assessment-for-learning” and suggests that it won’t be achieved “through piecemeal implementation of new tools”.

It seems to me that this is conflating two separate ideas. They are:

  1. piecemeal; and,

    i.e. unsystematic or partial measures. The claim is that it can’t happen bit-by-bit; instead it has to proceed at the whole-of-institution level. This is the necessary step in the argument that institutional change is (or must be) involved.

    One of the problems I have with this is that if you are thinking of educational institutions as complex adaptive systems, then that means they are the type of system where a small (i.e. piecemeal) change could potentially (but not always) have a large impact. In a complex system, a few very small well directed changes may have a large impact. Or alternatively, and picking up on ideas I’ve heard from Dave Snowden, implementing large numbers of very small projects and observing the outcomes may be the only effective way forward. By definition a complex system is one where being anything but piecemeal may be an exercise in futility, as you can never understand a complex system, let alone guess the likely impacts of proposed changes.

    The paper argues that systems of any type are stable and resistant to change. There’s support for this argument. I need to look for dissenting voices and evaluate.

  2. implementation of new tools.

    i.e. the build it and they will come approach won’t work. Which I think is the real problem and is indicative of the sort of simplistic planning processes that the paper argues against.

These are two very different ideas. I’d also argue that while these alone won’t enable the change, they are both necessary for the change. I’d also argue that institutional change (by itself) is unlikely to achieve the type of cultural change required. The argument presented in seeking to explain “Why e-learning is a bit like teenage sex” is essentially this: that institutional attempts to enable and encourage change in learning practice toward e-learning fail because they are too focused on institutional concerns (large scale strategic change) and not enough on enabling elements of piecemeal growth (i.e. bricolage).

The Reusability Paradox and “at scale”

I also wonder about considerations raised by the reusability paradox in connection with statements like (emphasis added) “learning analytics (LA) offer the possibility of implementing real-time assessment and feedback systems and processes at scale”. Can the “smart algorithms” of LA marry the opposite ends of the spectrum – pedagogical value and large scale reuse? Can the adaptive planning models bridge that gap?

Abstract

In the new era of big educational data, learning analytics (LA) offer the possibility of implementing real-time assessment and feedback systems and processes at scale that are focused on improvement of learning, development of self-regulated learning skills, and student success. However, to realize this promise, the necessary shifts in the culture, technological infrastructure, and teaching practices of higher education, from assessment-for-accountability to assessment-for-learning, cannot be achieved through piecemeal implementation of new tools. We propose here that the challenge of successful institutional change for learning analytics implementation is a wicked problem that calls for new adaptive forms of leadership, collaboration, policy development and strategic planning. Higher education institutions are best viewed as complex systems underpinned by policy, and we introduce two policy and planning frameworks developed for complex systems that may offer institutional teams practical guidance in their project of optimizing their educational systems with learning analytics.

Introduction

First para is a summary of all the arguments for learning analytics

  • awash in data (I’m questioning)
  • now have algorithms/methods that can extract useful stuff from the data
  • using these methods can help make sense of complex environments
  • education is increasingly complex – increasing learner diversity, reducing funding, increasing focus on quality and accountability, increasing competition
  • it’s no longer an option to use the data

It also includes a quote from a consulting company promoting FOMO/falling behind if you don’t use it. I wonder how many different fads they’ve said that about?

Second para explains what the article is about – “new adaptive policy and planning approaches….comprehensive development and implementation of policies to address LA challenges of learning design, leadership, institutional culture, data access and security, data privacy and ethical dilemmas, technology infrastructure, and a demonstrable gap in institutional LA skills and capacity”.

But based on the idea of Universities as complex adaptive systems. That “simplistic approaches to policy development are doomed to fail”.

Assessment practices: A wicked problem in a complex system

Assessment is important. Demonstrates impact – positive and negative – of policy. Assessment still seen too much as focused on accountability and not for learning. Diversity of stakeholders and concerns around assessment make substantial change hard.

“Assessment practice will continue to be intricately intertwined both with learning and with program accreditation and accountability measures.” (p. 18). NCLB is used as an example of the problems this creates, and Goodhart’s law gets a mention.

Picks up the on-going focus on “high-stakes snapshot testing” to provide comparative data. Mentions

Wall, Hursh and Rodgers (2014) have argued, on the other hand, that the perception that students, parents and educational leaders can only obtain useful comparative information about learning from systematized assessment is a false one.

But also suggests that learning analytics may offer a better approach – citing (Wiliam, 2010).

Identifies the need to improve assessment practices at the course level. Various references.

Touches on the difficulties in making these changes. Mentions wicked problems and touches on complex systems

As with all complex systems, even a subtle change may be perceived as difficult, and be resisted (Head & Alford, 2013).

But doesn’t pick up the alternate possibility that a subtle change that might not be seen as difficult could have large ramifications.

Learning analytics and assessment-for-learning

This paper is part of a special issue on LA and assessment. Mentions other papers that have shown the contribution LA can make to assessment.

Analytics can add distinct value to teaching and learning practice by providing greater insight into the student learning process to identify the impact of curriculum and learning strategies, while at the same time facilitating individual learner progress (p. 19)

The argument is that LA can help both assessment tasks: quality assurance, and learning improvement.

Technological components of the educational system and support of LA

The assumption is that there is a technological foundation for storing, managing, visualising and processing big educational data. Need for more than just the LMS. Need to mix it all up, hence “institutions are recognizing the need to re-assess the concept of teaching and learning space to encompass both physical and virtual locations, and adapt learning experiences to this new context (Thomas, 2010)” (p. 20). Add to that the rise of multiple devices etc.

Identifies the following requirements for LA tools (p. 21) – emphasis added

  1. Diverse and flexible data collection schemes: Tools need to adapt to increasing data sources, distributed in location, different in scope, and hosted in any platform.
  2. Simple connection with institutional objectives at different levels: information needs to be understood by stakeholders with no extra effort. Upper management needs insight connected with different organizational aspects than an educator. User-guided design is of the utmost importance in this area.
  3. Simple deployment of effective interventions, and an integrated and sustained overall refinement procedure allowing reflection

Some nice overlaps with the IRAC framework here.

It does raise interesting questions about what are institutional objectives? Even more importantly how easy it is or isn’t to identify what those are and what they mean at the various levels of the institution.

Interventions: An inset talks about the sociotechnical infrastructure for LA. It mentions the requirement for interventions (p. 21).

The third requirement for technology supporting learning analytics is that it can facilitate the deployment of so-called interventions, where intervention may mean any change or personalization introduced in the environment to support student success, and its relevance with respect to the context. This context may range from generic institutional policies, to pedagogical strategy in a course. Interventions at the level of institution have been already studied and deployed to address retention, attrition or graduation rate problems (Ferguson, 2012; Fritz, 2011; Tanes, Arnold, King, & Remnet, 2011). More comprehensive frameworks that widen the scope of interventions and adopt a more formal approach have been recently proposed, but much research is still needed in this area (Wise, 2014).

And then this (pp. 21-22) which contains numerous potential implications (emphasis added)

Educational institutions need technological solutions that are deployed in a context of continuous change, with an increasing variety of data sources, that convey the advantages in a simple way to stakeholders, and allow a connection with the underpinning pedagogical strategies.

But what happens when the pedagogical strategies are very, very limited?

Then makes this point as a segue into the next section (p. 22)

Foremost among these is the question of access to data, which needs must be widespread and open. Careful policy development is also necessary to ensure that assessment and analytics plans reflect the institution’s vision for teaching and strategic needs (and are not simply being embraced in a panic to be seen to be doing something with data), and that LA tools and approaches are embraced as a means of engaging stakeholders in discussion and facilitating change rather than as tools for measuring performance or the status quo.

The challenge: Bringing about institutional change in complex systems

“the real challenges of implementation are significant” (p. 22). The above identifies “only two of the several and interconnected socio-technical domains that need to be addressed by comprehensive institutional policy and strategic planning”

  1. influencing stakeholder understanding of assessment in education
  2. developing the necessary institutional technological infrastructure to support the undertaking

And this has to be done whilst attending to business as usual.

Hence it is not surprising that education lags other sectors in adopting analytics. Identifies barriers

  • lack of practical, technical and financial capacity to mine big data

    A statement from the consulting firm who also just happens to be in the market of selling services to help.

  • perceived need for expensive tools

Cites various studies showing education institutions stuck at gathering and basic reporting.

And of course even if you get it right…

There is recognition that even where technological competence and data exist, simple presentation of the facts (the potential power of analytics), no matter how accurate and authoritative, may not be enough to overcome institutional resistance (Macfadyen & Dawson, 2012; Young & Mendizabal, 2009).

Why policy matters for LA

Starts with establishing higher education institutions as a “superb example of complex adaptive systems” but then suggests that (p. 22)

policies are the critical driving forces that underpin complex and systemic institutional problems (Corvalán et al., 1999) and that shape perceptions of the nature of the problem(s) and acceptable solutions.

I struggle a bit with that observation and even more with this argument (p. 22)

we argue that it is therefore only through implementation of planning processes driven by new policies that institutional change can come about.

Expands on the notion of CAS and wicked problems. Makes this interesting point

Like all complex systems, educational systems are very stable, and resistant to change. They are resilient in the face of perturbation, and exist far from equilibrium, requiring a constant input of energy to maintain system organization (see Capra, 1996). As a result, and in spite of being organizations whose business is research and education, simple provision of new information to leaders and stakeholders is typically insufficient to bring about systemic institutional change.

Now talks about the problems more specific to LA and the “lack of data-driven mind-set” from senior management. Links this to an earlier example of institutional research being used to inform institutional change (McIntosh, 1979) and to a paper by Ferguson applying those findings to LA. From there and other places, the factors identified include:

  • academics don’t want to act on findings from other disciplines;
  • disagreements over qualitative vs quantitative approaches;
  • researchers and decision makers speak different languages;
  • lack of familiarity with statistical methods;
  • data not presented/explained to decision makers well enough;
  • researchers tend to hedge and qualify conclusions; and
  • valorization of education/faculty autonomy and resistance to any administrative efforts perceived to interfere with T&L practice.

Social marketing and change management are drawn upon to suggest that “social and cultural change” isn’t brought about simply by giving access to data – “scientific analyses and technical rationality are insufficient mechanisms for understanding and solving complex problems” (p. 23). Returns to

what is needed are comprehensive policy and planning frameworks to address not simply the perceived shortfalls in technological tools and data management, but the cultural and capacity gaps that are the true strategic issues (Norris & Baer, 2013).

Policy and planning approaches for wicked problems in complex systems

Sets about defining policy. Includes this which resonates with me

Contemporary critics from the planning and design fields argue, however, that these classic, top–down, expert–driven (and mostly corporate) policy and planning models are based on a poor and homogenous representation of social systems mismatched with our contemporary pluralistic societies, and that implementation of such simplistic policy and planning models undermines chances of success (for review, see Head & Alford, 2013).

Draws on wicked problem literature to expand on this. Then onto systems theory.

And this is where the argument about piecemeal growth being insufficient arises (p. 24)

These observations not only illuminate why piecemeal attempts to effect change in educational systems are typically ineffective, but also explains why no one–size–fits–all prescriptive approach to policy and strategy development for educational change is available or even possible.

and perhaps more interestingly

Usable policy frameworks will not be those which offer a to do list of, for example, steps in learning analytics implementation. Instead, successful frameworks will be those which guide leaders and participants in exploring and understanding the structures and many interrelationships within their own complex system, and identifying points where intervention in their own system will be necessary in order to bring about change

One thought is whether this idea is a view that strikes “management” as “researchers hedging their bets” – one of the problems mentioned above.

Moves on to talk about “adaptive management strategies” (Head and Alford, 2013), which offer new means for policy and planning that “can allow institutions to respond flexibly to ever-changing social and institutional contexts and challenges”. These talk about:

  • role of cross-institutional collaboration
  • new forms of leadership
  • development of enabling structures and processes (budgeting, finance, HR etc)

Interesting that notions of technology don’t get a mention.

Two “sample policy and planning models” are discussed.

  1. Rapid Outcome Mapping Approach (ROMA) – from international development

    “focused on evidence-based policy change”. An iterative model. I wonder about this

    Importantly, the ROMA process begins with a systematic effort at mapping institutional context (for which these authors offer a range of tools and frameworks) – the people, political structures, policies, institutions and processes that may help or hinder change.

    Perhaps a step up, but isn’t this still big up-front design? It assumes you can actually do this mapping. But then some is better than none?

    Apparently this approach is used more extensively in Ferguson et al. (2014).

  2. “cause-effect framework” – DPSEEA framework

    Driving force, Pressure, State, Exposure, Effect (DPSEEA) – a way of identifying linkages between the forces underpinning complex systems.

Ferguson et al. (2014) apparently show that “apparently successful institutional policy and planning processes have pursued change management approaches that map well to such frameworks”. So the processes were not yet informed by the frameworks? Of course, there’s always the question of the people driving those processes reporting on their own work.

I do like this quote (p. 25)

To paraphrase Head and Alford (2013), when it comes to wicked problems in complex systems, there is no one– size–fits–all policy solution, and there is no plan that is not provisional.

References

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting learning analytics in context: Overcoming the barriers to large-scale adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Analysing Moodle community forum discussions about the Moodle book module

As part of the “Moodle open book” project I’m hoping to increase my knowledge of what the Moodle community has already discussed about the Book module. The following is a summary of the process I’m using to analyse those discussions.

Not finished, but here’s the story so far. Just over 2400 posts that appear to mention “book module” have been extracted from the Moodle community forums. About 250 posts (very roughly) have been coded so far. The following is a very early summary of the features discussed in those posts:

  • 43 – navigation and interface
  • 33 – export and import
  • 15 – printing
  • 13 – integrating activities (mostly quizzes) into the midst of the book.
  • 6 – page visibility
  • 3 – version control

Though a little interesting, I wouldn’t read too much into those figures yet. There are some more statistics on the 2400+ posts below.

Obtain the data

The process for obtaining the data was

  1. Global search for “book module”.

    Use the “Search forum” functionality in the “Moodle in English” community to search for posts that mentioned “book module”. This gave 144 pages of forum posts. These were then saved to my laptop.

  2. Get all the posts from the Book module forum.

    Got a copy of all the forum posts in the Book module forum.

Parse the data

I need to write a Perl script that will extract the useful information from those HTML files. A rough sketch of the sort of script involved follows the list of fields below.

The potentially useful data in this set includes

  • Post
    • the subject line for the post (parsed)
    • body of the post (parsed)
    • date string when posted (parsed)
  • Forum
    • link (parsed)
    • name (parsed)
  • Author
    • User id
    • Author name (parsed)
    • link to their profile (parsed)
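What follows is a minimal sketch of the kind of extraction script, not the actual script. The directory name, the regexes and the markup class names (e.g. the “subject” div and the user/view.php profile link assumed for standard Moodle forum pages) are all assumptions for illustration only.

    #!/usr/bin/perl
    # Rough sketch only: the directory name and the regexes/markup class names
    # are assumptions about the saved Moodle forum pages, not the real script.
    use strict;
    use warnings;

    my $dir = 'forum_pages';    # assumed location of the saved HTML pages

    for my $file ( glob("$dir/*.html") ) {
        open my $fh, '<:encoding(UTF-8)', $file or die "Can't open $file: $!";
        my $html = do { local $/; <$fh> };    # slurp the whole page
        close $fh;

        # Pull out each post's subject, author profile link and author name.
        # Assumes each post uses the usual Moodle forum markup: a "subject" div
        # followed (somewhere) by a link to the author's user/view.php profile.
        while ( $html =~ m{<div class="subject">(.*?)</div>.*?<a href="([^"]*user/view\.php[^"]*)"[^>]*>([^<]+)</a>}gs ) {
            my ( $subject, $profile, $author ) = ( $1, $2, $3 );
            print join( "\t", $file, $subject, $author, $profile ), "\n";
        }
    }

The post body, date and forum details would be pulled out in much the same way, with the records accumulated ready for the database step below.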

Stick it in (a) database table(s)

The next step is to have the script stick it all into a database table – moodle_book_forum_posts – and to ensure that there are no duplicates.

That appears to be working. Now to get all of the forum posts inserted.
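As a rough sketch only (the database name, credentials, column names and the unique index used to skip duplicates are all assumptions, not the actual schema), the insertion step looks something like this:

    #!/usr/bin/perl
    # Sketch of the insertion step. Connection details, column names and the
    # unique index that skips duplicates are assumptions.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'DBI:mysql:database=moodle_book;host=localhost',
                            'username', 'password', { RaiseError => 1 } );

    # INSERT IGNORE relies on a UNIQUE index (assumed to cover author link,
    # date posted and subject) so re-running the script adds no duplicates
    my $sth = $dbh->prepare(
        'INSERT IGNORE INTO moodle_book_forum_posts
           (subject, body, posted, forum_link, forum_name, author_name, author_link)
         VALUES (?, ?, ?, ?, ?, ?, ?)'
    );

    my @posts = ();    # hashrefs produced by the parsing step above
    for my $post (@posts) {
        $sth->execute( @{$post}{ qw(subject body posted forum_link forum_name
                                    author_name author_link) } );
    }
    $dbh->disconnect;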

Done. Some quick stats from SQL follow (a rough sketch of the sort of queries used appears after the list):

  1. 2442 forum posts
  2. 870 authors
  3. 146, 71, 41, 41, 41, 41, 36 – the number of posts (in descending order) by the most prolific authors.
  4. the posts are from 40 forums.
    As you would expect, most in the book forum.

    • Book – 1774 posts
    • General help – 143
    • General developer – 86
    • Themes – 46
    • General plugins – 38
    • Gradebook – 37

    The presence of the gradebook forum potentially points to the biggest flaw with the data so far, i.e. a search for “book module” may return posts that include “gradebook module” or similar. This will get ironed out in the later analysis.
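For the record, here is a sketch of the kind of queries behind those numbers. The column and database names are assumptions based on the fields listed earlier, not the actual table definition.

    #!/usr/bin/perl
    # Sketch only: column and database names are assumptions.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'DBI:mysql:database=moodle_book;host=localhost',
                            'username', 'password', { RaiseError => 1 } );

    my ($posts)   = $dbh->selectrow_array(
        'SELECT COUNT(*) FROM moodle_book_forum_posts');
    my ($authors) = $dbh->selectrow_array(
        'SELECT COUNT(DISTINCT author_link) FROM moodle_book_forum_posts');
    print "$posts posts from $authors authors\n";

    # Posts per forum, most active forums first
    my $per_forum = $dbh->selectall_arrayref(
        'SELECT forum_name, COUNT(*) FROM moodle_book_forum_posts
          GROUP BY forum_name ORDER BY COUNT(*) DESC');
    printf "%-30s %d\n", @{$_} for @{$per_forum};

    $dbh->disconnect;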

Content analysis – into NVivo

The plan is to use NVivo to do a content analysis on the posts. The aim is to identify the nature of the posts about the Book module, i.e. are they how-to questions, bug reports, feature requests, etc. As part of that, what types of features have been requested and when.

The plan was to import the data from the database, but apparently the Mac version of NVivo cannot import data from a database, meaning I need to go via a spreadsheet/CSV file.

Sadly, NVivo seems a little constrained, e.g. you can’t add to or change a dataset.

But at least Perl and WriteExcel provide some flexibility.

Of course, it appears that I have to load the Excel file produced by Perl into Excel and then save it from Excel before NVivo will import it properly.
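To make that step concrete, here is a rough sketch of going from the database to a spreadsheet with Spreadsheet::WriteExcel. The file name, worksheet name, column choices and connection details are assumptions, not the actual export script.

    #!/usr/bin/perl
    # Sketch of exporting the posts to an Excel file for NVivo to import.
    # Database details, file name and column layout are assumptions.
    use strict;
    use warnings;
    use DBI;
    use Spreadsheet::WriteExcel;

    my $dbh = DBI->connect( 'DBI:mysql:database=moodle_book;host=localhost',
                            'username', 'password', { RaiseError => 1 } );

    my $workbook  = Spreadsheet::WriteExcel->new('book_forum_posts.xls');
    my $worksheet = $workbook->add_worksheet('posts');

    my @columns = qw(subject body posted forum_name author_name);
    $worksheet->write( 0, $_, $columns[$_] ) for 0 .. $#columns;    # header row

    my $sth = $dbh->prepare(
        'SELECT subject, body, posted, forum_name, author_name
           FROM moodle_book_forum_posts' );
    $sth->execute;

    my $row = 1;
    while ( my @fields = $sth->fetchrow_array ) {
        $worksheet->write( $row, $_, $fields[$_] ) for 0 .. $#fields;
        $row++;
    }

    $workbook->close;
    $dbh->disconnect;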

Initial analysis with NVivo

For the first run through I think I’ll use these nodes:

  • Book or NotBook – to indicate whether a post is related to the Book module.
  • NewFeature – something to do with a new feature
    • Request – asking for a new feature
    • Announce – announcing a new feature
  • Bug – indicates a bug has been identified
    • Request – asking for help with a bug
    • Announce – announcing a fix for a bug
  • Help – getting help with using the Book
    • Request – asking for help
    • Announce – answering pleas for help

Each of the book-related nodes will have child nodes indicating what is being helped with, e.g. export, import, navigation, authoring, permissions, display. I wonder if there’s a list of these already.

It’s taking a while to do this coding. Pity about the absence of decent keyboard shortcuts in NVivo.

I will probably need to revisit these categories, as there are a few where the distinction is questionable, e.g. export/print, bug/new feature.

The four paths for implementing learning analytics and enhancing the quality of learning and teaching

The following is a place holder for two presentations that are related. They are:

  1. “Four paths for learning analytics: Moving beyond a management fashion”; and,

    An extension of Beer et al (2014) (e.g. there are four paths now, rather than three) that’s been accepted to Moodlemoot’AU 2015.

  2. “The four paths for implementing learning analytics and enhancing the quality of learning and teaching”;

    A USQ research seminar that is partly a warm up for the Moot presentation, but also an early attempt to extend the 4 paths idea beyond learning analytics and into broader institutional attempts to improve learning and teaching.

Eventually the slides and other resources from the presentations will show up here. What follows is the abstract for the second talk.

Slides for the MootAU15 presentation

Only 15 minutes for this talk, so I tried to distill the key messages. Thanks to @catspyjamasnz the talk was captured on Periscope.

Slides for the USQ talk

Had the luxury of an hour for this talk. Perhaps too verbose.

Abstract

Baskerville and Myers (2009) define a management fashion as “a relatively transitory belief that a certain management technique leads rational management progress” (p. 647). Maddux and Cummings (2004) observe that “education has always been particularly susceptible to short-lived, fashionable movements that come suddenly into vogue, generate brief but intense enthusiasm and optimism, and fall quickly into disrepute and abandonment” (p. 511). Over recent years learning analytics has been looming as one of the more prominent fashionable movements in educational technology, illustrated by the apparent engagement of every institution and vendor in some project badged with the label learning analytics. If these organisations hope to successfully harness learning analytics to address the challenges facing higher education, then it is important to move beyond the slavish adoption of the latest fashion and aim for more mindful innovation.

Building on an earlier paper (Beer, Tickner, & Jones, 2014) this session will provide a conceptual framework to aid in moving learning analytics projects beyond mere fashion. The session will identify, characterize, and explain the importance of four possible paths for learning analytics: “do it to” teachers; “do it for” teachers; “do it with” teachers; and, teachers “DIY”. Each path will be illustrated with concrete examples of learning analytics projects from a number of universities. Each of these example projects will be analysed using the IRAC framework (Jones, Beer, & Clark, 2013) and other lenses. That analysis will be used to identify the relative strengths, weaknesses, and requirements of each of the four paths. The analysis will also be used to derive implications for the decision-makers, developers, instructional designers, teachers, and other stakeholders involved in both learning analytics, and learning and teaching.

It will be argued that learning analytics projects that follow only one of the four paths are those most likely to be doomed to mere fashion, and that moving a learning analytics project beyond mere fashion will require a much greater focus on the “do it with” and “DIY” paths. This observation is particularly troubling when almost all organizational learning analytics projects appear focused primarily on either the “do it to” or “do it for” paths.

Lastly, the possibility of connections between this argument and the broader problem of enhancing the quality of learning and teaching will be explored. Which paths are used by institutional attempts to improve learning and teaching? Do the paths used by institutions inherently limit the amount and types of improvements that are possible? What implications might this have for both research and practice?

References

Baskerville, R. L., & Myers, M. D. (2009). Fashion waves in information systems research and practice. MIS Quarterly, 33(4), 647–662.

Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond: Moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242–250).

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Maddux, C., & Cummings, R. (2004). Fad, fashion, and the weak role of theory and research in information technology in education. Journal of Technology and Teacher Education, 12(4), 511–533.
