Assembling the heterogeneous elements for (digital) learning

Month: January 2011

bim2 – Registering a new student feed

Apart from the many todos, the last post covering bim2 development left off at the task of registering a new student feed. Summarising/recording the development of bim2 to complete that task is the purpose of this post.

Finally getting back into bim2 development (30 Jan); this post has been sitting dormant for weeks.

What has to happen

The process for bim, which I’d like to re-create in bim2, goes something like this

  1. The student submits the URL for a blog or a feed;
  2. Error messages and advice are displayed if the URL is not a URL, can't be retrieved, or a valid ATOM/RSS feed can't be obtained from it;
  3. For a valid URL, the feed is retrieved and cached;
  4. The posts in the feed are compared with the questions set for the activity;
  5. The bim_marking table is updated with any posts that match; and
  6. The bim_student_feeds table is updated with the details of the feed.

In bim this was done with a bim-specific version of SimplePie. Moodle 2 includes SimplePie and also provides a wrapper around it that is used by the Moodle external blog functionality. bim2 should use this wrapper as much as possible.
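To make the steps concrete, here's a very rough sketch of the flow I have in mind. It is pseudo-code more than anything: bim_retrieve_feed, bim_match_post_to_question and bim_update_marking are placeholders for code that doesn't exist yet, and the language string and the field names on bim_student_feeds are guesses.

[code language="php"]
// A sketch only; the bim_* helpers, language string and field names are placeholders/guesses.
function bim_register_student_feed($bim, $userid, $url) {
    global $DB;

    // Steps 1 & 2: validate the URL and try to obtain a feed from it
    $feed = bim_retrieve_feed($url);   // placeholder: will wrap moodle_simplepie
    if ($feed === false) {
        return get_string('feedisinvalid', 'bim');   // error/advice back to the student
    }

    // Step 3: the retrieved feed is cached by SimplePie itself

    // Steps 4 & 5: compare posts in the feed with the activity's questions
    foreach ($feed->get_items() as $post) {
        $question = bim_match_post_to_question($post, $bim);   // placeholder
        if ($question) {
            bim_update_marking($bim, $userid, $question, $post);   // placeholder
        }
    }

    // Step 6: record the details of the feed against the student
    $record = new stdClass();
    $record->bim = $bim->id;                    // guessed field names
    $record->userid = $userid;
    $record->feedurl = $feed->subscribe_url();
    $DB->insert_record('bim_student_feeds', $record);

    return true;
}
[/code]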

How does the Moodle 2 simplepie wrapper work?

It’s located in ~/lib/simplepie/ and implemented as a class called moodle_simplepie. Methods include

  • a constructor that takes a feed URL
  • get_cache_directory and reset_cache
    By default this is a central cache; I wonder if bim should change this to a course-specific/bim-specific cache?

There's another class, moodle_simplepie_file; I'm wondering if this is the one to actually use, since it knows about Moodle's version of curl. Its methods include

  • constructor;
    Takes a url, timeout value, redirects, headers, useragen…

I wasn't aware of it, but SimplePie does have a class (or two) which the above two extend.

Some sample code from Moodle and its external blog feature follows.

[code language="php"]
$rssfile = new moodle_simplepie_file($data['url']);
$filetest = new SimplePie_Locator($rssfile);

if (!$filetest->is_feed($rssfile)) {
    $errors['url'] = get_string('feedisinvalid', 'blog');
} else {
    $rss = new moodle_simplepie($data['url']);
    if (!$rss->init()) {
        $errors['url'] = get_string('emptyrssfeed', 'blog');
    }
}
[/code]

SimplePie_Locator is being used to test if there is a valid feed. It appears that moodle_simplepie_file might do some auto-detection. Should check that.

No, it doesn't. It assumes that the URL is for the RSS feed rather than using SimplePie's autodetection.

Now, if I use moodle_simplepie, instead of moodle_simplepie_file, there is the possibility of getting a feed. However, it seems to be getting the wrong one. In my testing I am using this blog, and instead of the posts feed, moodle_simplepie is returning the comments feed.

Does this happen if I use SimplePie directly? No; if I use the version of SimplePie included with Moodle 2 correctly, I can auto-detect.

[code language="php"]
$url = 'http://davidtjones.wordpress.com';

//$rssfile = new moodle_simplepie($url);
$rssfile = new SimplePie();
$rssfile->set_feed_url($url);
$rssfile->init();
print "<h1> feed is " . $rssfile->subscribe_url() . "</h1>";
[/code]

This gives the appropriate output “feed is https://djon.es/blog/feed/” (though it also gives a couple of warnings about the cache). Change it to
[code language="php"]
$url = 'http://davidtjones.wordpress.com';

$rssfile = new moodle_simplepie($url);
print "<h1> feed is " . $rssfile->subscribe_url() . "</h1>";
[/code]

And I'm getting “feed is https://djon.es/blog/feed/”. This isn't right.

Okay, I'm getting an error at the moment with the operation timing out; the joys of a slow network connection. And that's the problem. There's still a problem with moodle_simplepie_file, though; nothing really explains the difference.

Re-starting

Okay, a few weeks have gone by while I finished the thesis etc. Time to get back into it again. During the break, I did hear via a tweet that the RSS client block does do auto-discovery. So that gives another place to look for example code.

The relevant code looks like this:
[code language="php"]
$rss = new moodle_simplepie();
// set timeout for longer than normal to try and grab the feed
$rss->set_timeout(10);
$rss->set_feed_url($data['url']);
$rss->set_autodiscovery_cache_duration(0);
$rss->set_autodiscovery_level(SIMPLEPIE_LOCATOR_NONE);
$rss->init();
[/code]

Actually, that's not quite what I need. Instead of SIMPLEPIE_LOCATOR_NONE it should be SIMPLEPIE_LOCATOR_ALL, and that works.
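For the record, a minimal sketch of the working version (assuming the same $data['url'] as in the snippet above):

[code language="php"]
$rss = new moodle_simplepie();
// set timeout for longer than normal to try and grab the feed
$rss->set_timeout(10);
$rss->set_feed_url($data['url']);
$rss->set_autodiscovery_cache_duration(0);
// _ALL rather than _NONE so that autodiscovery actually happens
$rss->set_autodiscovery_level(SIMPLEPIE_LOCATOR_ALL);
$rss->init();
print "<h1> feed is " . $rss->subscribe_url() . "</h1>";
[/code]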

Gotta love it when a plan comes together. Now to remove the debug stuff I stuck in the Moodle simplepie code.

Using this in bim2

Now to figure out how this should work within bim2. It's been a while since I've looked at this code, so this should prove an interesting test. Ahh, surprisingly painless. I've started work on a new_student_feed class; a rough sketch of its current shape follows.
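The method names and internals below are my own guesses at this point; the only settled part is the use of moodle_simplepie with autodiscovery turned on.

[code language="php"]
class new_student_feed {

    private $url;   // the URL the student submitted
    private $rss;   // moodle_simplepie instance once retrieved

    public function __construct($url) {
        $this->url = $url;
    }

    // Try to retrieve the feed, autodiscovering it from a blog URL if necessary.
    // Returns true on success, false if no valid feed could be found.
    public function retrieve() {
        $this->rss = new moodle_simplepie();
        $this->rss->set_timeout(10);
        $this->rss->set_feed_url($this->url);
        $this->rss->set_autodiscovery_level(SIMPLEPIE_LOCATOR_ALL);
        return $this->rss->init();
    }

    // The URL of the feed that was actually discovered.
    public function feed_url() {
        return $this->rss->subscribe_url();
    }

    // The posts in the feed, ready to be compared against the activity's questions.
    public function posts() {
        return $this->rss->get_items();
    }
}
[/code]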

Okay, that's working. The feed is being found and bim2 is able to manipulate the feed using essentially the same SimplePie functions as bim did.

This means I can start a new post aimed solely at the bim2 aspect.

My god, is it done?

After more years than I care to count, almost as many structures, and many, many more plans and timetables, the thesis is just about done. I have just finished stuffing around with Word and have produced a single PDF that will almost certainly be the version that is submitted.

All that is left is to figure out how to submit from a distance and sit back and wait for the judgement of the examiners.

Not entirely certain what to make of this milestone, it just seems to be yet another step in the on-going denouement of the thesis and all its intricacies. There is a sense of palpable relief in having reached this stage. There remains, however, an on-going niggle of uncertainty about whether or not Word (or my own carelessness) has inserted some enormous blunder in the middle of the thesis. There’s the small bit of fear that one of the examiners will turn out to be a mongrel. But mostly there is relief and a need to go have a good lie down.

There is also some recognition that I should knuckle down and publish from the thesis and its contents. There are at least two good journal articles waiting to be written. Two good journal articles that are likely going to have to continue waiting. Mostly because there is also significant anticipation arising from all the (non-academic) activities that now become possible as the Sword of Damocles that was the PhD has been removed.

First step, a couple of days with the better half in Melbourne next week. With the sole intent of eating, drinking and being merry.

Problems of service provision and why can't I have a personalised class timetable?

The next step in my journey as a full-time Uni student happened today when I saw a notice announcing the draft class timetable. As a result I offer some commentary on the problems of “service” provision as the metaphor for many modern universities/organisations.

A personalised timetable

As with most universities, the one I'm studying at has spent a significant amount of money on an ERP, in this case PeopleSoft. This is the system that knows what courses I'm enrolled in and is used to manage just about every administrative aspect of what I do (pay money, exam timetable etc.). But, at least at the institution I'm enrolled in, it can't generate a class timetable. Well, there's something called a “class schedule”, but it's empty.

I remember back in the late 80s as an undergraduate trawling through the UQ handbook manually trying to find the times and locations for the classes for the courses I was enrolled in. On the plus side, UQ had the timetable set in stone with sufficient lead time to produce a printed handbook. The current institution has a draft timetable available a few weeks before the start of term.

Given these wonderfully expensive ERP systems, it would make sense to me that I could log in to some system with my student number and the information system could find out what courses I'm enrolled in, compare that with the timetable information and generate a personal timetable for me. Such a system would be easy to use, save all the students time and probably reduce instances of human error.

What I have to do

Instead of a personalised timetable system, I have to

  • Visit the institutional timetabling site;
  • Pick the right link for the 2011 draft timetable;
  • Pick the link for my campus;
  • Pick the link for the faculty to which the program I'm studying belongs (assuming that I'm aware of this information); and
    Of course, I'm in trouble if I've enrolled in an elective that is run by the other faculty.
  • Manually search through a web page containing timetable information for 145 courses to find the four courses I'm enrolled in.
    This assumes that I know the course codes for the courses I'm enrolled in. Now I have to go look those up. Oh dear, all face-to-face sessions are on Mondays, 9 to 6 with an hour's break for lunch; that should be fun.

Moving backwards

The funny thing is that the institution had a personalised timetable system around late 1999/early 2000. I should know, I helped write it and published a description of it in a paper explaining how to thrive with an ERP.

And it's not as if that system couldn't still work. The institution is still using, for other purposes, the broader system that the personalised timetable was a part of, and the personalised timetable system was designed to scrape data from a web page, just like the one I had to manually trawl through above. It was smart enough to automatically update the timetable as the draft changed.

Why isn’t there one?

I was going to write something new about why the institution doesn’t have a personalised timetable system for students, but have decided to start with the explanation I gave in 2003

  • Mismatch between system owner and users.
    The system owner of the CQ timetabling system is CQU’s student administration division. Their major timetabling role is managing the allocation of space and time. Distributing this information to staff and students is a secondary smaller task of less importance. As a result the choice and use of the supporting information system is driven more by the requirements of the management role than the distribution role.
  • Organisational Silos.
    Contrary to CQU’s “one University” approach there is significant distance between CQU’s commercial and CQ campuses. There is even distance between the two largest commercial campuses, Sydney and Melbourne, and their smaller cousins at Brisbane and the Gold Coast.
  • Organisational Holes.
    There is no central software developer allocated to helping support divisions like student administration support and implement systems like timetabling (unless they hire their own). Instead most rely solely on the features of commercial packages that are known for their inability to integrate with other systems.
  • “Bad” technology.
    The software used on the CQ campuses is not designed to integrate with other software and offers limited support for the distribution of timetabling information. The system used at the commercial campuses is based on infrastructure that does not scale well.

Some of the details will have changed, but the categories are about right. Today, I would probably add in

  • Misaligned governance structures.
    The apparently rational and logical governance structure that guides information systems development at the institution is biased toward the senior managers located within the existing organisational structure.

The problems of service provision

Which brings me to the problems of service provision. A big part of the governance structures that have arisen within higher education institutions is based around the idea of service provision. That is, the faculties – the large organisational gatherings of academics – are clients of the service divisions (information technology, library, student administration, central L&T etc.). The aim of the service divisions is to provide the services requested by clients, and only those services. It's generally the role of the governance structure to determine this process.

But there are problems, including:

  • The dumb idea;
    This is the situation where the professionals within the service division know that the service requested by the client is a “dumb idea”. But their job is not to question; theirs is to provide the service.
    For example, the situation where a senior academic leader will require a central L&T organisation to expend vast amounts of time and resources on a capstone course that has a design which requires students to write thousands of words of prose each week. The capstone course is delivered primarily to non-English speaking background students.
  • Gaming the client;
    The dumb idea problem and a range of other factors mean that the service divisions have to start “gaming the client”. One example of this is the under-promise and over-deliver tactic, i.e. when an important senior manager asks that job X be completed, the appropriate response is to explain how difficult and expensive job X will be to complete. This generally involves lots of wailing and gnashing of teeth before finally agreeing to try, but reinforcing that it probably can't be done. At which stage the service provider assigns one of the junior programmers the 10 minutes required to complete the task. Once complete, and at an appropriate time and in an appropriate manner, the impossible task is revealed as completed.

    There is a reason why experienced service provision divisions employ special “client liaisons” who have significant similarity with used car salesmen.

  • Blame the budget;
    This one generally occurs in the 2nd half of the budget year and involves a great deal of agreement about the value and importance of the requested task before explaining how it could be done, if only we had the resources/money. This is typically followed by the suggestion of getting together to generate a joint proposal for the next budget to ensure that the necessary resources are available. The success of this budget proposal ensures the purchase of the machine that goes ping.
  • The next version will do that;
    This is a special, prevalent example of “blame the budget” usually invoked with enterprise information systems. In these situations it is usually considered inappropriate to mention that simply because the next version will have this feature, that doesn’t mean the organisation will ever be able to complete the tasks necessary for the feature to be usable. For example, the “class schedule” feature in the ERP at the start of this post.
  • The thicko client liaison;
    This is where the service provider’s representative in the governance structure (either a client service manager or the head of the service provider, depending on how many other senior institutional managers are in the room) says “yea” or “nay” to some request without realising that the request is either utterly impossible (always when they said “yea, we can do that”) or is almost trivially simple (always when they said “no, that can’t be done”).

    This is why client liaison folk absolutely hate having the technical expert in the room with clients. They inject too much knowledge into the conversation.

  • Fall between the cracks;
    Since representation in the governance structure is based on the organisational structure and limited to appropriately senior folk, there are significant problems and opportunities that are never seen and fall between the cracks. This is the problem which I think the personalised timetable above suffers from.
  • Everyone is different;
    If a service provider has to deal with x clients, then every “widget” the service provider puts in place will have x versions, one for each client. One potential example is the case of 2 student handbooks (where 2 equals the number of faculties) I mentioned previously.
  • If all you have is a hammer, everything looks like a nail.
    This is a particular problem when it comes to fads and fashions. For example, if the current fad is e-portfolios or iPads, then every time a client asks the central L&T division for advice, there’s a chance the answer will be “e-portfolios” or “iPads”. In the case of IT divisions the answer will almost always be one of: ERP, CRM, LMS, Data warehouse, student portal, staff portal etc.

What are the problems I’ve missed?

Conclusions

The above argument is not to suggest that “service” divisions shouldn’t aim to perform the tasks required by their clients. I lived through an organisation where one of the service divisions dictated too much. It’s a worse situation.

But it is to argue that the “client/service” metaphor is far from perfect. It creates a power differential that encourages, often even requires, the weaker party (usually the service provider) to work around the stronger party. It is to argue that the focus should be on developing better metaphors (e.g. teamwork, partnership), while at the same time realising that nothing will ever be perfect and it will require hard work, collaboration and a good dose of cynicism.

The state of educational data mining in 2009

The following is a summary and some initial reflections on the paper

Baker, R.S.J.D., & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1(1). http://www.educationaldatamining.org/JEDM/images/articles/vol1/issue1/JEDMVol1Issue1_BakerYacef.pdf

It’s another reading for the first week of the LAK11 MOOC.

The format I use for these posts is that the overview section is essentially my summary/reflections on the paper. The rest of the sections are my potted summary of each of the sections of the paper.

EDM == Educational Data Mining

Disclaimer: I let this post stew unfinished for a couple of weeks while I progressed the thesis. I'm posting it now, unfinished. Time to move on to more recent things.

Overview

The paper gives a good overview/feel for the field. As a high-level description it can't provide much detail about specific areas, but it does provide the references to go digging.

Abstract

  • Methodological profile of early EDM research compared with 2008/2009 research.
  • Trends and shifts include
    • increased emphasis on prediction;
    • emergence of attempts to use existing models to make scientific discoveries
    • reduction in frequency of relationship mining.
  • Examine 2 ways of categorising the diversity in EDM research.
  • Review research problems addressed by the methods
  • Lists and discusses the most cited EDM papers.

Introduction

The EDM field is growing: conferences, a new journal. Time to review.

What is EDM?

Definition from http://www.educationaldatamining.org/

Educational Data Mining is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.

Suggests EDM is different from data mining (references an in press publication of one of the authors)

due to the need to explicitly account for (and the opportunities to exploit) the multi-level hierarchy and non-independence in educational data.

Which means models drawn from psychometrics are often used in educational data mining.

Now, I don't know enough to be comfortable that I understand that, which means I should try and follow up on that publication.

Baker, R.S.J.D. (in press). Data mining for education (a pre-print version). In B. McGaw, P. Peterson & E. Baker (Eds.), International Encyclopedia of Education (3rd edition). Oxford, UK: Elsevier.

EDM Methods

EDM methods are drawn from a variety of fields. Two attempts to categorise the methods are introduced, but Baker (in press) is the one the paper goes with.

The percentages in brackets represent the percentage of EDM papers (1995-2005) using each method:

  1. Prediction (28%)
    • Classification
    • Regression
    • Density estimation
  2. Clustering (~15%)
  3. Relationship mining (43%)
    • Association rule mining
    • Correlation mining
    • Sequential pattern mining
    • Causal data mining
  4. Distillation of data for human judgement (~18%)
  5. Discovery with models
    A model is developed through any process that can be validated. It's then used as a component in further analysis or mining.

First 3 are common in data mining.

Distillation of data is not widely accepted in data mining, but it matches a category in the other EDM categorisation scheme, which suggests it is common in EDM.

“Discovery with models” is the most unusual from a data mining perspective.

Relationship mining most prominent in EDM.

Key applications of EDM methods

EDM research comes from various fields: individual learning from software, CSCL, computer adaptive testing, student failure/retention.

Key areas

  • Student models;
    Improvement of student models is a key application. Models represent student characteristics. Knowing the differences between students enables different responses, which suggests a way of improving student learning. Some models enable use in real-time. Applications include (all with references): are students gaming the system; experiencing poor self-efficacy; off-task; bored.

    In terms of student failure the paper gives three references.
  • Domain knowledge models;
    Psychometric modelling frameworks plus space-searching algorithms are used to develop automated approaches from data.
  • Studying pedagogical support;
    i.e. which are most effective in which situations for which students.
  • Looking for empirical evidence to refine/extend educational theories/phenomena;
    e.g. Perera et al (2009) use Big 5 theory for teamwork to search for successful patterns of interaction in student teams.

Important trends in EDM research

Prominent papers from early years

Based on Google Scholar citations, the paper looks at the most prominent papers.

  • (1st) Zaiane (2001) was one of the first to propose and evangelise around EDM.
  • (2nd) Zaiane (2002) – a proposal – and (4th) Tang and McCalla (2005) – an instantiation – examine how EDM methods can help develop sensitive/effective e-learning systems.
  • (3rd) Baker, Corbett and Koedinger (2004) case study of EDM methods to open new research areas. e.g. scientific study of gaming the system.
  • (5th) Merceron and Yacef (2003) and (6th) Romero et al (2003) present tools to support EDM.
  • (7th) Beck and Woolf (2000) use EDM prediction methods to develop student models.

Shift in paper topics over the years

The demise of ALTC and why I'm not sad

So the Australian Learning and Teaching Council has got the chop. There are a lot of folk upset about this and I can understand why. A lot of people invested a lot of time and energy into ALTC activities, some/many of which had good outcomes.

My problem is that I just can't get too excited about it. I'm not convinced that ALTC had, or could ever have hoped to have had, a significant impact on the higher education sector within Australia. This is an attempt to document some of my reasons for this.

The champion of teaching and learning?

In the ALTC response to its closure, Dr Carol Nicoll (the CEO of the ALTC) is quoted as saying

We are the champion of teaching and learning in the higher education sector.

I think that’s a pretty good summary of my main concern about ALTC’s ability to have an impact.

The quote seems to suggest that without the ALTC, there won't be a champion of teaching and learning within higher education, even though teaching and learning is the primary product (in terms of income) of higher education institutions. It suggests that perhaps the higher education institutions aren't exactly committing whole-heartedly to the task of acting as champions of teaching and learning, to such an extent that there is a need for an external body to take on the role.

The fact that there is a perceived need for an external body to take on this task is the reason why I believe that such an external body can never have a significant, wide-ranging impact. This is different from saying that it can't do good things; it can and has. It is to suggest that it probably can't have an impact on the fundamental practice of teaching and learning within Australian universities.

Why aren’t universities champions of teaching and learning?

The assumption that universities should be the champions of good teaching and learning is based on the assumption that good teaching and learning is of primary importance to the university and its members. It is my argument that this is simply not the case. I argue that the institution and its members have very different priorities, which mean good teaching and learning will never be of primary importance.

Making money is paramount for the university

Steven Schwartz has a recent article in the Times Higher Education supplement that touches somewhat on this purpose. Over recent years the amount of funding from the Australian government has drastically reduced and universities have rushed to identify alternate forms of funding. If they don't, they are in trouble. Next year will see the introduction of demand-driven funding, which means

Institutions would receive government funding only if they attracted students.

Update: Another article by Steven Schwartz makes this point more strongly. This quote in particular

While many factors contributed to today’s problems, a key one is that educational providers sought international students as way to bolster their bottom line. They forgot their core mission to educate individuals; instead they saw them as dollar signs.

I’ve seen first hand the compromises that happen when what is needed for good quality teaching and learning bangs up against commercial imperatives. What is known about good teaching and learning is sacrificed at the altar of commercial concerns. Here’s a related example from Tutty et al (2008)

The solution to the high failure rate was to change the assessment to satisfy the institutional requirements of satisfied students and reasonable pass rates rather than explore an alternative learning and teaching approach – an effective solution in the current higher education environment that encourages the academic to prioritise other areas, such as research… current institutional policies, including teaching and learning quality measures and lack of resources, are compromising the way subjects are delivered. In some cases academics are discouraged from improving their teaching practice

Research is paramount for many academics

This almost goes without saying. An excerpt from my thesis, or at least from one of the drafts

Academic interest and focus on teaching is further impacted by exposure to ambiguous, even contradictory, role expectations. Academics are expected to engage equally in research and teaching and yet work towards promotion criteria that primarily value achievements in research (Zellweger 2005). There is no question that funded research and publication of results in scholarly journals is the dominant criteria in universities world-wide and this is at least a contributing, if not causal, factor in the limitations of university learning and teaching (Knapper 2003). While a review of promotion criteria and weightings from UK universities found widespread adoption of formal parity between teaching and research for mid-range academics, it found that promotion to senior ranks was based almost exclusively on research excellence and did not allow applications based on teaching activities (Parker 2008). Fairweather (2005) found that spending more time teaching in the classroom remains a negative influence on academic pay and that the trend is worsening most rapidly in institutions whose central mission focuses on teaching.

I’ve had numerous interactions with senior academics at really good universities. Their consistent message is that you do a good enough job at teaching and get on with the research.

Getting the next contract is paramount for many other academics

According to one report

At 67,000, casual academics vastly outnumber other academic staff at Australia’s universities, accounting for 60 per cent of the total.

Casual academics aren’t in a position to be undertaking the sort of reflective and time-consuming process required to improve teaching and learning. They are focused on doing a complex job for which they are being underpaid. At the same time they are probably trying to make sure that they get another contract. And you don’t get another contract by pointing out the flaws in the quality of teaching and learning at the institution employing you. You certainly don’t do it by pointing out these flaws to the professor who is most likely your supervisor and is mostly focused on research (see previous section).

Getting the next contract is paramount for senior managers

Almost without exception all senior leaders in Australian universities are on 5 year contracts. Their focus is generally on being seen to achieve short-term wins with high levels of visibility that encourage and enable them to get the next 5 year contract. Hopefully one that entails a promotion either internally or to a better university. I am sure that many of these folk try their hardest to make a difference, however, the reality is there will always be some level of focusing on demonstrable wins. A symptom of this is the comment I heard often, “We have to focus our limited time/resources on the problems we can do something about.”

Such a system is not conducive to engaging in the hard yards to address systematic issues limiting the quality of teaching and learning. Generally it entails organisational restructures (how many times has the L&T support unit at your university been restructured in the last 10 years?), restructuring of degrees/programs or the development of new sexy programs (e.g. at the moment it would probably be “allied health” programs).

Limits of an external body

Since the primary aims of most institutions aren't exactly aligned with good teaching and learning, the efforts of an external body like ALTC are always going to be limited. They will be limited to people who are inherently interested in learning and teaching. This usually means managers or staff who are employed in roles associated specifically with teaching and learning. It also includes the small percentage of academics interested in L&T.

As argued by Moore (2002) and Geoghegan (1994) these people are not the same as – what I think is the majority of folk at a university – those who aren’t directly interested. This difference creates a chasm which inherently limits the ability to spread the ideas generated by the interested folk to the uninterested folk. Even if adoption is apparently achieved I would propose that in many situations the best that can be hoped for is task corruption.

This reminds me of a recent comment of a colleague

when I studied the Instructional Design course at U Manitoba, my professor said that there is no point pushing a constructivist learning design onto a subject matter expert who has been teaching for 30 years with a behaviourist approach.

The significant difference between those interested in ALTC-like activities and those not is such that it often becomes like an attempt to push a constructivist learning design on a behaviourist, i.e. destined to be less than entirely successful.

For these and other reasons I don't think an external body like ALTC can ever make a significant difference. Unless the context of Australian higher education is changed, good teaching and learning will remain less than a priority. Oh sure, the espoused position of institutions, their strategic plans and the public statements of senior managers will talk the good talk. But if you look really closely, they won't be walking the walk.

This is the experience described in the Tutty et al (2008) quote above and an experience I've seen first (and second) hand again and again. When faced with a problem that arises because of a problem in teaching and learning (e.g. high failure rates due to a significant change in student cohort) it is not the “good” teaching and learning solution that is adopted (e.g. radically change the pedagogy to one better suited to the changed cohort), but rather the simpler, more pragmatic solution (e.g. re-write the exam, or perhaps re-mark the exam).

ALTC did some good work. There were also some real problems, but that will happen in anything this size dealing with something as complex as university teaching and learning. My problem is that I don't think it would ever have been an effective champion of teaching and learning within higher education, at least not in terms of changing the priorities associated with teaching and learning for a majority of institutions or their staff.

It could be argued that something like ALTC may be better than nothing, but then perhaps rather than expend effort trying to save something like ALTC, it might be better to expend that effort changing the fundamental problems.

References

Fairweather, J. (2005). Beyond the rhetoric: Trends in the relative value of teaching and research in faculty salaries. Journal of Higher Education, 76(4), 401-422.

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD.

Knapper, C. (2003). Three decades of educational development. International Journal for Academic Development, 8(1-2), 5-9.

Moore, G. A. (2002). Crossing the Chasm (Revised ed.). New York: Harper Collins.

Parker, J. (2008). Comparing research and teaching in university promotion criteria. Higher Education Quarterly, 62(3), 237-251.

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171-185.

Zellweger, F. (2005). Strategic Management of Educational Technology: The Importance of Leadership and Management. Paper presented at the 27th Annual EAIR Forum.

Analytics, semantic web and cognitive science

I’m currently reading a draft of my wife’s PhD thesis. The thesis uses metaphor to examine the concepts that underpin research within the Information Systems discipline. It finds that research within the discipline appears to have a very heavy emphasis on techno-rational type conceptions of organisations, individuals and artifacts. There are various connections between this work and that of learning analytics and some of the assumptions behind the semantic web. This is an initial attempt to make some of these connections. Given limited time (I have to get back to commenting on the thesis), this has become more a place-holder of thoughts and ideas I need to explore more fully.

This post was prompted by this quote by Merlin Donald that is included in the thesis (emphasis added)

It is far more useful to view computational science as part of the problem, rather than the solution. The problem is understanding how humans can have invented explicit, algorithmically driven machines when our brains do not operate this way. The solution, if it ever comes, will be found by looking inside ourselves.

This captures some of my concerns when I start hearing computer scientists talk about intelligent tutors, the semantic web and other “big” applications of artificial intelligence. I don’t doubt the usefulness of these techniques in their appropriate place, however, I think it increasingly unlikely that they can effectively replace/mirror/simulate a human being outside of those limited places.

Another interesting quote from Merlin Donald’s home page

His central thesis is that human beings have evolved a completely novel cognitive strategy: brain-culture symbiosis. As a consequence, the human brain cannot realize its design potential unless it is immersed in a distributed communication network, that is, a culture, during its development. The human brain is, quite literally, specifically adapted for functioning in a complex symbolic culture.

Sounds like there are some interesting potential connections with connectivism and distributed cognition. A connection which – after a very quick skim – this paper (Donald, 2007) seems to make.

The first Donald quote mentioned above comes from the book The way we think: Conceptual blending and the mind’s hidden complexities (Fauconnier & Turner, 2003). A book that argues that conceptual blending is at the core of human thinking, or at least what makes us distinctive.

Lots more to read and ponder. For now, some questions:

  • Is there a fit here with connectivism and/or distributed cognition (or similar)?
  • What implications do these ideas have for analytics and how it can make a difference?
  • What critiques are there of these ideas?

References

Donald, M. (2007). The slow process: A hypothetical cognitive adaptation for distributed cognitive networks. Journal of Physiology (Paris), 101, 214-222.

Fauconnier, G., & Turner, M. (2003). The way we think: Conceptual blending and the mind's hidden complexities. New York, NY: Basic Books.

The power of organisational structure

I find myself in an interesting transitionary period in learning. I'm in the final stages of my part-time PhD study, just waiting for the copy editor to check the last two chapters, and then it's submission time. I'm participating in a MOOC, LAK11 – participation that has been negatively impacted recently by the desire to get the thesis finalised – and looking at returning to full-time study as a high school teacher in training. It is from within this context that the following arises.

Yesterday I read a reflection on week 2 of LAK11 by Hans de Zwart in which he quotes from an MIT Sloan Management Review article on Big Data and analytics. The quote:

The adoption barriers that organizations face most are managerial and cultural rather than related to data and technology. The leading obstacle to wide-spread analytics adoption is lack of understanding of how to use analytics to improve the business, according to almost four of 10 respondents.

This doesn't come as a great surprise. After all, I think the biggest problem for universities when approaching many new technologies is grappling with the fact that most new technologies have biases that challenge the managerial and cultural assumptions upon which the institution operates. Being aware of and responding effectively to those challenges is what most institutions and those in power do really badly.

One contributing factor to this is that organisations and those in power work on assumptions that seek to maintain and reinforce their importance. Let’s use my experience as a starting university student as an example. As a new student at the university I am receiving all sorts of messages designed to help me make the transition back to study. Do you want to know what strikes me most about these messages and the transition assistance being provided?

That the organisation and communication of these help/transition resources correspond more to the structure of the organisation than to what might actually be useful to a new student. Some examples.

The “we're here to help” message is a list of the different organisational units, which perhaps is not that surprising. But how about the “guide for students”?

Structure of a university guide for students

How would you expect a University guide for new students to be structured?

  1. By program?
    i.e. I’m enrolled in a Graduate Diploma in Learning and Teaching, a guide for those students?
  2. By discipline?
    i.e The GDL&T is within the education discipline, a guide for those students?
  3. By organisational unit?
    This university divides academic staff into schools and then schools into faculties (e.g. the Faculty of Arts, Business, Informatics and Education)
  4. One for the whole university?

Which would make the most sense? The more specific the guide, probably the more useful. But that might require more work (each program having its own guide) and lead to some fragmentation within the institution.

One for the whole university would reduce the workload and increase the commonality between students; however, it would fail to capture the diversity inherent in disciplines. I'm pretty sure that as a graduate education student, I'll probably need to know things that are a bit different from what an undergraduate engineer needs.

At this institution it is by organisational unit, by faculty. The institution only has two faculties. So there are two guides.

Content of the university guide

So, if the student guide is divided by faculty, then it must contain faculty-specific information. Otherwise, why would there be a division?

The first really specific information mentioned was on page 12 of 19, where it mentioned residential schools for GDL&T students. However, some in the sciences and engineering do residential schools as well. On page 18 of 19 there is mention that Law students need to use a special referencing style. Apart from that there is no information that wasn't generic to all students. Must check what's in the other student guide.

Oh, this one starts differently. It has a letter from the Dean of the Faculty. Of course, it was only a couple of months into 2010 (by the way, both guides are still the 2010 guides; 2011 guides haven't been uploaded yet even though a global “have you read the guides” message has been sent to all students) and the (acting) Dean had moved on to another role.

Another difference: this one mentions clothing and safety within laboratories and on field work. There's a lot more mention of RPL in this guide. Ahh, specific information for engineering students. Must be a great help to all those non-engineering students in the faculty. And this one has screen shots of how students are to get assignment cover sheets, rather than the paragraph of text in the other guide.

So it does contain some different stuff, but still mostly institution level information and information that is already available in other forms elsewhere.

Why have these two guides?

In short, my answer would be that the management of the two faculties have to do something. There doesn't appear to be any other explanation why the student guides would be provided at this level. Not to mention that, given they simply repeat information that is given elsewhere (and have yet to be updated for 2011), there's probably no need for them. But it is something that has been done in the past, so it must be done now.

Organisational and cultural influences and problems for learning analytics

For me, this is an example of how organisational and cultural influences impact upon the effective delivery of learning and teaching within universities. Much of what is done, and why it is done, says more about the existing cultures, structures and agendas within the management of the institution than it does about what is best for learning and teaching.

And it won’t be any different for learning analytics. In many universities, the questions that will be asked of analytics will be those deemed important by management. It will be difficult for the questions asked to be designed to cater for the diversity of needs at the levels of discipline, program, teacher or student.

Which is why I’m worried when the Sloan article recommends this solution

Instead, organizations should start in what might seem like the middle of the process, implementing analytics by first defining the insights and questions needed to meet the big business objective and then identifying those pieces of data needed for answers.

The insights and questions that are defined are more likely to say something about the organisational and cultural influences of the host institution, than about what is best for learning and teaching.

The difference between utopian and dystopian visions

As part of the LAK11 course Howard Johnson has commented on an earlier post of mine. This post is a place holder for a really nice quote from Howard’s post, an example from recent media reports, and perhaps a bit of a reflection on responses to analytics.

The quote, some reasons and an example

I like this quote because it summarises what I see as the most common problem with the institutions I’ve been associated with. Especially in recent years as there’s been a much stronger move toward the adoption of more techno-rational approaches to management.

A utopian leaning vision can only be achieved with hard work and much effort, but a dystopian vision can be achieved with only minimal effort.

Improving learning and teaching within a modern university context is a complex task. There is no one right solution, there is no simple solution, no silver bullet. Improving learning and teaching is really hard work.

The trouble is that short-term contracts for senior management (which at some institutions now reach down to what were essentially head of school roles) and other characteristics of the organisational context mean that it is simply not possible for that really hard work to be undertaken. The organisational characteristics of Australian universities are increasingly biased towards a focus on the easy route: something that can be implemented quickly, appear to return good results and enable a senior manager to boast about it when attempting to renew his/her contract and/or apply for a better job at a better institution.

Based on this argument, when I read this article (via @clairebroooks) and especially this quote from the article

Poor and disadvantaged students were clear winners, with university offers to students from low socio-economic backgrounds increasing by 8 per cent, following the higher participation targets set by the federal government after the 2008 Bradley review of higher education.

I find it very hard to believe that all of these institutions have adopted a utopian vision that has seen their learning and teaching practices, policies, resourcing and systems appropriately updated to respond to the very different needs and backgrounds of these students. Including the necessary re-visiting of the curriculum and learning designs used in their large introductory courses. The courses these students are going to be facing first and which traditionally, at most institutions in most disciplines, have significant failure rates already.

Instead, I see it much more likely that they’ve simply changed who they’ve accepted. At best, they may have thrown some additional resources (an extra warm body or two) to some central support division that is responsible for helping these students. These folk may even have had a couple of meetings with staff who teach those first year courses.

This is not to suggest there aren't some brilliant folk doing fantastic work in both the central divisions responsible for the bridging and orientation of these students and in the teaching of large first year courses. It is to suggest that this work is often/usually in spite of the organisational vision, not because of it. It is also to suggest that the existence of such work is almost certainly not repeatable or sustainable. My guess is you could go to any institution boasting how well it is serving these students and, by selectively removing a handful of people, cause the edifice of good practice to fall apart. The institutional systems wouldn't be able to continue the good practice in the absence of those key folk.

The utopian vision professed by these institutions will be the result of the hard work of a few who have generally had to battle against the institutional vision and context.

One utopian vision for learning analytics

As Howard suggests, much of the discussion of analytics has focused on the dystopian vision. It's a vision I see as the most likely outcome, at least in the current institutional context.

But at the same time, I do believe that some applications of analytics can help improve the learning and teaching experience of students and staff. It's important to be aware of and keep highlighting the dystopian vision, but it's also important – and perhaps past time – to develop and move towards a utopian vision, or at least to learn from trying. The following is an attempt at an early formulation of one of these. This particular vision connects with some of what I've been trying to do. It does assume an institutional context for learning – that's what I'm familiar with – and I am not sure how much of it would be useful outside an institutional learning context.

Having just listened to John Fitz's presentation via the lak11 podcast, I'd like to pick up the notion he mentioned of the self-regulated learner and the idea that analytics can provide useful assistance to that learner. A brief and incomplete summary of an aspect of John's point would be that there is value in providing the learner with the information produced by analytics in order to enable the learner to make their own decisions.

I would like, however, to expand that idea to the notion of the self-regulated teacher and the potential benefits that analytics can provide them. From my perspective there are at least three broad types of learner involved in any institutional learning context. They are:

  1. The formal student learner enrolled in a course/program.
    These folk are primarily interested in learning the “content” associated with the course.
  2. The formal teacher learner charged with running the course/program.
    These folk are/should be primarily interested in learning how they can improve the learning experience of the student.
  3. The institutional learner within which the course/program is offered.
    These “folk” are/should be primarily interested in learning how to improve the learning experience of the students and teachers within the institution. Similar to Biggs’ (2001) quality feasibility ideas. Though they are more often primarily interested in defining the learning experience, rather than engaging with and improving existing practices.

At this stage, I’m interested in how analytics can be used to help learner types 1 and 2. I’m keen on changing the learning/teaching environment for these learners in ways that help them improve their own practice (what I see as the task for learner type 3 and the task they aren’t doing). For right or wrong, for most of the higher education institutions I’m associated with the learning environment means the LMS. At least in terms of the contributions I might be able to make.

My small-scale utopian vision is the modification of the LMS environment to effectively bake in analytics informed services and modifications that can help student and teacher learners become more aware of possibly relevant improvements to their practice. Some examples include:

However, I don’t think these examples go far enough. There’s something missing. Additional thought needs to be given to the insights from the behaviour change literature which suggests that simply knowing about something isn’t sufficient to encourage change in behaviour.

This brings me to the idea of scaffolding conglomerations. One idea for such a conglomeration might be to:

  • Embed SNAPP into an LMS (e.g. Moodle).
    At the moment, SNAPP is a browser based tool so it can only generate visualisations based on data in courses that the user has access to. For most people in most LMS this means you are limited by the inherent course division fundamental to LMS design. You can’t see and act upon the social networks evident in other courses.
  • Build around SNAPP some responses based on common patterns.
    One example might be a “Prompt all isolated students” feature that would present the academic with a template email (designed based on insights from theory or experience) that can be sent automatically to all discussion forum participants that aren’t connected to others. It might automatically include some statistics showing success rates between students that are isolated and those that are connected.
  • Enable user-contribution of common responses.
    Enable staff to add their own pattern response sequences.
  • Link SNAPP data with other Moodle and institutional data.
    Allow staff and students to see additional anonymised information with the SNAPP visualisations. e.g. shade red all those students who exhibit network connections similar to those who have failed the course previously.
  • Provide links to resources about good practice.
    When SNAPP detects a pattern where one person (e.g. the teacher) is the focal point of all interaction within a discussion forum, it provides a link to the literature and instructional design practice that suggests this is wrong and identifies approaches to modify practice.
  • Make SNAPP data visible to other teachers within a cohort.
    All teachers within the psychology courses can see the network visualisations in each other's courses, thereby making the social norms within those courses visible and open for discussion.

Time to stop worrying about the dystopian vision (and also writing about a potential utopian vision) and start doing something. As per the Alan Kay quote:

Don’t worry about what anybody else is going to do… The best way to predict the future is to invent it.

Analytics creating too much transparency? A two-edged sword?

Have been listening to a Dave Snowden podcast of a “101 organic KM course”. Amongst many familiar themes is the mention of the pitfalls of too much transparency hurting innovation.

He uses the example of expense accounts to illustrate the point. At one stage he had a large expense account which could be used to fund interesting and unusual approaches around his work. The innovation was possible because there was no itemisation/justification of the expense. Upon moving into a large company there came a requirement for itemisation. That itemisation kills off innovation.

This rings a bell at the moment, because of the current discussion about the problems with learning analytics and in particular George Siemens’ list of concerns.

In the dim dark past of the 90s, when I was an innovative, young university academic no-one took any notice of what I did within the courses I was teaching. I could do a lot of very different things that are documented in my publications from that time. Not all of them worked as I planned, but they all helped something interesting grow.

In part this was possible because of the very problem that often worried me about some of my colleagues. At that time, there were at least 2 or 3 of my fellow academics who were fairly widely known as being really bad educators. Even though one or two claimed to be great teachers, even a cursory glance at their practice and resources, or a chat with a range of their students, would confirm some really, really bad practice. What annoyed me at the time was that the system allowed their practice to be opaque. As long as they met various deadlines (even though they were often late) and had a reasonable grade distribution, their practice was allowed to continue.

What I am only now starting to realise is that if that system wasn't opaque, if it were too transparent, I probably wouldn't have undertaken any of the innovative work I did. One explanation why not arises from Siemens' list of concerns. In a university with analytics baked in and heavily relied upon by management to “manage”:

  • The act of providing a quality learning experience has been reduced to a set of numbers and graphs that specify certain activities and tasks. In response to known patterns from analytics I am expected to perform certain tasks, perhaps even push certain buttons at certain times to encourage those patterns to happen again.
  • What is accepted is what is measured and has become the target. Anyone moving away from the established pattern is fighting the inertia of the organisation and its systems. (This was actually one of the problems I faced working within an institution with a history of industrial print-based education in the mid-1990s while attempting to use the Internet).
  • Different interpretations of what is good learning/teaching due to the diversity inherent in the disciplines, concepts, individual students and teachers is lost. You (and the students) are expected to follow the standard patterns that analytics has established as effective. (This is also my problem with the LMS. For some institutions it has become the case that you can do any online learning you want. As long as the functionality is provided within the LMS restrictions of quiz, discussion forum, assignment management etc.)
  • The smart/pragmatic academics and students will have identified what “analytics patterns” are required and figured out the least painful way to provide those requirements.
  • When something like the recent Queensland floods occurs it will throw the analytics system into meltdown as the expected patterns won't be there. For example, the two “late” letters I received in the post today (the first post since before Christmas due to the floods) from Video Ezy asking for their DVD back, regardless of the floods cutting off all possibility of me returning it.
  • The “analytics patterns” will drive management to change policy and funding for practices so that only those patterns can be re-created. Anything that falls outside that norm will not be funded. (e.g. this is one of the major, unsolved problems the industrial, print-based distance education university had with online, it kept funding for f-t-f and DE, never figuring out that online could be different).
  • Since the “analytics patterns” have been established and the funding routinised management are able to treat the folk responsible for designing and delivering teaching like building blocks that can be replaced as needed.

And there’s more.

Learning analytics looks like being a two-edged sword.

Creating a podcast for LAK11 presentations

I’m currently participating in the Learning and Knowledge Analytics MOOC being run by George Siemens and others. This post outlines the process I used to create a podcast of the presentations (click on that link if you want to subscribe to the podcast) being given as part of the course.

Why?

The presentations are taking place within Elluminate and Elluminate recordings are made available. So why a podcast? Simply put, the asynchronous, audio-only nature better matches my preferences and context. So, I’ve repeated a process I used for the PLE/PLN symposium. More details below.

How?

The basic process is

  • Bookmark the mp3 files using del.icio.us using the tag lak11podcast.
  • Pass the RSS feed for those tags produced by del.icio.us through feedburner to generate a podcast.
  • Subscribe to the podcast using iTunes or other software.

The one difference between this podcast for LAK11 and the PLE/PLN podcast is that I couldn’t bookmark the original mp3 files. These files are made available via the LAK11 Moodle course. Attempting to access the files directly results in a redirect to the home page for the SCOPE Moodle instance, where you can log in as a guest and view the files.

Works fine if you are a person on the web, but podcast software like iTunes isn’t that smart.

The solution I adopted here was to copy the MP3 files out of the Moodle course into a location without a redirect, in this case Dropbox. I was a bit reluctant to do this as these aren’t my files; however, I’m assuming that given the nature of the MOOC this should be okay. If not, the files will be removed.
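As a quick sanity check on this process, the following is a minimal sketch that reads the tag feed with SimplePie and flags any bookmark whose URL doesn’t point straight at an mp3 (exactly the redirect problem described above). The del.icio.us username and the per-tag feed URL pattern are assumptions, so adjust to suit.

[code language=”php”]
// Sketch only: list what the lak11podcast tag feed contains before pointing
// feedburner at it. Username and per-tag feed URL pattern are assumptions.
require_once 'simplepie.inc';

$tagfeed = 'http://feeds.delicious.com/v2/rss/someuser/lak11podcast'; // hypothetical username

$feed = new SimplePie();
$feed->set_feed_url( $tagfeed );
$feed->init();

foreach ( $feed->get_items() as $item ) {
    $link = $item->get_permalink();
    // feedburner can only turn a bookmark into a podcast enclosure if the
    // bookmarked URL points straight at an mp3 (no login or redirect in the way).
    $status = ( substr( $link, -4 ) == '.mp3' ) ? 'ok' : 'check this one';
    print $item->get_title() . " -> $link ($status)\n";
}
[/code]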

Limitations

At the moment, production of the podcast relies on new mp3 files being tagged by me with the tag lak11podcast. Would probably be more responsive if feedburner was set up to use anything anyone tagged with lak11podcast. For now, I’m leaving the restriction simply to save time and let me get on with some more reading. Happy to change it if people ask.

Introducing Hunch

One of the activities for the first week of the lak11 MOOC is to get started with using Hunch and reflect on it as a model for learning.

What is Hunch?

From the Hunch about page it is an application of machine learning to provide recommendations to users about what might be of interest to them on the web. It’s the work of a bunch of self-confessed MIT “nerds”.

Using Hunch

Creating an account on Hunch starts with logging in with either a Facebook or Twitter account. I went with Twitter. Some of the other LAK11 participants have queried the privacy implications of this. Then comes answering the questions.

The site then asks a range of questions using a fun(ish), photo-based approach, which increases interest somewhat. It also provides feedback on what others have answered.

As others have noted there is a North American cultural bias to the questions.

Interesting, only 4% of respondents said they didn’t have a Facebook account.

After answering a few more than the minimum 20, Hunch presents a selection of recommendations. In this case, five recommendations each for magazines, TV shows and books. I’m assuming that these categories were also based on my answers. The recommendations are all good or close matches. All three categories included examples I had read/watched and enjoyed.

So, it appears that Hunch is designed with badges to earn as you use the site more and provide more information. There are other features that seek to encourage connections and feedback between users. After all, that would appear to be the currency that Hunch needs to generate its recommendations. The more connections, the better the math, the better the recommendations.

And perhaps that is the problem. I don’t feel the need for a site like Hunch to get the recommendations I want. I already have strategies, social networks and information sources that I use. I can’t see myself expending the effort on this sort of site. The question then is how many others might be bothered to provide this information?

That said, it does appear to be working fairly well already.

Reflections

After using Hunch, the LAK11 syllabus asks

What are your reactions? How can this model be used for teaching/learning?

and suggests sharing views in the discussion forum. I’m going to reflect here first and then check the discussion forum. Mainly because the following will be more stream of consciousness dumping than well-considered insight.

The obvious academic question to ask is what is meant by teaching/learning. Most of my experience has been/will be with more formal areas of learning and teaching and thus my reflections are likely to be coloured/biased by that experience.

My first observation (taking the viewpoint of a teacher) would be that any additional information about my students would be useful, especially if a system like Hunch was able to provide useful recommendations. Such recommendations would be useful to the students as well, but I wonder how much freedom they would have to take up those recommendations within a formal educational setting. It would seem that what freedom does exist lies with the teaching staff.

Such information in a L&T situation might feel somewhat similar to some of the learning style surveys that are around. Similarly, I wonder how much these types of things would reinforce existing categories/beliefs, rather than offering new paths or opportunities.

I’m feeling somewhat ill-informed about the nature and capabilities of Hunch, and thus ill-equipped to reflect on its applicability to learning and teaching. Drawing some conclusions from the little I know, it appears they are building models based on answers to the questions and then comparing those with models of the items/recommendations to come up with matches.
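To make that guess a bit more concrete, here’s a hypothetical sketch of one way such matching could work: represent a user’s answers and each candidate item as feature vectors, then rank items by cosine similarity. Everything here (the features, the items, the approach itself) is made up for illustration; it is not how Hunch is actually implemented.

[code language=”php”]
// Hypothetical sketch only: "compare user model with item model" via cosine
// similarity over made-up answer vectors. Not Hunch's actual algorithm.
function cosine_similarity( array $a, array $b ) {
    $dot = 0; $norm_a = 0; $norm_b = 0;
    foreach ( $a as $key => $value ) {
        $norm_a += $value * $value;
        if ( isset( $b[$key] ) ) {
            $dot += $value * $b[$key];
        }
    }
    foreach ( $b as $value ) {
        $norm_b += $value * $value;
    }
    if ( $norm_a == 0 || $norm_b == 0 ) {
        return 0;
    }
    return $dot / ( sqrt( $norm_a ) * sqrt( $norm_b ) );
}

// Made-up user model from question answers (1 = yes, 0 = no)
$user = array( 'likes_scifi' => 1, 'owns_facebook' => 0, 'reads_daily' => 1 );

// Made-up item models for things that could be recommended
$items = array(
    'Book A'    => array( 'likes_scifi' => 1, 'reads_daily' => 1 ),
    'TV show B' => array( 'owns_facebook' => 1 ),
);

foreach ( $items as $name => $model ) {
    printf( "%s: %.2f\n", $name, cosine_similarity( $user, $model ) );
}
[/code]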

I wonder how difficult building these models would be for learning and teaching. It’s my understanding that disciplines such as physics have built fairly complex conceptual models of the domain, in particular for undergraduate studies. But it’s also my belief that the construction of such models was a fairly resource intensive task. Will the resource intensive nature make it difficult to implement a L&T focused Hunch? Then making the connections between other models would seem difficult. Hunch after all hasn’t handled the cross-cultural aspects all that well (probably was designed to retain the North American emphasis) and operates in an area (commercial products and services) in which there has been a lot of research and a lot of commercial interest/resources.

From the perspective of a motivated learner, a L&T-flavoured Hunch could be very useful. But what percentage of learners would use such a system, given, for example, my own reservations about using the current Hunch? Especially since Hunch relies somewhat on the contributions users make to the system. Given the limited percentage of folk that contribute content to social networking sites, this is likely to limit a L&T-flavoured Hunch even further.

This perhaps sums up my cynical view of the difficulty of effectively and appropriately applying analytics in L&T.

Let’s see if the Moodle discussion forum has more positive contributions.

Applying "learning analytics" to BIM

The following floats/records some initial ideas for connecting two of my current projects, BIM and lak11. The ideas arise out of some discussion with my better half, who is currently using BIM in one of the courses she is teaching.

Some brief background: BIM is a Moodle module that allows teaching staff to manage and encourage the use of individual student blogs. The blogs are hosted by an external blog provider of the student’s choice, not within Moodle. Typical use is to encourage reflection and to make student work visible for comments and discussion.

BIM participation as indicator

The discussion started with the observation that by the second or third required blog post it was generally possible to identify the students in a course who would do really well and those who would do really badly. How and when the students provide their blog posts is a good indicator of overall result.

This correlation was first observed with my first use of BAM in 2006 (BIM stands for BAM into Moodle) and matches some findings of others.

This correlation was not something new. We were both able to make the observation that similar sorts of patterns exist with most educational practices. The difference is that the nature of the BIM assignments generally makes the pattern more obvious. The discussion turned to what this pattern actually tells us.

Students with good practices

We ended up agreeing (as much as we ever do) that what this pattern is showing us is not that some students are smart and that some are not. Instead it is showing us that the “really good” students simply have the “really good” study practices. They are the ones reading the material, reflecting upon it and engaging with the assessment requirements. The “really bad” students just never get going for whatever reason. The rest of the students are generally engaging in the work at a surface level.

So, use of BIM is making this pattern more obvious, what should be done about it?

Encouraging connections

The tag line for the lak11 course is

Analyzing what can be connected

A thought that “connects” with me and what I think analytics might be good for. More specifically, my interest in analytics is focused more at the idea of

Using analysis to encourage connections

Which, going by the definitions given in one of the early readings, is close to what is meant by action analytics.

In the case of BIM, the idea consists of two tasks

  1. Analyse what is going on within BIM to identify patterns; and then
  2. Bring those patterns/analysis to the attention of the folk associated with a course in order to encourage action.

Some ideas

This leads to some ideas for additional features for bim. None, bits or all of them might get implemented.

Connect students with evidence of good practice

  1. Add a due date to each question a student is meant to respond to within a bim activity.
  2. Allow academic staff to choose (or perhaps create) a warning regimen.
  3. A warning regimen would specify a list of messages to send to individual students based on the due date and the student’s own contributions to the bim activity. The specification might include
    • Time when to send messages.
      e.g. 1 week, 3 days and on the day.
    • Teacher provided content of the message.
    • Some bim analysis around the activity.
      e.g. it might include the number of students who have already submitted answers to the question, perhaps some summary of the connections (from previous uses of bim) between when posts are submitted and overall performance, and some statistics about the posts so far, e.g. word counts, some textual statistics etc.
    • Links to other posts.
      This one could be seen as questionable. Links to other student posts could act as scaffolding for students not really sure what to post. Of course, the “scaffolding” could result in “copying”.

The idea being that awareness of what other students are posting, or of what is considered good practice, would potentially encourage students to consider such practice, or at least make it more likely that they do.
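To make the regimen idea a little more concrete, the following is a rough sketch of what such a specification and check might look like. All of the helper functions and field names here are hypothetical; nothing in this sketch is existing bim or Moodle code.

[code language=”php”]
// Rough sketch only: one possible shape for a "warning regimen" in bim.
// bim_get_students_without_answer, bim_get_question_stats and
// bim_send_reminder are hypothetical helpers, not existing bim code.
$DAY  = 24 * 60 * 60;
$HOUR = 60 * 60;

// When to send (seconds before the question's due date) and what to send.
$regimen = array(
    array( 'offset' => 7 * $DAY, 'message' => 'A week to go: {numsubmitted} students have already answered this question.' ),
    array( 'offset' => 3 * $DAY, 'message' => '3 days to go. Average length of the posts so far: {avgwords} words.' ),
    array( 'offset' => 0,        'message' => 'Your answer to this question is due today.' ),
);

function bim_check_regimen( $bim, $question, array $regimen, $window ) {
    $now = time();
    foreach ( $regimen as $step ) {
        $sendtime = $question->duedate - $step['offset'];
        // Only act in the window just after the trigger time (e.g. run from cron).
        if ( $now < $sendtime || $now > $sendtime + $window ) {
            continue;
        }
        $students = bim_get_students_without_answer( $bim, $question ); // hypothetical
        $stats    = bim_get_question_stats( $bim, $question );          // hypothetical
        foreach ( $students as $student ) {
            $message = str_replace(
                array( '{numsubmitted}', '{avgwords}' ),
                array( $stats->numsubmitted, $stats->avgwords ),
                $step['message'] );
            bim_send_reminder( $student, $message );                    // hypothetical
        }
    }
}
[/code]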

This is very close to the idea behind Michael De Raadt’s progress bar for Moodle.

What “theories” exist?

One of the initial readings identified four main classes of components for learning analytics. One of these is theory, which includes the statistical and data mining techniques that can be applied to the data.

I need to spend some time looking at what theories exist that might apply to BIM. e.g. I’m wondering if some of the textual analysis algorithms might provide a good proxy for evaluating the quality of blog posts and whether or not there might be some patterns/correlations with final/overall student results.
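As a starting point, even some very cheap textual statistics could be computed over the posts and checked for correlation with final results. A minimal sketch follows; it is illustrative only, not a validated measure of quality, and the function name is made up.

[code language=”php”]
// Very rough sketch: cheap textual statistics as a possible first-pass proxy
// for post quality (word count, average sentence length, lexical diversity).
function post_text_stats( $post_html ) {
    $text      = strip_tags( $post_html );
    $words     = str_word_count( strtolower( $text ), 1 );
    $sentences = preg_split( '/[.!?]+/', $text, -1, PREG_SPLIT_NO_EMPTY );

    $numwords     = count( $words );
    $numsentences = max( 1, count( $sentences ) );

    return array(
        'words'             => $numwords,
        'avg_sentence_len'  => $numwords / $numsentences,
        // unique words / total words: higher values = more varied vocabulary
        'lexical_diversity' => $numwords ? count( array_unique( $words ) ) / $numwords : 0,
    );
}

print_r( post_text_stats( '<p>Some reflection on the week. I learned two things about analytics.</p>' ) );
[/code]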

Learning analytics: Definitions, processes and potential

The following is the summary of my first reading for the LAK11 MOOC and follows on from my initial thoughts.

I decided to start with the paper titled Learning analytics: Definitions, processes and potential as the combination of the date published (Jan 2011) and the title suggested it would give the most current overview. It’s also written by one of the course facilitators, so should have some connection to the course.

Summary

The paper essentially

  • Defines some terms/concepts;
  • Abstracts from some published “analytics processes” a common set of 7 processes/tasks;
  • Identifies four types of resources; and
  • Combines them in the following model.

A model for learning analytics

The paper closes with what seems to be the ultimate goal of most of the folk involved with learning analytics – automated, individualised education. I’m not sure that this is a helpful aim. First, because I have my doubts that it can ever be achieved in the real world as opposed to a closed system (i.e. laboratory experiment). Second, because I think there is a chance that having this as the ultimate aim will result in less focus on what I think is the more fruitful approach: working out how analytics can supplement the role of human beings in the teaching process.

Mm, that’s probably got a few assumptions within it that need to be unpicked.

The following is a slightly expanded summary of the paper.

Introduction

It starts with defining learning as “a product of interaction”, with the nature of the interaction differing broadly depending on the assumptions underpinning the learning design.

Regardless, we want to know how well things went. Traditional methods – student evaluation, grade analysis, instructor perceptions – all have limitations and problems.

Question: What are the limitations and problems with learning analytics? There is no silver bullet.

As more learning is computer facilitated, there’s interest in seeing how the accumulated data can be used to improve L&T…leading to learning analytics. The application of statistics to rich data sources to identify patterns is already being used in other fields to predict future events.

The paper aims to review literature on analytics and define it, its processes and potential.

Learning analytics and related concepts defined

The cynic in me finds the definition of business intelligence particularly frightening/laughable. I do need to learn to control that.

  • Learning analytics – “emerging field in which sophisticated analytic tools are used to improve learning and education”, drawing from other fields of study.
  • Business intelligence – an established process through which decision makers in the business world integrate strategic thinking with information technology to synthesize vast amounts of data into powerful decision making capabilities.
  • Web analytics – using web site usage data to understand how well a site is achieving its goals.
  • Academic analytics – the application of the principles and tools of business intelligence to academia; or, more narrowly for other authors, to examine issues around student success.
  • Action analytics – a greater emphasis on generating ‘action’, i.e. applying data in a “forward thinking manner”.

The paper does mention the problems faced when implementing these types of strategies within existing institutional arrangements, especially around data/system ownership. It suggests that learning analytics is intended more specifically to address these issues, especially in terms of providing the data/analysis to students/teachers within the teaching context, right up to some of the automated/intelligent tutoring type approaches.

Thus, the study and advancement of learning analytics involves: (1) the development of new processes and tools aimed at improving learning and teaching for individual students and instructors, and (2) the integration of these tools and processes into the practice of teaching and learning.

I can live with that. It’s what I’m interested in. Sounds good.

Learning analytics processes

Essentially a collection of four different models/abstractions of how to do this stuff and then a synthesis into a common 7 processes of learning analytics

  1. select
  2. capture
  3. aggregate and report
  4. predict
  5. use
  6. refine
  7. share

Knowledge continuum

This is the DIKW (Data/Information/Knowledge/Wisdom) stuff which some of the KM folk, including Dave Snowden, don’t have a lot of time for. In fact, they argue strongly against it (Fricke ??).

TO DO: There is much of interest in Fricke (2007). I have not read it through and some of it appears heavy going, but I should take the time. An interesting reference/quote is this one:

Results from data mining should be treated with skepticism

drawn from some work that I should describe in more detail here

The DIKW stuff is connected to learning analytics through some work that suggests things like “Through analysis and synthesis that (sic) information becomes knowledge capable of answering the questions why and how”.

Another to do: Snowden’s thoughts on DIKW and his work suggest another “process” for learning analytics. Should take some time to look at that.

Web analytics objectives

From Hendricks, Plantz and Pritchard (2008), “four objectives essential to the effective use of web analytics in education:

  1. define the goals or objectives;
  2. measure the outputs and outcomes;
  3. use the resulting data to make improvements; and
  4. share the data for the benefit of others.”

Five steps of analytics

Campbell and Oblinger (2008)

  1. capture
  2. report
  3. predict
  4. act
  5. refine

Collective application model

Summary of a Dron and Anderson model

Learning analytics tools and resources

Draws on various sources to suggest that “learning analytics consists of”

  • Computers;
    Includes an interesting overview of the different bits of technology (and their limitations) that are currently available. Including some references criticising dashboards.
  • People;
    Interestingly, this is the smallest section of the four, but perhaps the most important. In particular, the observation that developing effective interventions remains dependent on people.
  • Theory;
    Points to the various “kernel theories” for analytics and the observation by MacFadyen and Dawson (2010) that there’s little advice on which of these work well from a pedagogical perspective.
  • Organisations.
    Importance of the organisation in developing analytics and some of the standard “leadership is important” stuff

A start to the "Introduction to Learning and Knowledge Analytics" MOOC

So, the year of study begins. First up is an attempt to engage in a MOOC (Massive Open Online Course) on Learning and Knowledge Analytics. This first post aims to contain some reflection on the course syllabus and what I hope to get out of the course.

The problem and the promise

As the course description suggests

The growth of data surpasses the ability of organizations or individuals to make sense of it

This is a general observation, but it also applies to learning and teaching related activities.

The promise is that analytics through techniques such as modelling, data mining etc will aid the analysis of this data and help people and organisations to make sense of all the data. To improve their decision making, learning and other tasks.

The aim of the course is described as

a conceptual and exploratory introduction to the role of analytics in learning and knowledge development

It is an introductory course, no heavy math.

My reservations

I’ve dabbled in work that is close to analytics, but have always had some reservations about its promise. One of the aims of engaging in the course is to encourage me to read and reflect more on these reservations. A quick summary/mind dump of those reservations includes:

  • The data is not complete;
    At the moment, the data that is available for analytics is limited. e.g. data from an LMS gives only a very small picture of what learning and learning related activities are going on. Consequently, data driven decision making is overly influenced by the data that is available, rather than the data that is important.
  • Models and abstractions are by nature lossy;
    A lot of analytics is based on mathematical/AI models or abstractions. By definition these “abstract away” details that are deemed to be not important. i.e. information is lost.
  • Not every system is causal, except in retrospect;
    There often seems to be an assumption of (near) causality in some of this work. There are some events/systems/processes which simply aren’t causal. There is no regular, repeating pattern of “a leading to b”. Just because a led to b this time doesn’t mean it will next time. Some of this is related to the previous two points, but some of it is also related to the nature of the systems, especially when they are complex adaptive systems. It will be interesting to hear Dave Snowden’s (one of the invited speakers) take on this later in the course, as this reservation is directly influenced by his presentations.
  • People aren’t rational;
    Personally, I don’t think most people are rational. This shouldn’t suggest that people aren’t somewhat sensible in making their decisions. One’s decisions always make sense to oneself, but they are almost certainly not the decisions that someone else would have made in the same situation. As part of that, I think our experiences constrain/influence our decision making and actions.

    This generates two concerns about analytics. First, I wonder just how much change in decision outcomes will arise from the folk seeing all the nice, lovely new visualisations produced by analytics. Are people going to make new decisions or simply use the visualisations to justify the same sub-set of decisions that their experiences would have led them to make? Second, how common amongst learners will be the patterns, models and correlations that arise from analytics? Just because the model says I did “A-B-C”, does that really imply I was doing it for the same reasons as the other 88% of the population?

  • Is there enough information;
    I believe, at this currently ill-informed stage, that some (much?) of the usefulness of analytics arises from a reliance on big number statistics. i.e. there’s so much data that you get useful correlations, patterns….How many existing institutions are going to have sufficiently big data to usefully use these techniques?
  • The technologists alliance;
    Geohegan suggests there is a technologists’ alliance that has alienated the mainstream through the inability to produce an application of technology that is of absolutely compelling value in pragmatic, mainstream terms and so provides a compelling reason to adopt. I think it’s important that there be researchers and innovators pushing the boundaries, but there is too little thought given to the majority and to applications of innovations/new technologies/fads that they would see as useful. SNAPP is a good start, but there’s some more work to be done.
  • Yet another fad;
    Analytics is showing all the hallmarks of a fad. There will almost certainly be some interesting ideas here, but the combination of the previous reservations will end up with it being misapplied, misunderstood and ultimately having limited widespread impact on practice.

    As evidence of the fad, I offer the photo below that comes from this blog post (which I reference again below).

    [Photo: heads of data explosion/exploitation]

  • Ethical related questions;
    A post from Johnathan MacDonald on “The Fallacy of Data Bubble Ignorance” includes the following quote

    People don’t want to be spied on. It’s an abuse of civil liberty. The fact that people don’t realise they are being spied on, is not justification to do so. Betting on a business model that goes against how society really works, will ultimately end in disaster.

    If this holds, does it hold for analytics? Will the exploitation of learning analytics lead to blowback from the learners?

    For some of the above reasons, I am not confident in the ability of most organisations to engage in the use of analytics in ways not destined to annoy and frustrate learners. Many are struggling to implement existing IT systems, let alone manage something like this. I can see the possibilities of disasters.

  • Teleological implementation.
    This remains my major reservation about all these types of innovations. In the end, they will be applied to institutional contexts through teleological processes, i.e. the change will be done to the institution and its members to achieve some set plan. Implementation will have little contextual sensitivity, and thus will see limited quality adoption and will be blind to some of the really interesting contextual innovations that could have arisen.

A bit of duplication and perhaps some limited logic, but a start.

Onto the week 1 readings.

Thesis acknowledgements version 0.5

What follows is an early attempt at the acknowledgements section of the thesis. My better half, also completing her PhD, queried why this section would be needed. I will be including it because there are some people that need to be acknowledged for their contributions.

Acknowledgements

The work described here has been made possible by a huge number of people. A number far too large to acknowledge appropriately within the space allowed. Consequently, I start by offering gratitude to all, before acknowledging a few groups and individuals.

I would like to start with the people who disagreed with the ideas expressed here and embodied in the Webfuse information system. The difficulties you have had with understanding and appreciating these ideas have pushed me further to understand and refine the ideas. On reflection, the fact that so many of you filled management or senior information technology positions within the organisation remains somewhat troubling. But this work would not be without you, thanks.

Perhaps more important are the tens of thousands of people who made use of the services provided by Webfuse over its years of service. Thanks for your patience and suggestions. It was your diversity that drove recognition of how important flexibility is and just how inflexible most IT systems actually are.

Responding to this flexibility is not something I could have done myself. The development of Webfuse owes much to the project students and IT staff who worked on or with Webfuse over its years of existence. There were many of you and you rarely received the recognition due. In no particular order, thank you: Andrew Newman, Andrew Whyte, Matthew Aldous, Arthur Watts, Bret Carter, Chriss Lenz, Adrian Yarrow, Russell Gibbings-Johns, Zhijie Lu, Paul Wilton, David Binney, Chris Richter, Shawn Dollin, Paula Turnbull, Damien Clark, Scott Bytheway, Matthew Walker, Stephen Jeffries and many more I have almost certainly forgotten. Special mention should be made of Derek Jones, the last man standing in terms of Webfuse and a major influence on its development.

Mary Cranston was also amongst the staff working on Webfuse. Her contributions to the support and use of Webfuse were as important and immeasurable as they were generally unrecognised and self-effacing. By far the largest shortcoming of the organisation we worked for was its failure to recognise just how much of a contribution Mary made to the organisation, perhaps only surpassed by its failure to recognise the magnitude of the contribution Mary might have made to the organisation. I cannot thank Mary enough.

Webfuse and the work described here would not have happened without Stewart Marshall. Stewart was the Foundation Dean of the Faculty of Informatics and Communication and, as described in Chapter 5, remains the only senior manager in my experience to not only understand ateleological development but also publicly embrace it as a strategy for the organisation he was responsible for. Without Stewart, chapter 5 would never have happened.

From the research perspective, I am deeply indebted to the Very Respectable Professor Gregor. Without Shirley’s knowledge, connections, influence and most especially patience this work would have been much less than it is. Perhaps my largest regret from this thesis is that I was not in a position to do more with Shirley’s contribution. The same might be said about the folk I have co-written with over recent years. I would like to make special mention of Kieren Jamieson as someone who made significant and under-utilised contributions to this and related work.

Lastly, I would like to thank my family and ask forgiveness for all the time I spent on Webfuse and this thesis that I should have been spending on you. A special thanks to Sandy for starting her own PhD, thereby providing the motivation necessary for me to complete this thesis before she completed hers.

A command for organisations? Program or be programmed

I’ve just finished the Douglas Rushkoff book Program or be Programmed: Ten commands for a digital age. As the title suggests the author provides ten “commands” for living well with digital technologies. This post arises from the titular and last command examined in the book, Program or be programmed.

Douglas Rushkoff

This particular command was of interest to me for two reasons. First, it suggests that learning to program is important and that more people should be doing it. As I’m likely to become an information technology high school teacher, there is some significant self-interest in there being a widely accepted importance to learning to program. Second, and the main connection for this post, is that my experience with and observation of universities is that they are tending “to be programmed”, rather than to program. In particular when it comes to e-learning.

This post is some thinking out loud about that experience and the Rushkoff command. In particular, it’s my argument that universities are being programmed by the technology they are using, and I’m wondering why. I am hoping this will be my last post on these topics; I think I’ve pushed the barrow for all it’s worth. Onto new things next.

Program or be programmed

Rushkoff’s (p 128) point is that

Digital technology is programmed. This makes it biased toward those with the capacity to write the code.

This also gives a bit of a taste for the other commands. i.e. that there are inherent biases in digital technology that can be good or bad. To get the best out of the technology there are certain behaviours that seem best suited for encouraging the good, rather than the bad.

One of the negative outcomes of not being able to program, of not being able to take advantage of this bias of digital technology is (p 15)

…instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery.

But is all digital technology programmed?

In terms of software, yes, it is all generally created by people programming. But not all digital technology is programmable. The majority of the time, money and resources being invested by universities (I’ll stick to unis, however much of what I say may be applicable more broadly to organisations) is in “enterprise” systems. Originally this was in the form of Enterprise Resource Planning systems (ERPs) like Peoplesoft. It is broadly recognised that modifications to ERPs are not a good idea, and that instead the ERP should be implemented in “vanilla” form (Robey et al, 2002).

That is, rather than modifying the ERP system to respond to the needs of the university, the university should modify its practices to match the operation of the ERP system. This appears to be exactly what Rushkoff warns against: “we are optimizing humans for machinery”.

This is important for e-learning because, I would argue, the Learning Management System (LMS) is essentially an ERP for learning. And I would suggest that much of what goes on around the implementation and support of an LMS within a university is the optimization of humans for machinery. In some specific instances that I’m aware of, it doesn’t matter whether the LMS is open source or not. Why?

Software remains hard to modify

Glass (2001), describing one of the frequently forgotten fundamental facts about software engineering, suggested that maintenance consumes about 40 to 80 percent of software costs, with 60% of the maintenance cost due to enhancement. That is, a significant proportion of the cost of any software system – on those figures, roughly 24 to 48 percent of the total – is adding new features to it. You need to remember that this is a general statement. If the software you are talking about is part of a system that operates within a continually changing context, then the figure is going to be much, much higher.

Most software engineering remains focused on creation. On the design and implementation of the software. There hasn’t been enough focus on on-going modification, evolution or co-emergence of the software and local needs.

Take Moodle. It’s an LMS, good and bad like other LMSs. But it’s open source and it is meant to be easy to modify. That’s one of the arguments wheeled out by proponents when institutions are having to select a new LMS. And Moodle and its development processes are fairly flexible. It’s not that hard to add a new activity module to perform some task you want that isn’t supported by the core.
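For a sense of what “adding a new activity module” involves, the following is a sketch of the bare bones of a hypothetical mod/minijournal module, based on my understanding of the standard Moodle 2 activity module conventions (a version.php plus a handful of lib.php callbacks). The module name is made up and the exact field names and version numbers should be checked against the current Moodle developer docs.

[code language=”php”]
// Sketch only: bare bones of a hypothetical Moodle 2 activity module,
// mod/minijournal. Field names/version numbers are my assumptions.

// --- mod/minijournal/version.php ---
$module->version  = 2011013000;   // plugin version, YYYYMMDDXX
$module->requires = 2010112400;   // assumed Moodle 2.0 build number

// --- mod/minijournal/lib.php ---
// Called when an instance of the activity is added to a course.
function minijournal_add_instance( $data ) {
    global $DB;
    $data->timemodified = time();
    return $DB->insert_record( 'minijournal', $data );
}

// Called when an existing instance is updated via the settings form.
function minijournal_update_instance( $data ) {
    global $DB;
    $data->id = $data->instance;
    $data->timemodified = time();
    return $DB->update_record( 'minijournal', $data );
}

// Called when an instance is deleted from a course.
function minijournal_delete_instance( $id ) {
    global $DB;
    return $DB->delete_records( 'minijournal', array( 'id' => $id ) );
}
[/code]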

The trouble is that Moodle is currently entering a phase which suggests it suffers much the same problems as most large enterprise software applications. The transition from Moodle 1.x to Moodle 2.0 is highlighting the problems with modification. Some folk are reporting difficulties with the upgrade process, others are deciding to delay the upgrade as some of the third-party modules they use haven’t been converted to Moodle 2. There are even suggestions from some that mirror the “implement vanilla” advice for ERPs.

It appears that “we are optimizing humans for machinery”.

I’m wondering if there is anyone doing research into how to make systems like Moodle more readily modifiable for local contexts. At the very least, looking at how/if the version upgrade problem can be improved, but also at the ability to modify the core to better suit local requirements. There are aspects of this there already. One of the difficulties is that to achieve this you would have to cross boundaries between the original developers, service providers (Moodle partners) and the practices of internal IT divisions.

Not everyone wants to program

One reason this will be hard is that not everyone wants to program. Recently, D’Arcy Norman wrote a post talking about the difference between the geeks and folk like his dad. His dad doesn’t want to bother with this techy stuff, he doesn’t want to “program”.

This sort of problem is made worse if you have an IT division that has senior management with backgrounds in non-IT work. For example, an IT director with a background in facilities management isn’t going to understand that IT is protean, that it can be programmed. Familiar with the relative permanence of physical buildings and infrastructure such a person isn’t going to understand that IT can be changed, that it should be optimized for the human beings using the system.

Organisational structures and processes prevent programming

One of the key arguments in my EDUCAUSE presentation (and my thesis) is that the structures and processes that universities are using to support e-learning are biased away from modification of the system. They are biased towards vanilla implementation.

First, helpdesk provision is treated as a generic task. The folk on the helpdesk are seen as low-level, interchangeable cogs in a machine that provides support for all an organisation’s applications. The responsibility of the helpdesk is to fix known problems quickly. They don’t/can’t become experts in the needs of the users. The systems within which they work don’t encourage, or possibly even allow, the development of deep understanding.

For the more complex software applications there will be an escalation process. If the front-line helpdesk can’t solve the problem it gets handed up to application experts. These are experts in using the application. They are trained and required to help the user figure out how to use the application to achieve their aims. These application experts are expert in optimizing the humans for the machinery. For example, if an academic says they want students to have an individual journal, a Moodle 1.9 application expert will come back with suggestions about how this might be done with the Moodle wiki or some other kludge with some other Moodle tool. If Moodle 1.9 doesn’t provide a direct match, they figure out how to kludge together functionality it does have. The application expert usually can’t suggest using something else.

By this stage, an academic has either given up on the idea, accepted the kludge, gone and done it themselves, or (bravely) decided to escalate the problem further by entering into the application governance process. This is the heavyweight, apparently rational process through which requests for additional functionality are weighed against the needs of the organisation and the available resources. If it’s deemed important enough, the new functionality might get scheduled for implementation at some point in the future.

There are many problems with this process

  • Non-users making the decisions;
    Most of the folk involved in the governance process are not front-line users. They are managers, both IT and organisational. They might include a couple of experts – e-learning and technology. And they might include a couple of token end-users/academics. Though these are typically going to be innovators. They are not going to be representative of the majority of users.

    What these people see as important or necessary, is not going to be representative of what the majority of academic staff/users think is important. In fact, these groups can quickly become biased against the users. I attended one such meeting where the first 10/15 minutes was spent complaining about foibles of academic staff.

  • Chinese whispers;
    The argument/information presented to such a group will have had to go through a Chinese-whispers-like game. An analyst is sent to talk to a few users asking for a new feature. The analyst talks to the developers and other folk expert in the application. The analyst’s recommendations will be “vetted” by their manager and possibly other interested parties. The analyst’s recommendation is then described at the governance meeting by someone else.

    All along this line, vested interests, cognitive biases, different frames of references, initial confusion, limited expertise and experience, and a variety of other factors contribute to the original need being morphed into something completely different.

  • Up-front decision making; and
    Finally, many of these requests will have to battle against already-set priorities. As part of the budgeting process, the organisation will already have decided what projects and changes it will be implementing this year. The decisions have been made. Any new requirements have to compete for whatever is left.
  • Competing priorities.
    Last in this list, but not last overall, are competing priorities. The academic attempting to implement individual student journals has as their priority improving the learning experience of the student. They are trying to get the students to engage in reflection and other good practices. This priority has to battle with other priorities.

    The head of the IT division will have as a priority of staying in budget and keeping the other senior managers happy with the performance of the IT division. Most of the IT folk will have a priority, or will be told that their priority is, to make the IT division and the head of IT look good. Similarly, and more broadly, the other senior managers on 5 year contracts will have as a priority making sure that the aims of their immediate supervisor are being seen to be achieved……..

These and other factors lead me to believe that as currently practiced, the nature of most large organisations is to be programmed. That is, when it comes to using digital technologies they are more likely to optimize the humans within the organisation for the needs of the technology.

Achieving the alternate path, optimizing the machinery for the needs of the humans and the organisation, is not a simple task. It is very difficult. However, by either ignoring or being unaware of the bias of their processes, organisations are sacrificing much of the potential of digital technology. If they can’t figure out how to start programming, such organisations will end up being programmed.

References

Robey, D., Ross, W., & Boudreau, M.-C. (2002). Learning to implement enterprise systems: An exploratory study of the dialectics of change. Journal of Management Information Systems, 19(1), 17-46.
