Assembling the heterogeneous elements for (digital) learning

Month: June 2013

Documenting the gap between "state of the art" and "state of the actual"

Came across Perrotta and Evans (2013) in my morning random ramblings through my PLN and was particularly struck by this

a rising awareness of a gap between ‘state of art’ experimental studies on learning and technology and the ‘state of the actual’ (Selwyn, 2011), that is, the messy realities of schooling where compromise, pragmatism and politics take centre stage, and where the technological transformation promised by enthusiasts over the last three decades failed to materialize. (pp. 261-262)

For my own selfish reasons (i.e. I have to work within the “state of the actual”) my research interests are in understanding and figuring out how to improve the “state of the actual”. My Moodlemoot’AU 2013 presentation next week is an attempt to establish the rationale and map out one set of interventions I’m hoping to undertake. This post is an attempt to make explicit some on-going thinking about this and related work. In particular, I’m trying to come up with a research project to document the “state of the actual” with the aim of figuring out how to intervene, but also, hopefully, to inform policy makers.

Some questions I need to think about

  1. What literature do I need to look at that documents the reality of working with current generation university information systems?
  2. What’s a good research method – especially data capture – to get the detail of the state of the actual?

Why this is important

A few observations can and have been made about the quality of institutional learning and teaching, especially university e-learning. These are

  1. It’s not that good.

    This is the core problem. It needs to be better.

  2. The current practices being adopted to remedy these problems aren’t working.

    Doing more of the same isn’t going to fix this problem. It’s time to look elsewhere.

  3. The workload for teaching staff is high and increasing.

    This is my personal problem, but I also think it’s indicative of a broader issue, i.e. much of the current practice aimed at improving quality assumes a “blame the teacher” approach. Sure, there are some pretty poor academics, but most of the teachers I know are trying the best they can.

My proposition

Good TPACK == Good learning and teaching

Good quality learning and teaching requires good TPACK – Technological Pedagogical and Content Knowledge. The quote I use in the abstract for the Moodlemoot presentation offers a good summary (emphasis added)

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements. (Mishra & Koehler, 2006, p. 1029)

For some people the above is obvious. You can’t have quality teaching without a nuanced and context-specific understanding of the complex relationships between technology, pedagogy and content. Beyond this simple statement there are a lot of different perspectives on the nature of this understanding, the nature of the three components and their relationships. For now, I’m not getting engaged in those. Instead, I’m simply arguing that

the better the quality of the TPACK, the better the quality of the learning and teaching

Knowledge is not found (just) in the teacher

The current organisational responses to improving the quality of learning and teaching are almost entirely focused on increasing the level of TPACK held by the teacher. This is done by a variety of means

  1. Require formal teaching qualifications for all teachers.

    Because obviously, if you have a teaching qualification then you have better TPACK and the quality of your teaching will be better. Which is obviously why the online courses taught by folk from the Education disciplines are the best.

  2. Running training sessions introducing new tools.
  3. “Scaffolding” staff by requiring them to follow minimum standards and other policies.

This is where I quote Loveless (2011)

Our theoretical understandings of pedagogy have developed beyond Shulman’s early characteristics of teacher knowledge as static and located in the individual. They now incorporate understandings of the construction of knowledge through distributed cognition, design, interaction, integration, context, complexity, dialogue, conversation, concepts and relationships. (p. 304)

Better tools == Better TPACK == Better quality learning and teaching

TPACK isn’t just found in the head of the academic. It’s also found in the tools, interactions etc. they engage with. The problem that interests me is that the quality of the tools etc. found in the “state of the actual” within university e-learning is incredibly bad, especially in terms of helping the generation of TPACK.

Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. Due, however, to the “machine-centered view of the design of machines and, for that matter, the understanding of people” (Norman, 1993, p. 9) our artifacts, rather than aiding cognition, “more often interferes and confuses than aids and clarifies” (p. 9). Without appropriately designed artifacts “human beings perform poorly or cannot perform at all” (Dickelman, 1995, p. 24). Norman (1993) identifies the long history of tool/artifact making amongst human beings and suggests that

The technology of artifacts is essential for the growth in human knowledge and mental capabilities (p. 5)

Documenting the “state of the actual”

So, one of the questions I’m interested in is just how well the current artifacts being used in institutional e-learning are helping “the growth in human knowledge and mental capabilities”.

For a long time, I’ve talked with a range of people about a research project that would aim to capture the experiences of those at the coal face to answer this question. The hoops I am currently having to jump through in trying to bring together a raft of disparate information systems to finalise results for 300+ students have really got me thinking about this process.

As a first step, I’m thinking I’ll take the time to document this process. Not to mention my next task, which is the creation/modification of three course sites for the courses I’m teaching next semester. The combination of both these tasks at the same time could be quite revealing.

References

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Perrotta, C., & Evans, M. A. (2013). Instructional design or school politics? A discussion of “orchestration” in TEL research. Journal of Computer Assisted Learning, 29(3), 260–269. doi:10.1111/j.1365-2729.2012.00494.x

Comparing Automatically Detected Reflective Texts with Human Judgements

The following is a summary of, and some thoughts on, the following paper

Ullmann, T. D., Wild, F., & Scott, P. (2012). Comparing Automatically Detected Reflective Texts with Human Judgements. 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning. 7th European Conference on Technology-Enhanced Learning (pp. 101–116). Saarbruecken, Germany.

My interest in this paper is as an addition to BIM to provide scaffolding for students in their reflections and also as part of the preparation for my Moodlemoot’AU 2013 talk next week.

Of course, it also allows me to engage with one of the current fads

The automated detection of reflection is part of the broader field of learning analytics, especially social learning content analysis [13]. (Ullmann et al, 2012, p. 102)

where [13] is

Ferguson, R., Shum, S.B.: Social learning analytics: five approaches. In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge. p. 23–33. LAK ’12, ACM, New York, NY, USA (2012), http://doi.acm.org/10.1145/2330601.2330616

In working through this I’m pointed to the proceedings of a 2011 workshop on “Awareness and Reflection in Learning Networks”, which is something I’ll need to return to, not to mention the 2nd and 3rd workshops.

It would appear that, just yet, this work isn’t quite ready for BIM. But who knows what has happened in the last year or so. Possibilities exist.

A distributed cognition aside

At the moment, I’m thinking that the “Moving Beyond” part of that presentation – which will show off what I’m thinking of working on with BIM – is going to be scaffolded with the following, which is from Bell & Winn (2000) and is their description of Salomon’s (1995) two broad “classes” of distributed cognition as applied to artefacts in a learning environment

  1. An individual’s cognitive burden can be off-loaded onto the artefact, which may not necessarily help the individual learn about what the artefact is doing.
  2. An artefact can be designed to reciprocally scaffold students in specific cognitive practices; it helps them in a process and, in doing so, can help them become able to perform the task without the artefact.

Ullmann et al’s (2012) work would seem to fit neatly into the second of those classes. I’m hoping it (or other work) will provide the insight necessary to scaffold students in learning how to reflect.

I’m also thinking that there’s another dimension to consider in the design of BIM (and e-learning tools in general): the identity of the individual. I’m thinking there are at least three or four different identities that should be considered:

  1. Student – as above, the person who’s using the tool to learn something.
  2. Teacher – the individual there to help the student. (My thinking is rooted in a formal education environment – it’s where I work – hence there is a need for a distinction here and my context also drives the remaining identities).
  3. Institutional support and management – the folk who are meant to help and ensure that all learning is of good quality.
  4. Artefact developers – the folk that develop the artefacts that are being used by the previous three roles.

I’m thinking that a tool like BIM should be concerned with providing functionality that addresses both “distributed cognition classes” for all roles.

Abstract – Ullmann et al (2012)

  • Paper reports on an experiment to automatically detect reflective and non-reflective texts.
  • 5 elements of reflection were defined and a set of indicators developed “which automatically annotate texts regarding reflection based on the parameterisation with authoritative texts”.
  • A collection of blog posts were then run through the system and an online survey used to gather human judgements for these texts.
  • The two data sets allowed a comparison of the quality of the algorithm versus human judgements.

Introduction

Reflection is important – “at ‘the heart of key competencies’ for a successful life and a well-functioning society” (Ullmann et al, 2012, p. 101).

Methods for assessing reflective writing are recent and not fully established. Some quotes about how most focus has been on theorising reflection and its use, with little on how to assess reflection.

Issues with manual assessment of reflection

  1. Time-consuming.
  2. Feedback comes a long time after the reflection.
  3. Reluctance to share reflective writing given the nature of reflection.

Situating the research

This fits within learning analytics and social learning content analysis

Two prominent approaches for automatically identifying cognitive processes

  1. Connection between cue words and acts of cognition
  2. Probabilistic models and machine learning algorithms.

References to earlier work with both approaches provided.

Elements of reflection

Points out that no model of reflection is currently agreed upon. Presents 5 elements based on the major streams of theoretical discussion

  1. Description of an experience.

    Sets the stage for reflection. A description of the important parts of the event. Might be a description of external events or the internal situation of the person. Common themes include conflict, self-awareness and emotions.

  2. Personal experience.

    Still some debate, but self-awareness and inner dialogue are indicators.

  3. Critical analysis

    Critical questions of the content, process and premises of experience to correct assumptions, beliefs etc.

  4. Taking perspectives into account

    Using a frame of reference based on dialogue with others, general principles, theory.

  5. Outcome of reflective writing

    New understanding/transformative and confirmatory learning. Sums up what was learned, concludes, plans for the future, new insight etc.

Acknowledges overlap between these elements.

A set of indicators were developed for each element.

Reflection detection architecture

A set of annotators – “bits of software” – were developed and combined to do the analysis, analysing the text and identifying certain elements (roughly) based on keywords or other analysis.

  • NLP annotator – highlighting elements of natural language.
  • Premise (assuming that, because, deduced from) and conclusion (as a result, therefore) annotator
  • Self-reference (I, me, mine) and pronoun-other (he, they, others) annotators
  • Reflective verb annotator (rethink, reason, mull over)
  • Learning outcome annotator (define, name, outline) and Bloom’s taxonomy annotator (remember, understand, apply, analyse)
  • Future tense annotator (will, won’t)

An analysis component aggregates and tries to infer knowledge from the annotations. IF-THEN rules are used to chain lower-level facts together.

This goes on until higher-level rules connect the chained facts with an element of reflection. The authors ended up using 16 such rules allocated to the five elements of reflection.
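
To make the idea of keyword-based annotators chained by IF-THEN rules a little more concrete, here is a minimal sketch. The cue words, thresholds and element names below are my own illustrative assumptions (loosely based on the annotators listed above), not Ullmann et al’s actual 16 rules or implementation.

```python
# Hypothetical sketch: keyword annotators whose counts are chained by
# IF-THEN style rules into "elements of reflection". Cue words and rules
# are illustrative only, not the rules used by Ullmann et al. (2012).
import re

ANNOTATORS = {
    "self_reference": ["i", "me", "mine", "myself"],
    "reflective_verb": ["rethink", "reason", "mull", "wonder"],
    "premise": ["because", "assuming that", "deduced from"],
    "conclusion": ["as a result", "therefore"],
    "future_tense": ["will", "won't"],
}

def annotate(text):
    """Count how often each annotator's cues appear in the text."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    return {
        name: sum(lowered.count(cue) if " " in cue else words.count(cue)
                  for cue in cues)
        for name, cues in ANNOTATORS.items()
    }

def infer_elements(counts):
    """IF-THEN rules chaining low-level annotations to elements of reflection."""
    elements = set()
    if counts["self_reference"] > 2:                        # inner dialogue
        elements.add("personal experience")
    if counts["premise"] >= 1 and counts["conclusion"] >= 1:
        elements.add("critical analysis")
    if counts["future_tense"] >= 1 and counts["reflective_verb"] >= 1:
        elements.add("outcome of reflective writing")
    return elements

post = "I wonder why the lesson failed. Because I rushed, therefore I will rethink my plan."
print(infer_elements(annotate(post)))
```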

Method

Indicators were developed iteratively with sample texts.

The big question: what "weight should be given to each indicator to form a reflective text" (p. 108).

The authors used 10 texts marked as prototypical reflective writings from the reflection literature to parameterise the analytics. This ended up with the following definition of the conditions for a reflective text

  • The indicators of the “description of experiences” fire more than four times.
  • At least one self-related question.
  • The indicators of the “critical analysis” element fire more than 3 times.
  • At least one indicator of the “taking perspectives into account” fires.
  • The indicators of the “outcome” element fire more than three times.

(p. 109)
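
Expressed as code, that decision rule looks roughly like the following sketch. The thresholds are the ones quoted above (p. 109); the dictionary keys are my own labels for the indicator counts, not names from the paper.

```python
# Rough sketch of the quoted decision rule for classifying a text as
# "reflective". The keys are my own labels for the indicator fire counts.
def is_reflective(fires):
    return (
        fires["description_of_experiences"] > 4
        and fires["self_related_questions"] >= 1
        and fires["critical_analysis"] > 3
        and fires["taking_perspectives"] >= 1
        and fires["outcome"] > 3
    )

example = {
    "description_of_experiences": 6,
    "self_related_questions": 1,
    "critical_analysis": 4,
    "taking_perspectives": 2,
    "outcome": 4,
}
print(is_reflective(example))  # True
```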

Goes on to describe the questionnaire used as a comparison. Blog posts were shown in random order with questions. Human judgements were reduced to 202. Data was gathered via Mechanical Turk, which led to some problems, handled by filtering some of the responses.

Text corpus

The experiment was done using the "Blog Authorship Corpus", a collection of 681,288 posts and 140 million words from 19,320 bloggers on blogger.com in August 2004. The authors took the first 150 blog files. Short blog posts (less than 10 sentences) and foreign-language posts were removed.

5,176 blog posts were annotated, producing 4m+ annotations and 170K+ inferences.
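
As an aside, the corpus filtering step described above is simple enough to sketch. This is only an illustration of the described filtering, not the authors’ code; the sentence splitter and the English-language heuristic are crude placeholder assumptions.

```python
# Illustrative sketch of the described corpus filtering: drop short posts
# (fewer than 10 sentences) and posts that don't look like English.
# The sentence splitter and language heuristic are crude placeholders.
import re

ENGLISH_STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "that"}

def sentence_count(text):
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

def looks_english(text, threshold=0.05):
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return False
    return sum(w in ENGLISH_STOPWORDS for w in words) / len(words) >= threshold

def filter_posts(posts):
    return [p for p in posts if sentence_count(p) >= 10 and looks_english(p)]
```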

Results

There is some value in the approach. The human judges were more likely to rate as reflective the texts identified as reflective by the system.

One of the limiting factors is the parameterisation – the text used to do this was limited and there is no large body of quality reflective text available. Important because the parameterisation influences the quality of detection.

The authors are doing more work on this.

Closes with

One possible application scenario especially useful for an educational setting is to combine the detection with a feedback component. The described reflection detection architecture with its knowledge-based analysis component can be extended to provide an explanation component, which can be used to feedback why the system thinks it is a reflective text, together with text samples as evidences.

References

Bell, P., & Winn, W. (2000). Distributed Cognitions, by Nature and by Design. In D. Jonassen & S. Land (Eds.), Theoretical Foundations of Learning Environments (pp. 123–145). Mahwah, New Jersey: Lawrence Erlbaum Associates.

Ullmann, T. D., Wild, F., & Scott, P. (2012). Comparing Automatically Detected Reflective Texts with Human Judgements. 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning. 7th European Conference on Technology-Enhanced Learning (pp. 101–116). Saarbruecken, Germany.

Mobile video to enhance the Work Integrated Learning of pre-service teachers

Thanks to @palbion I’m having a chat this afternoon with Chris Dann from the University of the Sunshine Coast about a project he’s involved with. The aim is to explore and establish whether there are some potential synergies there. The following is a summary of, and some reactions to, the paper that “started it all” (Dann and Allen, 2013). The abstract gives a good summary of the idea

The prevalence of mobile technology in the lives of educators begs the question how they can be best used to improve the formative and summative assessment of pre-service teachers in workplace learning. This paper outlines how one regional university in Australia has developed a purpose-built application for iPhone with a companion web to manage communication between all stakeholders in the pre-service teachers’ practicum. The iPhone has the capability of capturing video, photographic and written data about each criterion at any time. The paper also reports on early research on its suitability. Preliminary results have indicated that the system provides increased support for supervising teachers in their decision-making processes. Pre-service teachers welcomed greater opportunities for enhanced, visual feedback providing ongoing formative feedback and improved capacity for summative assessment. Opportunities for further research and design modifications are explored.

Given that the course I teach to pre-service teachers is called “ICTs and Pedagogy” and includes a three-week stint of Work-Integrated Learning (WIL), there’s certainly some potential to explore. Of course, there’s talk of the WIL component disappearing from my course.

The problem

Draws on the literature to suggest three main problems with assessment of WIL

  1. Inconsistencies in summative assessment decision-making.
  2. Little consideration of developing formative assessment to help the pre-service teacher.
  3. A need to increase the validity and reliability of school-based assessment of PSTs and to help mentors know what to assess and how to do it.

In my limited experience, there’s also a problem in coming up with a process that minimises what is expected of the mentor teacher.

The authors then mention a related problem, also faced here: how to grade students. Initially pass/fail, then a move to a five-point graded placement in response to accreditation agencies (interesting that this didn’t happen here), but then problems with that led to a return to pass/fail, in part because of the difficulty of moderation.

Links to broader WIL literature, including the observation “there are no simple assessment solutions to this holistic experience for students”. Also identifies two categories of assessment

  1. Learning product – e.g. lesson plans – “but these in themselves give no real indication of the worth of the teacher as a practicing professional”.

    The kludge we’re currently using is an assignment that requires the students to reflect on their lesson plans. A small step forward.

  2. Performance of a skill – but this is where the inconsistency problem really arises.

The solution

A collaborative research project to develop the "Pre-service teacher tracker (PTT)".

Research focus includes

criteria driven by external accreditation; criterion-based assessment processes within WIL courses; supporting documentation delivery and understanding; communication and assessment alignment between university, student and supervising teacher; moderation between pre-service teachers in diverse locations; and the collection of data to enable discrimination between pre-service teachers performance

Each course has different criteria and addresses different parts of external accreditation.

All stakeholders can see the feedback and progress to help with communication and moderation.

The system allows video recording of pre-service teacher actions with the intent of challenging student perceptions of performance. The mentor teacher can add data in the form of video, photos, comments, timelines, strategies and a rating out of 4.

The PTT has an accompanying web interface/data store.

Findings

Supervising teachers in a small pilot felt that it

  • Helped formatively assess and provide feedback.
  • Helped PSTs identify their own skills.

Qualitative data suggests PSTs saw it as useful, if a little challenging.

References

Dann, C. E., & Allen, B. (2013). Using mobile video technologies to enhance the assessment and learning of pre-service teachers in Work Integrated Learning (WIL). In R. McBride & M. Searson (Eds.), Society for Information Technology & Teacher Education International Conference 2013 (pp. 4231–4238). New Orleans, LA: AACE.

The kludge for marking learning journals

The following is a description of the kludge I put in place to mark the learning journals – see here for a description of the initial thinking behind the journal – that folk in the EDC3100 course this semester had to complete. It’s meant to record what I did, provide some food for further development and offer an opportunity for some initial reflection.

Final format

Each of the three course assignments contained a component worth 5% titled the “learning journal”. In this, the students were expected, for the relevant weeks of semester, to:

  • complete all the activities on the course site; and,
  • post a sequence of reflective posts on their personal blog.

As outlined in the ideas post, the student’s mark was based on:

  • what percentage of the activities they completed;
  • how many posts per week (on average) they published;
  • the word count of those posts;
  • the number of posts that contained links; and,
  • the number of posts that contained links to posts from other students in the course.

The intent was to encourage connections and serendipity and minimise students having to “reflect to order” in response to specific questions/criteria. Of course, that didn’t stop many from seeking to produce exactly what was required to obtain the mark they wanted. A 100-word average means exactly that, plus a bit of judicious quoting etc. Something for further reflection.

Activity completion

Each week the course site had a number of Moodle activities and resources, all with activity completion turned on. This means that Moodle tracks who has fulfilled the actions necessary to complete each activity, e.g. read a page, posted to a forum etc.

The reports of activity completion aren’t particularly helpful as I need to get the data into a Perl script, so the process is as follows (a rough sketch of the parsing step follows this list)

  1. Download the CSV file Moodle can export of activity completion.

    The CSV file for the course I just downloaded was 1.7Mb in size.

  2. Delete the columns for the activities that don’t belong to the required period.
  3. Save it locally.
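
The parsing referred to above is straightforward; here is a rough Python sketch (the script I actually use is Perl) of turning the activity-completion CSV export into a completion percentage per student. The column names are assumptions about the export format, not something to rely on.

```python
# Rough sketch (Python rather than the Perl actually used): turn Moodle's
# activity-completion CSV export into a completion percentage per student.
# Column names below are assumptions about the export format.
import csv

NON_ACTIVITY_COLUMNS = {"First name", "Surname", "Email address"}

def completion_percentages(csv_path, id_column="Email address"):
    """Return {student id: percentage of activities marked 'Completed'}."""
    results = {}
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        activities = [c for c in reader.fieldnames if c not in NON_ACTIVITY_COLUMNS]
        for row in reader:
            done = sum(1 for c in activities
                       if row[c].strip().lower() == "completed")
            results[row[id_column]] = 100.0 * done / len(activities)
    return results

# Example usage (assuming a trimmed export saved locally):
# print(completion_percentages("activity_completion.csv"))
```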

Blogs

I’ve written some Perl scripts that will parse the BIM database, evaluate the student posts and then combine that with the data from the activity completion CSV to produce the report (a rough sketch of this combination step follows the steps below). This report is circulated to the markers who manually copy and paste each student’s result into their assignment sheet. I’ve also got a version of the script that will email all the students a copy.

Of course, to get to this stage I’ve had to make sure that all of the students’ blogs are registered correctly with the version of BIM on my laptop. Then I need to

  1. Run the BIM mirror process to ensure that BIM has the most recent student posts.

    Currently 335 students have registered blogs and there are 8550 posts mirrored, for an average of about 25 posts per student. In reality, a number of students have withdrawn from the course for a variety of reasons.

  2. Dump the PHP BIM database and create a copy in the Perl database.

    Due to how I’ve got Perl and PHP installed they are using different MySQL database servers.

  3. Run either script.
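
As flagged above, here is a rough Python sketch of what the combination step does: take the mirrored posts plus the activity-completion percentages and produce a per-student report covering the criteria listed earlier. The real scripts are Perl, and the post fields below are illustrative assumptions rather than BIM’s actual schema.

```python
# Rough sketch (the real scripts are Perl) of combining mirrored blog posts
# with activity-completion percentages into a per-student journal report.
# The Post fields are assumptions, not BIM's actual database schema.
from dataclasses import dataclass

@dataclass
class Post:
    student: str
    words: int
    links: int           # links contained in the post
    links_to_peers: int  # links to other students' posts

def journal_report(posts, completion_pct, weeks):
    by_student = {}
    for p in posts:
        by_student.setdefault(p.student, []).append(p)
    report = {}
    for student, items in by_student.items():
        report[student] = {
            "activities_completed_pct": completion_pct.get(student, 0.0),
            "posts_per_week": len(items) / weeks,
            "average_words": sum(p.words for p in items) / len(items),
            "posts_with_links": sum(1 for p in items if p.links > 0),
            "posts_linking_to_peers": sum(1 for p in items if p.links_to_peers > 0),
        }
    return report

example = journal_report(
    [Post("s001", 250, 2, 1), Post("s001", 120, 0, 0)],
    completion_pct={"s001": 85.0},
    weeks=5,
)
print(example["s001"])
```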

The end result

Is a report that summarises results. But beyond this it’s a lot of extra work in overcoming human error that would have been avoided with a decent system. I’ve spent a fair chunk of the last week dealing with these errors, which mostly arise from the absence of a system giving students immediate feedback on problems, including

  • Telling students they’ve registered a URL that is either not a URL or not a valid RSS feed.

    Earlier problems arose from students making mistakes when registering their blog. BIM checks for this, but because BIM isn’t installed on the institutional servers I had to make do with the Moodle database activity and then manually fix errors. (A sketch of this sort of feed checking is included after this list.)

  • Warning students that their RSS feed is set to “summary” and not “full”.

    To encourage visitors to the actual blog, some blog engines have an option to set the feed to “summary” mode, where only the first couple of sentences of a post are shown in the feed. This is not useful for a system like BIM that assumes it’s getting the full post, especially when “average word count” is part of the marking mechanism.

    I’ve spent a few hours this week and more this semester helping recover from this situation. BIM needs to be modified to generate warnings about this so recovery can happen earlier.

  • Students editing posts.

    Currently, once BIM gets a copy of a post it doesn’t change it, even if the author makes a change. This caused problems because some students edited published posts to make last-minute changes. This is okay, but BIM’s assumption clashed with the practice.

    BIM does provide students with a way to view BIM’s copy of their posts. I believe this feature helps the authors understand that the copy in BIM can differ from the version on their blog, reducing this error.

  • Allowing students to see their progress.

    This week I’ve sent all students an email with their result. BIM does provide a way for students to see their progress/marks, but without BIM the first the students knew of what the system knew about them was when their marked assignments were returned. BIM, properly modified for the approach I’ve used here, would allow the students to see their progress and do away with the need for the email. It would allow problems to be nipped in the bud, reducing work for me and uncertainty for the students.
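
For what it’s worth, the feed checking mentioned in the first two items above is straightforward to sketch. This is a hypothetical illustration using the feedparser library, not BIM’s actual (PHP) implementation, and the heuristic for spotting “summary” mode is my own assumption.

```python
# Hypothetical sketch of checking a registered blog URL: is it a parseable
# RSS/Atom feed, and does it look like a "summary only" feed? Not BIM's
# actual PHP code; the summary-mode heuristic is a rough assumption.
import feedparser

def check_feed(url, min_words=60):
    feed = feedparser.parse(url)
    if feed.bozo or not feed.entries:
        return "not a valid RSS/Atom feed"
    entry = feed.entries[0]
    # Entries with a 'content' element usually carry the full post;
    # summary-only feeds tend to provide just a short 'summary'.
    if "content" in entry:
        return "ok (full content)"
    if len(entry.get("summary", "").split()) < min_words:
        return "warning: feed appears to be in 'summary' mode"
    return "ok"

print(check_feed("https://example.com/feed"))
```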

Was it a success?

I’ve been wondering over recent weeks – especially when I’ve been in the midst of the extra work that arose from having to fix the above problems – whether it was worth it. Did I make a mistake deciding to go with the blog-based assessment for this course in the absence of appropriate tool support? Even if the institution had installed BIM, BIM itself didn’t have all the tools to support the approach I’ve used this semester. BIM would have reduced the workload somewhat, but additional workload would still have been there.

“Was it worth it?” was a question I asked myself when it became obvious that at least some (perhaps many) students did “write for the marks”. I need to explore this a bit further, but it is obvious that some students made sure they wrote enough to meet the criteria. There was also some level of publishing the necessary posts in the day before the assignment was due. At least some of the students weren’t engaging in the true spirit of the assessment. But I don’t blame them; there were lots of issues with the implementation of this assessment.

Starting with the absence of BIM, which created additional workload and in part contributed to less than appropriate scaffolding to help the students engage in the task more meaningfully, especially in terms of better linkages to the weekly activities. I’m particularly interested, longer term, in how the assessment of the course and the work done by the markers can be changed from marking submitted assignments to actively engaging with students’ blog posts.

On the plus side, there was some evidence of serendipity. The requirement for students to link to others worked to create connections and at least some of them resulted in beneficial serendipity. There’s enough evidence to suggest that this is worth continuing with. There does of course need to be some more formal evaluation and reflection about how to do this, including work on BIM to address some of the problems above.

I’ve also learnt that the activity completion report in Moodle is basically useless. With the number of students I had, the number of activities to complete, and apparently the browser I was using, viewing the tabular data in the activity completion report in a meaningful way was almost impossible. Downloading the CSV into Excel was only slightly more beneficial. In reality, the data needed to be manipulated into another format to make it useful. Not exactly a report located in “The Performance Zone” talked about at the end of this post. On the plus side, this is informing some further research.

This whole experience really does reinforce Rushkoff’s (2010, p. 128) point about digital technology

Digital technology is programmed. This makes it biased toward those with the capacity to write the code.

Without my background in programming, developing e-learning/web systems and writing BIM, none of the above would have been possible. The flip side of this point is that what is possible when it comes to e-learning within Universities is constrained by the ideas of the people who wrote the code within the various systems Universities have adopted. Importantly, this may well be at least as big a constraint on the quality of University e-learning as the intentions of the teaching staff to use the tools and the readiness of the students to adapt to changes.

References

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Learning analytics, intervention and helping teachers

It seems to be the day for a backlash against learning analytics, or its parent big data. This morning my PLN has filtered to the top Taleb's "Beware the big errors of 'big data'" and "Why big data is not truth". Not that surprising to me given that I've argued that learning analytics in Universities has all the hallmarks of yet another fad.

As I mentioned in this presentation, my argument isn’t that “Learning analytics == crap”. My argument is that “How universities (and most organisations, for that matter) implement learning analytics == crap”. After all – to paraphrase some Sir Ken Robinson schtick about lifting standards – no-one is arguing against data-driven decision making (which is a main claim of big data). All decisions should be based on data. The problem is that the reality of most implementations will be that the data provided by learning analytics is likely to be horribly flawed, provided to the wrong people, and used to make decisions very badly.

As it happens, Damien, Col and I are all working on various related projects that are trying to figure out how to implement learning analytics within universities in a way that doesn’t equate to animal waste product. The following summarises some thinking out loud and the chasing of some initial ideas, and will hopefully inform what I’ll be talking about at Moodlemoot’AU 2013 in a couple of weeks.

In the end, this is more of an early exploration of the Performance Support Systems literature, which has shown some level of support for a direction we’re exploring, i.e. embedding learning analytics into the LMS and other tools currently being used in order to better enable action.

Performance support systems

A common refrain heard when institutional folk get together and chat about learning analytics goes something like this

But we already have all this data, why don’t we use it?
Because it’s all in separate systems.
So we’ll give lots of money to Vendor X to implement another piece of technology that will bring all this data together and provide dashboards for people to look at the data.

The only winner out of this approach is the vendor who chalks up another sale. The data could have been brought together via any number of technical means, probably at much less cost, and in a way that doesn’t tie the organisation to accessing the data through the reporting tool provided by the vendor. Dashboards are generally a waste of time because people don’t use them. Especially teaching staff. Even if the dashboards can provide the sort of contextual information that will help a teacher intervene with a student, the information is provided in a system that is a million miles away from the system where the teacher will intervene.

I’ve long thought that the obvious lens for looking at this is the long-quoted idea of “Performance Support”. The following is an initial exploration of some of the literature.

Raybould (1995, p. 11) offers the following definition with some added emphasis from me

  • Encompasses all the software needed to support the work of individuals (not just one or two specific software applications).
  • Integrates knowledge assets into the interface of the software tools, rather than separating them as add-on components. For example, company policy information may be presented in a dialog box message rather than in a separate online document.
  • Looks at the complete cycle including the capture process as well as the distribution process.
  • Includes the management of nonelectronic as well as electronic assets.

While not without flaw, the definition does get at some of what I’m interested in, including

  • Integrating all of the knowledge required to complete a task into the tool I use to complete the task.
  • Consider the complete cycle.

    i.e. don’t just build the data warehouse and expect it to be used. Think about how people can be supported in using the data. After all, the EDUCAUSE definition of learning analytics is (my emphasis added)

    the use of data, statistical analysis, and explanatory and predictive models to gain insights and act on complex issues.

  • All brought together with a focus on the “work of individuals”.

    i.e. don’t provide generic dashboards and leave it to the teachers to figure out what to do. Figure out what tools can be built to help teachers perform their work better (a small illustration of this idea follows the list).
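
To make the “work of individuals” point slightly more concrete, here is a hypothetical sketch of the sort of tool I have in mind: a check that runs in a context the teacher already works in (e.g. alongside a course roster) and produces something immediately actionable, rather than a chart on a separate dashboard. The data structures, fields and thresholds are entirely made up for illustration.

```python
# Hypothetical illustration only: flag students with no recent course access
# and give the teacher something actionable (a pre-filled mailto link) in the
# place they already work. All names, fields and thresholds are made up.
from datetime import date, timedelta
from urllib.parse import quote

def students_needing_a_nudge(roster, today, quiet_days=14):
    """Return (name, mailto link) pairs for students with no recent activity."""
    nudges = []
    for s in roster:
        if today - s["last_access"] > timedelta(days=quiet_days):
            link = "mailto:{}?subject={}".format(
                s["email"], quote("Checking in about EDC3100"))
            nudges.append((s["name"], link))
    return nudges

roster = [
    {"name": "A. Student", "email": "a@example.com", "last_access": date(2013, 5, 20)},
    {"name": "B. Student", "email": "b@example.com", "last_access": date(2013, 6, 10)},
]
print(students_needing_a_nudge(roster, today=date(2013, 6, 12)))
```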

Can’t see the trees for the forest

Which brings up the biggest barrier. The processes, systems, structures and people set up to implement “enterprise” systems like learning analytics focus on the forest. Or if they have any conception of the trees in the forest, the trees are all pine trees of identical size, shape and requirements. Being able to see the trees, to focus on the work of the individuals, is not easy. But that’s not the point. The point is that enterprise systems and processes can never effectively focus on the “work of individuals”. They don’t even try.

And it’s important, at least for this argument. Villachica et al (2006, p. 540) argue that the purpose of PSS is “expert-like performance from day 1 with little or no training” and that this can only occur within an appropriate zone – the “performance zone”.

The Performance Zone

Oh and here’s a good quote that reinforces my point above

There is also widespread agreement that maintaining performance within this zone requires users to be able to learn, use, and reference necessary information within a single context and without breaks in the natural flow of performing their jobs. (Villachica et al, 2006, p. 540)

I can see myself using that a few times.

Kert and Kurt (2012, p. 486) cite Sleight (1998) to identify the following requirements (amongst others) of an EPSS

  • computer assisted.
  • accessible exactly at the time the task is realised.
  • in the study environment.
  • controllable by the user.
  • ability to easily bring it up to date and fast access to information.

References

Kert, S. B., & Kurt, A. A. (2012). The effect of electronic performance support systems on self-regulated learning skills. Interactive Learning Environments, 20(6), 485–500.

Raybould, B. (1995). Performance Support Engineering: An Emerging Development Methodology for Enabling Organizational Learning. Performance Improvement Quarterly, 8(1), 7–22.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance Support Systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

Animation over time of links between student posts

Following on from a previous post sharing some of his visualisations of the links between the blog posts of EDC3100 students, Nick has provided some video showing how the links were made over time.

When I have some time it will be interesting to explore how events within the course (e.g. feedback on assignment 1 etc) impacted the connections between students.

It will also be interesting to explore why, even at day 77, there are a couple of outliers not connected with the others.
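
One way I might start that exploration: build a dated edge list from the mirrored posts and count new connections per week, then line those counts up against course events. A hypothetical sketch, with the post structure assumed rather than taken from BIM’s actual schema:

```python
# Hypothetical sketch: count new student-to-student connections per week from
# dated blog posts, to line up against course events (e.g. assignment 1
# feedback). The post structure is an assumption, not BIM's actual schema.
from collections import defaultdict
from datetime import date

posts = [
    # (date published, author, other students linked to)
    (date(2013, 3, 4), "alice", {"bob"}),
    (date(2013, 3, 6), "bob", {"carol", "alice"}),
    (date(2013, 4, 15), "carol", {"alice"}),
]

def new_connections_per_week(posts, start):
    seen = set()
    per_week = defaultdict(int)
    for when, author, targets in sorted(posts, key=lambda p: p[0]):
        week = (when - start).days // 7 + 1
        for target in targets:
            edge = frozenset((author, target))
            if edge not in seen:   # only count a connection the first time
                seen.add(edge)
                per_week[week] += 1
    return dict(per_week)

print(new_connections_per_week(posts, start=date(2013, 3, 4)))
```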

Schools and computers: Tales of a digital romance

It’s the last week of semester, EDC3100 ICTs and Pedagogy is drawing to a close and I’m putting together the last bit of activities/resources for the students in the course. Most are focused on the last assignment and in particular a final essay that asks them to evaluate their use of ICTs while on their three week Professional Experience where they were in schools and other locations teaching. Perhaps the most challenging activity I’d like them to engage in is questioning their assumptions around learning, teaching and the application of ICTs. A particularly challenging activity given that much of what passes for the use of ICTs – including much of my own work – in formal education hasn’t been very effective at questioning assumptions.

As one of the scaffolds for this activity I am planning to point the students toward Bigum (2012) as one strategy to illustrate questioning of assumptions. The following is a summary of my attempt to extract some messages from Bigum (2012) that I think are particularly interesting in the context of EDC3100. It also tracks some meanderings around related areas of knowledge.

Background

The rapid pace of change in computing is made clear through some stats from Google’s CEO – every two days the world produces more information “than had been produced in total from the origin of the species to 2003” (p. 16).

Yet, if you go back 30 years, schools had more computers than the general community, a situation that is now reversed. Later in the paper Finger and Lee (2010) are cited as finding

For the class of 30 children the total home expenditure for computing and related technologies was $438,200. The expenditure for the classroom was $24,680. Even allowing for the sharing in families, the difference between the two locations is clearly significant.

Rather than transform or revolutionise the processes and outcomes of schooling, “it is hard to suggest that anything even remotely revolutionary has actually taken place”.

But once schools adjusted to these initial perturbations, schooling continued on much as it always had. More than this, schools learnt how to domesticate new technologies (Bigum 2002), or as Tyack and Cuban (1995, p. 126) put it, “computers meet classrooms, classrooms win.”

This observation fits with the expressed view that

schools have consistently attempted to make sense of “new” technologies by locating them within the logics and ways of doing things with which schools were familiar. (p. 17)

and the broader view of the "grammar of school", in particular some of Papert's observations, specifically the interpretation of the computer/ICTs as a "teaching machine" rather than other interpretations (in Papert's case, constructionist-related ones).

(Side note: in revisiting Papert’s “Why School Reform is Impossible” I’ve become more aware of this distinction Papert made

“Reform” and “change” are not synonymous. Tyack and Cuban clinched my belief that the prospects really are indeed bleak for deep change coming from deliberate attempts to impose a specific new form on education. However, some changes, arguably the most important ones in social cultural spheres, come about by evolution rather than by deliberate design — by what I am inspired by Dan Dennett (1994) to call “Darwinian design.”

This has some significant implications for my own thinking that I need to revisit.)

Budding romance

The entry of micro-computers into schools around the 80s was in part enabled by their similarity to calculators that had been used since the mid 1970s.

The similarities allowed teachers to imagine how to use the new technologies in ways consistent with the old… for a technology to find acceptance it has to generate uses.

which led to the development of applications for teaching and administrative work.

This led to the rise of vendors selling applications and the marketing of computers as “an unavoidable part of the educational landscape of the future”. At this stage, computers may have become like television, radio and video players – other devices already in classrooms (connecting somewhat here with Papert’s “computers as teaching machine” comment above). But a point of difference arose from the increasing spread of computers into other parts of society as solutions to a range of problems. ICTs were increasingly linked “with such seemingly desirable characteristics as ‘improvement’, ‘efficiency’ and, by extension, educational status” (p. 19).

Perhaps the strongest current indicator of this linkage (at least for EDC3100 students) is the presence of the ICT Capability in the Australian Curriculum. Not something that has happened with the other “teaching machines”.

Hence it became increasingly rational/obvious that schools had to have computers. What was happening with computers outside schools became an “evidence surrogate” for schools, i.e.

if ICTs are doing so much for banking, newspapers, or the military, it stands to reason that they are or can do good things in schooling. (p. 20)

This leads to comparison studies; each new wave of ICTs (e.g. iPads) comes hand in hand with a new raft of comparison studies, studies that are "like comparing oranges with orangutans".

However, despite the oft-cited “schools + computers = improvement” claim, what computers are used for in schools is always constrained by dominant beliefs about how schools should work. (p. 20)

Domestic harmony

This is where the “grammar of school” or the schema perspective comes in.

Seeing new things in terms of what we know is how humans initially make sense of the new. When cars first appeared they were talked about as horseless carriages. The first motion pictures were made by filming actors on a stage and so on.

School leaders and teachers make decisions about which technologies fit within schools' current routines and structures. If there is no fit, then ban it. Not to mention that "the more popular a particular technology is with students the greater the chance it will be banned".

While the adoption of ICTs into schools begins with an aim of improvement, it often ends up with “integrating them into existing routines, deploying them to meet existing goals and, generally, failing to engage with technologies in ways consistent with the world beyond the classroom” (p. 22).

Summarising the pattern

Schools enter a cycle of identifying, buying and domesticating the “next best thing” on the assumption that there will be improvements to learning. With the increasing time/cost of staying in this game, there are more attempts to measure the improvement. Factors that are not measurable get swept under the carpet.

The folly of looking for improvement

The focus on improvement “reduces much debate about computers in schools to the level of right/wrong; good/bad; improved/not improved”.

Beyond this is the idea that “ICTs change things”. Sproull and Kiesler’s (1991) research

clearly demonstrates that when you introduce a technology, a new way of doing things into a setting, things change and that seeking to “assess” the change or compare the new way of doing things with the old makes little sense

An approach that is holistic, that does not separate the social and the technological, allows a shift from looking at what has improved to looking at what has changed. These are changes that "may have very little to do with what was hoped for or imagined".

Three different mindsets

This type of approach enables two mindsets currently informing debates and practice to be questioned. Those mindsets are

  1. Embrace ICTs to improve schools

    This mindset sees schools doing well in preparing students for the future. The curriculum is focused on getting the right answer and teaching is focused on how to achieve this. Research here performs comparison studies looking for improvement, and the complexities of teaching with ICTs are embodied in concepts such as TPACK.

    This is the mindset that underpins much of what is in EDC3100.

  2. Schools cannot be improved, by ICTs or any other means.

    The idea that ICTs herald a change as significant as movable type. Connections with the de-schooling movement: schools, being based on a broadcast logic, will face the same difficulties facing newspapers, record companies etc. A mindset in which improving schools is a waste of time.

The paper proposes a different mindset, summarised as:

  • Schools face real challenges and need to change.
  • Rather than replace the current single solution with another, there is a need to "encourage a proliferation of thinking about and doing school differently".
  • There is a need to focus on change and not measurement, on the social and not just the technical.
  • That this can help disrupt traditional relationships including those between: schools and knowledge, knowledge and children, children and teachers, and learners and communities.

References

Bigum, C. (2012). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative Approaches to New Technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15–28). London: Springer.
