Assembling the heterogeneous elements for (digital) learning

Month: March 2016

Competence with digital technology: Teacher or environment?

Apparently there’s a problem with digital skills in Australian schools. Only 52% of Year 10 students achieved a minimum standard of digital competence, and the teachers tasked to help develop that competence feel they aren’t competent themselves. Closer to home, I’ve previously pointed out that the pre-service teachers I work with are far from digital natives seamlessly harnessing digital technologies to achieve their learning, teaching, and life goals.

Given the perceived importance of digital competence, something must be done. Otherwise “we run the real risk of creating a generation of digitally illiterate students”.

But what?

Mcleod and Carabott suggest

explicit teaching of digital competence through professional development for teachers. This is also important in teacher education programs…

digital competence tests should also be required for teacher registration

What do I think of those suggestions?

Well, they certainly have the benefit of being familiar to those involved in formal education, expanding as they do existing ideas of testing teachers.

But I’m not sure that’s a glowing recommendation. There’s an assumption that those familiar practices are working and should be replicated in other areas.

Limited views of knowledge – blame the teacher

Beyond that they seem based on a fairly limited view of knowledge. Di Blas et al (2014) talk about the knowledge required to integrate digital technologies into teaching as having

consistently been conceptualized as being a form of knowledge that is resident in the heads of individual teachers (p. 2457)

This is the type of view that sees the problem of a perceived lack of digital competence as something to be fixed only by filling the heads of teachers with the necessary digital competence, and then testing whether or not those heads have been filled appropriately. If they haven’t been filled properly, then it tends to be seen as the teacher’s fault.

The limitations of this view mean that I don’t think any approach based on it will be successful. (After all, a deficit model is not a great place to start.)

A distributive view

In this paper (Jones, Heffernan, & Albion, 2015) some colleagues and I draw on a distributive view of learning and knowledge to explore our use as teacher educators of digital technologies in our learning and teaching. Borrowing and extending work from Putnam and Borko (2000) we see a distributive view of learning and knowledge focused on digital technologies as involving at least four conceptual themes:

  1. Learning/knowledge is situated in particular physical and social contexts;
  2. It is social in nature;
  3. It is distributed across the individual, other people, and tools; and, that
  4. Digital technologies are protean.

How does this help with the digital competence of school students, teachers, and teacher educators? It suggests we think about what these themes might reveal about the broader context within which folk are developing and using their digital competence.

Schools and digital technologies

Are schools digitally rich environments? Each year I teach about 400 pre-service teachers who head out into schools on Professional Experience for three weeks. During that time they are expected to use digital technologies to enhance and transform their students’ learning. As they prepare for this scary prospect, the most common question from my students is something like

My school has almost no (working) digital technologies. What am I going to do?

Many schools are not digitally rich environments.

Even when schools do have digital technologies, those technologies are often seen in ways that mirror reports from Selwyn and Bulfin (2015)

Schools are highly regulated sites of digital technology use (p. 1)…

…valuing technology as

  1. something used when and where permitted;
  2. something that is standardized and preconfigured;
  3. something that conforms to institutional rather than individual needs;
  4. something that is a directed activity. (p. 15)

As teacher educators with large percentages of online students, our digital environment is significantly richer in terms of the availability of digital technologies. However, in our 2015 paper (Jones, Heffernan, & Albion, 2015) we report that the digital technologies we use for our teaching match the description from Selwyn and Bulfin. Our experience echoes Rushkoff’s (2010) observation that “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (p. 15). More recently I worked on a paper (Jones and Schneider, in review) with a high school teacher that identified the same problem in schools: digital technologies that were inefficient, got in the way of effective learning and teaching, and failed to mirror real-world digital technology experience.

How do students and especially teachers learn to value and develop their digital competence in such an environment?

In the recent paper (Jones and Schneider, in review) we wondered what might happen if this environment was modified to actually enable and encourage staff and student agency with digital technologies – to allow people to optimise the technology for what they want to do, rather than optimise what they want to do to suit the technology. If this was done:

  • Would it lead to digital environments that were more effective in terms of learning and teaching?
  • Would it demonstrate the value of digital technologies and computational thinking to teachers in their practice?
  • Would this improve their digital competence?

If you could do it, I think it would positively impact all of these factors. But doing so requires radically rethinking a number of assumptions and practices that underpin most of education and the institutional use of digital technologies.

I’m not holding my breath.

Instead, I wonder how long before there’s a standardised test for that.

References

Di Blas, N., Paolini, P., Sawaya, S., & Mishra, P. (2014). Distributed TPACK: Going beyond knowledge in the head. In Society for Information Technology & Teacher Education International Conference (pp. 2457–2465). Retrieved from http://www.editlib.org/p/131154

Jones, D., Heffernan, A., & Albion, P. (2015). TPACK as shared practice: Toward a research agenda. In L. Liu & D. Gibson (Eds.), Research Highlights in Technology and Teacher Education 2015 (pp. 13–20). Waynesville, NC: AACE. Retrieved from http://www.editlib.org/d/151871

Putnam, R., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4–15. Retrieved from http://www.jstor.org/stable/1176586

Selwyn, N., & Bulfin, S. (2015). Exploring school regulation of students’ technology use – rules that are made to be broken? Educational Review, 1–17. doi:10.1080/00131911.2015.1090401


Some simple analysis of student submissions

The last post outlined the process for extracting data from ~300 student submissions. This one outlines what was done to actually do some analysis on that data.

The analysis has revealed

  • Around 10% of the submissions have an issue with the URL entered.
  • About 16 lesson plans have been evaluated by more than one student.
  • At least 100 students have evaluated a lesson plan found online, with the Australian Curriculum Lessons site being the most popular source.
  • An education-based site set up by the industry group Dairy Australia appears to be the most progressive in terms of applying a CC license to resources (apart from the OER commons site)
  • It’s enabled allocating related assignments to a single marker, but the process for doing so with the Moodle assignment management system is less than stellar.

Time to do some marking.

What?

Having extracted the data, the following tests can/should be done

  1. Is the lesson plan URL readable?
  2. Are there any lesson plans being evaluated by more than one student?
    Might be useful to allocate these to the same marker.
  3. What is the distribution of where lesson plans are sourced from? Did most use their own lesson plan?
  4. Pre-check whether the lesson can be used?

Is the lesson plan readable?

Code is simple enough, but using LWP::Simple::head is having some problems.

Let’s try LWP::UserAgent.  That’s working better.
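For the record, a minimal sketch of the check – assuming the URLs arrive on the command line, where the real script pulls them from the extracted spreadsheet data:

[code lang="perl"]
#!/usr/bin/perl
# Check whether each URL responds to a HEAD request.
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new( timeout => 10 );

foreach my $url (@ARGV) {
    my $response = $ua->head($url);
    print $response->is_success
        ? "OK     $url\n"
        : "BROKEN $url (" . $response->status_line . ")\n";
}
[/code]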

Seems that if they successfully enter the URL it’s readable.

3 students have used file:/ as a URL – not online.

Distribution of lesson plans

Aim here is to group all the URLs based on the hostname. This will then allow the generation of some statistics about where the lesson plans are being sourced from.
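The grouping itself is simple enough. A minimal sketch, assuming the URLs have already been extracted (the URI module does the hostname parsing):

[code lang="perl"]
#!/usr/bin/perl
# Count lesson plan URLs by hostname.
use strict;
use warnings;
use URI;

my @urls = @ARGV;    # stand-in for the URLs extracted from submissions

my %count;
foreach my $url (@urls) {
    # Bad URLs (no scheme, file:, mangled http) end up under ERROR
    my $host = eval { URI->new($url)->host } || 'ERROR';
    $count{$host}++;
}

foreach my $host ( sort { $count{$b} <=> $count{$a} } keys %count ) {
    print "$count{$host}\t$host\n";
}
[/code]

Findings include the following counts for domains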

  • 1 – domain = UNI (a problem)
  • 3 – that don’t have a domain
  • 96 that appear to be using their own site as the source, indicating their own lesson plan
  • 19 domains indicating a lesson planning site
    Accounting for 107 students
  • 32 with some sort of ERROR

That’s still not 300. Ahh, we seem to have some problems with entering URLs correctly; common mistakes include

  • Just leaving off the http:// entirely
  • Mangling bits of http:// (e.g. ttp:// or http//)
  • Using a local file i.e. file:////
  • Having the URL as “To complete”
  • Having the URL empty

Fix those up as much as possible.  Most of these appear to have put something in the cover sheet – if they have one.

Duplicate URLs

There are 16 lesson plans that are used by more than one student: most by two, four by three, one by four, and one by five students.

Identifying these means I can allocate them to the same marker. Would be nice if there was an easier way to do this in Moodle.

Pre-check use

At least 107 of the students are using a lesson plan found online. The question is whether or not they can use that lesson plan as per copyright, licensing conditions, etc.

I could manually check each site, but perhaps to shortcut it I should check the spreadsheets of a couple of students, see what argument they’ve mounted for fair use, and then confirm that.

The sites to check are

  • http://www.oercommons.org
    Not surprisingly under a CC-NC-SA license, though the NC is a little bit of a surprise.
  • http://www.australiancurriculumlessons.com.au
    Requires permission to upload.
  • http://readwritethink.org
    Seems to allow reuse.
  • http://www.capthat.com.au
    Copyright applies, but they’ve been good in granting permission for students to use for this assignment.
  • http://www.dairy.edu.au
    Dairy Australia showing off their online nous by having applied a CC license.
  • http://cdn.3plearning.com
    Most students appear to have had to request permission.

Interestingly, there is some large variation between people using the same site. Should allocate these to the same marker – it will cut down on time for them.

Setting up the analysis of student submissions

A couple of weeks ago I wrote this post outlining the design of an Excel spreadsheet EDC3100 students were asked to use for their first assignment. They’ll be using it to evaluate an ICT-based lesson plan. The assignment is due Tuesday and ~140 have submitted so far. It’s time to develop the code that’s going to help me analyse the student submissions.

Aim

The aim is to have a script that will extract each student’s responses from the spreadsheet they’ve submitted and place those responses into a database. From there the data can be analysed in a number of ways to help improve the efficiency and effectiveness of the marking process and to explore some different practices (the earlier post has a few random ideas).

The script I’m working on here will need to

  1. Be given a directory path containing unpacked student assignment submissions.
  2. Parse the list of submitted files and identify all the spreadsheets
  3. Exclude those spreadsheets that have already been placed into the database.
    Eventually this will need to be configurable.
  4. For all the new spreadsheets
    1. Extract the data from the spreadsheet

At this stage, I don’t need to stick the data in a database.

Steps

  1. Code that when given a directory will extract the spreadsheet names
  2. Match the filename to a student id.
  3. Parse an individual Excel sheet
    1. Rubric
    2. About
    3. What
    4. How
    5. Evaluation
    6. RAT
  4. Mechanism to show the values associated with question number in the sheet.
    Look at a literal data structure.
  5. Implement a test sheet
  6. See which student files will give me problems.

Extract spreadsheet names


This is where the “interesting” naming scheme used by the institutional system will make things difficult (a sketch of pulling the scheme apart follows the list below). The format appears to be

SURNAME Firstname_idnumber_assignsubmission_file_whateverTheStudentCalledTheFile.extension

Where

  • SURNAME Firstname
    Matches the name of the student with the provided case (e.g. “JONES David”)
  • idnumber
    Appears to be the id for this particular assignment submission.
  • assignsubmission_file_
    Is a constant, there for all files.
  • whateverTheStudent…
    Is the name of the file the student used on their computer. It appears likely that some students will have been “creative” with the naming schemes.  Appears at least one student has a file name something.xlsx.docx

Match the filename to a student id

This is probably going to be the biggest problem area. I need to connect the file to an actual unique student id. The problem is that the filename doesn’t contain a unique id that is associated with the student (e.g. the Moodle user id for the student, or the institutional student number).  All it has is the unique id for the submission.

Hence I need to rely on matching the name.  This is going to cause problems if there are students with the same name, or students who have changed their name while the semester is under way. Thankfully it appears we don’t currently have that problem.
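A minimal sketch of that matching, assuming a hash of names to ids built from the enrolment data (the names and ids below are made up):

[code lang="perl"]
#!/usr/bin/perl
# Match a "SURNAME Firstname" from the filename to a student id,
# warning when the name is ambiguous.
use strict;
use warnings;

my %students = (
    'JONES David' => [12345],
    'SMITH Jane'  => [ 23456, 34567 ],    # two students, same name
);

sub id_for_name {
    my ($name) = @_;
    my $ids = $students{$name} or return;    # no such name
    warn "Ambiguous name: $name\n" if @$ids > 1;
    return $ids->[0];
}

print id_for_name('JONES David'), "\n";
[/code]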

Test with 299 submitted files

Assignment due this morning – let’s test with the 299 submitted files.

Ahh, issues with people’s names: apostrophes.

Problem files

Apparently 18 errors out of 297 files.  Where did the other 2 go?

“Bad” submissions include

  1. 10 with only 1 file submitted;
    All 10 only submitted the checklist. Not the cover sheet or the lesson plan.
  2. 26 with only 2 files submitted (3 total required)
    1. 25 – Didn’t submit the lesson plan
    2. 1 – Didn’t submit the checklist
    3. 0 – Didn’t submit the coversheet
  3. 18 files that appear to have the bad xlsx version problem described below.

That implies that some of the people who submitted 3 files didn’t submit an Excel file?

Oh, quite proud in a nerdy, strange way about this:

[code lang="bash"]
# For each submission id (the 2nd _-separated field of the filename)
# that has exactly 3 files, check whether one of those files is a
# spreadsheet; report the ids that have none.
for name in `ls | cut -d_ -f2 | sort | uniq -c | sort -r | grep ' 3 ' | sed -e '1,$s/^.*[0-9] //'`
do
    files=`ls *$name*`
    echo $files | grep -q ".xls"
    if [ $? -eq 1 ]
    then
        echo "found $name"
    fi
done
[/code]

I’m assuming there will be files that can’t be read. So what are the problems?

Seems they are all down to Microsoft’s “Composite Document File V2 Format”. These files will open in Excel, but challenge the Perl module I’m using.
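One way to spot the problem files without opening Excel is the magic bytes: real .xlsx files are zip archives (starting with “PK”), while the Composite Document File V2 format starts with the bytes D0 CF 11 E0. A minimal sketch:

[code lang="perl"]
#!/usr/bin/perl
# Classify spreadsheet files by their magic bytes, ignoring extension.
use strict;
use warnings;

foreach my $file (@ARGV) {
    open my $fh, '<:raw', $file or die "Can't open $file: $!";
    read $fh, my $magic, 4;
    close $fh;

    if ( $magic eq "PK\x03\x04" ) {
        print "$file: zip archive (real .xlsx)\n";
    }
    elsif ( $magic eq "\xD0\xCF\x11\xE0" ) {
        print "$file: Composite Document File V2 (old .xls container)\n";
    }
    else {
        print "$file: something else entirely\n";
    }
}
[/code]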

Out of the 297 submitted so far, 18 have this problem.  Going to leave those for another day.

LATs, OER, TPACK, and GitHub

The following is an attempt to think about the inter-connections between the paper “Open Educational Resources (OERs) for TPACK Development” presented by Mark Hofer and Judi Harris at SITE’2016, the Moodle OpenBook project, and my own teaching.

First is a description of the open courses they’ve developed and what the students do. Second is some early thinking about how this might link to EDC3100 and the Moodle open book project.

Learning Activity Types as OER/open courses

The paper offers a rationale and description of the development of two short, open courses designed to help primary and secondary pre-service teachers use learning activity types (LATs) to develop their TPACK.

Hofer and Harris (2016) describe them this way

The asynchronous, online “short courses” for preservice teachers that we have created are divided into eight brief, sequential modules…Each module begins with an overview and learning goal for the segment, and is presented as video-based content that includes narrated slides, interviews with practicing teachers, imagery, and additional online resources. Each of the videos ranges from 2-8 minutes in length, and includes verbatim closed captioning.

In completing the courses the students

  • Reflect on examples of ICT and pedagogy they’ve previously seen.
  • Select three lesson plans from a curated collection of plans from pre-service teachers.
  • Analyse those lesson plans: objectives, standards, types of learning activities, how learning is assessed, and the use of digital technologies.
  • Practice replacing an ill-fitting activity type in another sample lesson with activity types that better fit the learning goal.
  • Consider substituting different technologies in the sample plan and discuss reasoning.
  • Review portions of interviews with an experienced teacher.
  • Use selected plans from before to choose a LAT taxonomy and explore that taxonomy.
  • Think about replacing activity types and technologies, and discuss.
  • Create their own lesson plan.
  • Subject their lesson plan to two self-tests called “Is it worth it?”

Hofer and Harris (2016)

We consciously erred on the side of the materials being perhaps too prescriptive and detailed for more experienced and/or advanced learners, since we suspected that it would be easier for other users to remove some of the content than to have to create additional supports.

Moodle open book and my course

In EDC3100 we cover similar ground and the content of these short courses could be a good fit. However, the model used in the course is a little different in terms of implementation, so the short course content would need to be modified a bit. Something already anticipated by Hofer and Harris (2016)

This is why we have released the courses in a modularized (easier-to-modify) format, along with an invitation to mix, remix, and otherwise customize the materials according to the needs of different groups of teacher-learners and the instructional preferences of their professors. The Creative Commons BY-SA license under which these short courses were released stipulates only that the original authors (and later contributors) are attributed in all succeeding derivatives of the work, and that those derivatives are released under the same BY-SA license

My course is implemented within Moodle. It uses the Moodle book module to host the content. The Moodle open book project has connected the Moodle book module with GitHub, the aim being to make it easier to release content in the Moodle book to broader audiences – to enable the sharing and reuse of OERs, just like these courses.

While the technical side of the project is basically finished (it could use some more polishing before official release), there’s a large gulf between having a tool that shares Moodle book content via GitHub and actually using it to share and reuse OERs, especially OERs that are actually used in more than one context. The LAT short courses appear to provide a perfect test bed for this.

Hofer and Harris (2016)

For teacher educators who would like to try the course “as is,” we have developed the content as a series of modules within the BlackBoard learning management system and have exported it as a content package file which can be imported into a variety of other systems. With either no changes or minor edits, the short courses in their current forms can be used within existing educational technology and teaching methods courses.

I’m assuming that the content package file will be able to be imported into Moodle, and perhaps even into the Book module.  It would be interesting to explore how well that process works and how immediately usable I (and others) think the content might be in EDC3100.

If I then make changes in response to the context and share them via the Moodle open book and GitHub, it would be interesting to see how useful/usable those changes and GitHub are to others. In particular, how useful/usable the GitHub version would be in comparison to the LMS content package and the current “Weebly” versions of the courses.

While GitHub provides enhanced functionality for version control (Weebly offers none), I’m not convinced that teacher educators will find that functionality accessible, in terms of technical knowledge, existing processes and practices around web content, and perhaps the contextual changes made. Also, while GitHub handles multiple versions very well, the Moodle open book doesn’t yet support this well.

Putting the LAT courses into the Moodle open book seems to provide the following advantages:

  1. Provide a real test for the Moodle open book that will reveal its shortcomings.
  2. Provide a useful resource (optional for now) for EDC3100 students and also potentially for related courses I’ll need to develop in the future.
  3. Enable the community around LATs and the short courses to experiment with a slightly different format.

I think I’ve convinced myself to try this out with the secondary LAT course as an initial test case. Just have to find the time to do it.

SITE'2016: LATs, OER, and SPLOTs?

SITE’2016 is almost finished, so it’s past time I started sharing some of the finds and thoughts that have arisen. There’s been a (very small) bit of movement around the notion of open. I’ll write about LATs and OER and some possibilities in another post. This post is meant to explore the possibility of adapting some of the TPACK learning activities shared by @Keane_Kelly during her session into SPLOTs.

It’s really only an exploration of what might be involved, what might be possible, and how well that might fit with the perceived needs I have in my course(s), while at the same time making something that breaks out of those confines. I’m particularly interested in Riel and Polin’s ideas around the residue of experience and rich learning environments.

Over time, the residue of these experiences remains available to newcomers in the tools, tales, talk, and traditions of the group. In this way, the newcomers find a rich environment for learning. (p. 18)

As most of my teaching and software development work has had to live within an LMS, I’m also a novice at the single web page application technology (and SPLOTs).

What is a SPLOT?

A SPLOT is

Simplest Possible Learning Online Tools. SPLOTs are developed with two key principles in mind: 1) to make sharing cool stuff on the web as simple as possible, and 2) to let users do so without having to create accounts or divulge any personal data whatsoever.

The work by @cogdog builds on WordPress, but I’m wondering if something similar might be achieved using some form of single web page application?

i.e. a single web page that anyone could download and start using. No need for an account. Someone teaching a course might include this in a class. Someone with a need to learn a bit more about the topic could just use it and gain some value from it.

TPACK learning activities

Kelly’s presentation introduced four learning activities she uses to help students in an Educational Technology course develop their understanding about the TPACK framework. They are fairly simple, mostly offline, but appear to be fairly effective. My question is whether they can be translated into an online form, and an online form that is widely shareable – hence the interest in the SPLOT idea.

Vocabulary target review

In this activity the students are presented with a target (using a Google drawing) and a list of vocabulary related to TPACK (though this could be used for anything). The students then place the vocab words on the target: the more certain they are of the definition, the more “on target” they place the words. This then feeds into discussions.

At some level, through the use of Google drawing it’s already moving toward a SPLOT.

What if the students are entirely online, and especially with a tendency to asynchronous study? How might this be adapted to anyone, anytime, and provide them with access to the residue of experience of previous participants?

One approach might be something like a single web-page application that

  1. Presents that target and a list of vocab words that the user can place as appropriate.
    This list of vocab words could be hard-coded into the application. Or, perhaps the specific application (you could produce different versions for different vocab) could be linked to a Google doc or some other source of JSON data. The application gets the list of vocab words from that source.
  2. Once submitted the application could allow the user to view the mappings from previous users. This could be filtered by various ways.
    The assumption is that the application is storing the mappings of users somewhere. The representation might highlight other mappings that are related in some way to the user’s map.
  3. View provided definitions.
    Having provided their mapping the user could now gain access to definitions of the terms. There might be multiple definitions. Some put into the system at the start, some contributed by other users (see next step).
  4. Identify the most useful definitions.
    The interface might provide a mechanism by which the user can “like” definitions that help them.
  5. Provide a definition.
    Whether this occurs at this stage or earlier, the user could be asked to provide a definition for one or more terms after/prior to seeing the definitions of others.
  6. Remap their understanding.
    Having finished (more activity could be designed in the above) the user moves the words to represent any change in their understanding of the words. The system could track and display how much change has occurred and compare it with the changes reported by others.

TPACK game

The second activity is a version of the TPACK game (or this video) – a game that is already available online, but not as a flexible object that people can manipulate and reuse. Immediate thought is that the following might help make a “more SPLOT” version of the TPACK game

  1. Provide a single web page application that implements a variety of ways to interact with the TPACK game.
    For example,

    • The current version has people trying to identify the third element of TPACK given the other two, which appears to be the version used by @Keane_Kelly.
    • Another version might be to show a full set of three and ask people to reflect on whether or not the combination is a good fit, one they’ve seen before, not a good fit, and why.
  2. Provide the capacity to provide answers to the application that are stored and perhaps reused.
    For example, the two different versions of the game above could be combined so that if someone suggests a particular combination in the first one that has already been “evaluated”, they could be shown what others have thought of it and why.
  3. Provide the capacity to share and modify the values for T, P and C.
    The current online version of the game plus @Keane_Kelly appear to have their own sets of values for T, P, and C. Kelly mentioned the need to keep the Technology updated over time, but there’s broader value in keeping a growing list of values for all. As there is also for customising some: some technologies won’t always make sense in all environments, and the content in particular might be something to customise, e.g. for a specific curriculum or topic area.

    If it were an online application that used some sort of shared data space, it could be grown through use. It should also be possible to modify which data store is used, to support customisation to a particular context.


Mapping the digital practices of teacher educators: Implications for teacher education in changing digital landscapes

The following slides are for an (award winning, no less) paper presented at SITE’2016 titled Mapping the digital practices of teacher educators: Implications for teacher education in changing digital landscapes.


What to expect/look for from SITE'2016?

Fountain

I’m spending this week attending the SITE’2016 conference (SITE = Society for Information Technology and Teacher Education). This is my first SITE and the following outlines some of my expectations and intent.

It’s big

Site is one of a raft of conferences run by AACE. I’ve been to two of them previously: EdMedia and Elearn. These are big conferences: 1000+ delegates, up to and beyond 10 simultaneous sessions, lots of in-crowds and cliques. Lots of times when there is nothing you’re really interested in, and lots of times when there are multiple things you are very interested in. A lot of really good stuff lost in the mass.

Observations that have been borne out by my first glance at the program.  Too much to take in and do justice to.

At face value, a fairly traditional large conference, with the same breadth from simple to complex, from repetition to real innovation, from boring to mind-blowing. Probably the same ratio as well.

As I’m far from being an extroverted and expert networker, I instead rely on actively trying to make some connections between what I see and what I’m doing/going to do.

Join a clique?

While our paper didn’t get an overall paper award, it was successful in winning a TPACK SIG Paper Award – and our previous paper was also TPACK related and won a paper award. This might suggest a “clique” with which I might have some connection, and there are a couple of related papers that sound interesting.

There also appear to be other “cliques” based around computational thinking, design thinking/technologies, and ICT integration by pre-service teachers (more generally than TPACK). All of these are interests of mine; they connect directly to my teaching.

I’m thinking a particular focus for the next few days will be identifying and sharing ideas for using digital technology in school settings with the current EDC3100 crew.

There was one explicit mention of OER in the titles. Pity I can’t get access to the content of talks yet (thanks to how I was registered and the closed way the organisers treat the proceedings – online now, but only for those registered).

Time to get the presentation finished.

 

OEP and Initial Teacher Education: Moving on from the horsey, horseless carriage

Horsey, Horseless Carriage

Earlier this week four colleagues from three different universities submitted an application to an internal grant scheme around Open Educational Practice. What follows is an excerpt from that application. This idea evolved out of some earlier thinking. We find out how we went in April.

If the project is successful, it needs to be open and connected. Hence pointers to interesting and related folk and work are more than welcome, especially suggestions for potentially provocative “thought leaders, disruptors and other ragamuffins”.

Why?

The project team are all teacher educators. We come from three different institutions and bring to the project three courses focused on the Technologies learning area.

We believe that open educational practices (OEP) have the potential to help teacher educators transform learning and teaching and respond to challenges unique to initial teacher education. We believe that OEP can help improve course development for USQ’s new Bachelor of Education. We believe that, successfully implemented, OEP can help create a generation of teachers for whom OEP is embedded in who they are and what they do.

But OEP has a horsey horseless carriage problem (Bigum, 2012). Most use of OEP is designed not to “disrupt the smooth running routines” (Bigum, 2012, p. 35) of existing educational practices and institutions. Open textbooks are still textbooks. Open courses are still courses.

We want to escape the established practices associated with OEP and initial teacher education. We want to answer questions such as:

  • What might ITE look like if it were transformed with OEP?
  • What are the challenges and potential benefits to such a radical transformation?
  • How might those be addressed and harnessed in the development of courses in USQ’s new Bachelor of Education?

 Aim

We aim to develop a range of potential scenarios where every aspect of the USQ course EDM8006 might be radically transformed through OEP. A particular focus of that transformation is on how participants from EDM8006 and the other two courses taught by project members can fruitfully engage in OEP that will connect and engage pre-service teachers across courses within institutions; across tertiary institutions; with practising teachers; with the research community; and, with the broader education profession.

 How?

To achieve this goal we plan to

  1. Be provoked by thought leaders, disruptors and other ragamuffins.
    Lure (with loads of moola) people that have leapt off the bleeding edge of OEP and ITE to talk via video conference and engage with the project as they can and like. Task them specifically with identifying the “dogmas of the past” holding back OEP within higher education and proposing how we might think anew, and act anew.
  2. Find out what’s already going on around OEP and ITE.
    ITE has, to varying levels, already been engaging in OEP. It’s been used in everyday practice and written about in the literature. We need to be aware of what has gone on and what is the current state of play. To try and get some idea of the propensities and dispositions within ITE.
  3. Develop an initial set of scenarios for EDM8006.
    Focused on this specific course and drawing on the inspiration of the previous two steps develop a range of different scenarios around how OEP can transform EDM8006.
  4. Share the scenarios with different stakeholders.
    Via a range of methods share the scenarios with anyone and everyone involved with the EDM8006 course, the institution, ITE, OEP, and anything else we can identify. The aim is to have the scenarios undergo critical review to identify where they will clash with established assumptions, who else might be interested in these scenarios, how they might be implemented, and hopefully much better suggestions.
  5. Distill what has been learned into a set of findings and recommendations.

References

Bigum, C. (2012). Edges, exponentials and education: Disenthralling the digital. In L. Rowan & C. Bigum (Eds.), Transformative Approaches to New Technologies and Student Diversity in Futures Oriented Classrooms: Future Proofing Education (pp. 29–43). Springer. doi:10.1007/978-94-007-2642-0

 

Early analysis of Moodle data

A small group of teacher educators that I work with are starting to explore some research ideas around engagement, initial teacher education, and in particular the questions that arise out of the Quality Indicators for Learning and Teaching (QILT).

For that project and others I need to get back into analysing institutional Moodle data. The following is a recording of some initial forays on a longer journey. It’s all basically data wrangling. A necessary first step to something more interesting.

To do

Much of the following is just remembering how I’ve configured my local system (a need that points to bad and interrupted practice)

  1. Do I have the data in a database?
    Yes. Postgres9.1 | Databases | studydesk2015 | moodle | tables
  2. Can I access this via Perl/PHP etc.
    Woohoo!  Yes.  Perl is a go (yes, I’m old).  ~Research/2016/QiLTEers/Analysis/myScripts
  3. Can I run some initial simple analysis
    Time to borrow some of Col’s work

Hits and grades

Let’s try the old standard, the pattern between hits on a course site and grades.

Still using the same tables for the same purpose

Col’s work was from a while ago. The Moodle database schema – especially logs – has moved on. Do his scripts work with the current data I have? Does the data I have even come with the tables his scripts use?

mdl_logstore_standard_log – tick

Ahh, but Moodle has changed its approach to logging. My data has two tables: the one mentioned and mdl_log (the old version). That led to some wasted time – thanks for cluing me in, Randip.

Clicks

Experiment with 3 courses (some with 2 offerings) and see if I can get total clicks for that course.
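A minimal sketch of the query (connection details are placeholders and the course ids arrive on the command line; the real scripts borrowed from Col do more):

[code lang="perl"]
#!/usr/bin/perl
# Total clicks per course from the (new) Moodle standard log table.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:Pg:dbname=studydesk2015', '', '',
    { RaiseError => 1 } );

foreach my $courseid (@ARGV) {
    my ($clicks) = $dbh->selectrow_array(
        'SELECT count(*) FROM mdl_logstore_standard_log WHERE courseid = ?',
        undef, $courseid );
    print "$courseid: $clicks clicks\n";
}
[/code]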

Results are

  • 139,392 – 4th year course, S1
  • 220,362 – Big 1st year course, S1
  • 639,750 – Biggish 3rd year course, S1
  • 308,399 – Big 1st year course, S2
  • 185,675 – Biggish 3rd year course (but smaller offering), S2

Clicks per student

Raw total clicks isn’t that useful. What about the clicks per student average?

The 4th year course was showing too many students. What’s the go there?  Ahh, the query to identify students is returning duplicates.

Course | Total Clicks | # Students | Clicks / Student
4th year course, S1 | 139,392 | 175 | 796.5
Big 1st year course, S1 | 220,362 | 215 | 1024.9
Biggish 3rd year course, S1 | 639,750 | 323 | 1980.7
Big 1st year course, S2 | 308,399 | 451 | 683.8
Biggish 3rd year course, S2 | 185,675 | 90 | 2063.1


Clicks and Grades

Next question is if I can produce the slightly more useful pattern between participation and grade.
Average student hits on course site/discussion forum for high staff participation courses

That seems to be working, and some success with caching.

Here’s the first semester 2015 offering of the biggish 3rd year course from the table above (a course I teach).

EDC3100_2015_1.png

And here’s the second semester 2015 offering of the same course.  The S1 offering has both on-campus and off-campus students.  The S2 offering is online only.

EDC3100_2015_2.png

What’s left to do

A lot.  But doing the above has started building the foundation scripts that will help transform the raw institutional data into something that more people can do more analysis with.

Bigger picture tasks to do are (not necessarily in this order)

  1. Polish and build out the data wrangling foundation.
    1. Identify the formats most useful for the next level of the process
    2. Improve the implementation of the scripts
    3. Build out the functionality of the scripts
  2. Identify the questions we want to explore
    1. Break down by student type
    2. Investigate the impact of timing on students
    3. LMS usage frameworks/course signatures
    4. Explore Moodle Book usage
    5. Include discussion forum participation
    6. Explore the use of links by staff and impact.

And many, many more.


Setting up an Excel checklist

For a brand new first assignment for EDC3100 the students are being asked to find a lesson plan that uses digital technologies to enhance learning (ICT and Pedagogy), and evaluate it against a checklist. The following documents my explorations about how to set up this checklist.

Current status

Have test version that appears to be working. Need to test it on different platforms. A task for tomorrow.

If you’re an EDC3100 student, then do try downloading it and taking a look: is it going to work for you? Remember, it is an early test. I need to do more testing myself.

Why?

The rationale for this assignment includes the following:

  1. Broaden the students’ awareness of what is possible with (ICT and Pedagogy).
  2. Make them aware of some ways to evaluate how ICT and Pedagogy is being used.
  3. Help them question just how they can use a resource they found online.
  4. Create a “database” of information that I can analyse.

The last point may not sound all that educational, but the hope is that being able to “mine” (i.e. use digital technologies to analyse) the students’ responses and the markers’ judgements around this task will enable a range of new practices that will enhance/transform learning and teaching.

Some initial examples of what might be possible

  • Pre-marking checks.
    e.g. students are required to include a URL to a lesson plan. A program could check that the URL is actually correct. In a perfect world, it would warn the student before allowing them to finalise submission – but the LMS isn’t flexible enough for that.
  • Marker allocation.
    e.g. if one of the markers has an interest in mathematics, allocating her all of the maths-related evaluations might be a good idea.
  • Supporting moderation and providing summary feedback.
    e.g. after marking, a program could analyse how all the students have performed and generate summary feedback. Feedback that could be used to inform the moderation process and also to provide students an overall picture of how everyone went.
  • Providing a shareable database of evaluated lesson plans.
    The assignment has 300+ students finding a lesson plan and evaluating its use of ICT and Pedagogy. These evaluations are then marked by practicing teachers. In an environment where there is abundant information, these evaluations might help focus attention on what’s actually “good”. e.g. here’s a list of all the lessons that transform (as per the RAT framework) student learning using ICT, rather than here’s a list of lessons that use ICT. At the very least, this could be useful within the course.

But before any of those is possible, I have to figure out an appropriate method to create the checklist.

Requirements

A good method is going to meet the following requirements

  1. Easy/efficient/familiar for the students to use. Students will need to:
    • check boxes; and,
    • write/copy and paste small sections of text.
  2. Easy/efficient/familiar for the marker
    Markers will need to

    • read and understand student responses;
    • indicate right/wrong (check boxes);
    • write small sections of text (comments/feedback);
    • make judgements against a rubric; and,
    • calculate a total
  3. Work with the existing technology, including
    • Be submitted/returned via Moodle;
    • Be a format that can be analysed via programs.

The two most obvious options are an Excel spreadsheet, or a Word document. Yep, pandering to closed formats, but then most of the open formats break the first two requirements.

I am assuming that students will have access to Excel, though that might be an ask. They may need to resort to Google Sheets or some other tools; the question will be whether doing this gets in the way of any script.

What can I analyse programmatically?

I should probably double check which formats I can actually write programs to analyse.

Excel works nicely – even down to the checkbox.  Word is being more difficult.

I’ll go with Excel, though I may be pandering to my prejudices.

Setting up the checklist in Excel

The requirements are:

  • About 90 questions separated into four sections
    That might seem a bit much, but a fair number of those are covered by lists of ICT to choose what is being used.

    Having the different sections on different sheets could be useful.  Might also challenge the students, but that’s a good thing.

  • Large % of the questions are just checkboxes.
    Have tested that I can get that to work.
  • Another portion of questions require a checkbox plus a textbox to include proof
  • Small number are just a textbox
  • One has a table like structure
  • Marking
    • Almost all the questions have the marking indicating right or wrong
      What should the default value be?  Wrong or right?
    • A couple of questions require a textbox to make a judgement call
    • Would like the rubric for the assignment in the spreadsheet and for it to be auto-filled in by the marker’s actions

Set up different worksheets – that’s working. Got a format that looks okay, and some questions going in. Checkboxes.

Can I read the test sheet from a script?

Absolutely, multiple sheets, no worries.  Looking good.
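For what it’s worth, a minimal sketch of that reading, assuming Spreadsheet::ParseXLSX from CPAN (which handles the multiple worksheets):

[code lang="perl"]
#!/usr/bin/perl
# Dump every cell from every worksheet in an .xlsx checklist.
use strict;
use warnings;
use Spreadsheet::ParseXLSX;

my $workbook = Spreadsheet::ParseXLSX->new->parse( $ARGV[0] )
    or die "Can't parse $ARGV[0]\n";

foreach my $sheet ( $workbook->worksheets ) {
    print 'Sheet: ', $sheet->get_name, "\n";
    my ( $row_min, $row_max ) = $sheet->row_range;
    my ( $col_min, $col_max ) = $sheet->col_range;
    for my $row ( $row_min .. $row_max ) {
        for my $col ( $col_min .. $col_max ) {
            my $cell = $sheet->get_cell( $row, $col ) or next;
            printf "  (%d,%d) %s\n", $row, $col, $cell->value;
        }
    }
}
[/code]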


Using resources appropriately

The following is intended to be an example that will be used in the course I’m teaching. It’s meant to demonstrate appropriate ways to reuse resources that have been created in different ways. It’s also an opportunity to explicitly test my understanding. So feel free to correct me.

The idea is that how and if you can use a resource (be it words, audio, video etc) depends on who created the resource, copyright, and any additional conditions that have been applied.

Using a resource I created

The following image is a photo taken by me. I’m the copyright owner, so I’m free to use it any way I like. No need to reference or give attribution.

If I’d taken this image as part of preparing teaching materials for my paid work for the University of Southern Queensland, then I would have to ask their permission to use this image here, as the University (currently) retains copyright ownership of materials produced for teaching purposes.

Eating in the bath

There’s no need to include any attribution on this image, as I own the copyright.

Using a public domain image

The following image – taken from a book from the 1800s – is in the public domain. There are no restrictions on how I (or you) can use this image.

Image from page 363 of "Encyclopédie d'histoire naturelle; ou, traité complet de cette science d'après les travaux des naturalistes les plus éminents de tous les pays et de toutes les époques: Buffon, Daubenton, Lacépède, G. Cuvier, F. Cuvier, Geoffroy Sa

With public domain resources, there’s no need for an attribution, but it would be nice to do.

Using a Creative Commons image

The following image was taken by Daisuke Tashiro, who has chosen to add to this image a Creative Commons license which allows me to reuse the image as long as I fulfill the conditions of the license, including appropriate attribution of the image.

To properly attribute the image, I make use of the ImageCodr service.

If I were to use the above image without the attribution – just the image itself – I would be breaking the terms of the license.
However, I can currently link to the image without any attribution and without breaking any copyright conditions.

Using a copyrighted image

The following image is copyrighted. All rights reserved. While I can link to this image without breaking copyright, if I embed it in this blog post I’m likely to get into trouble.

Unless I ask the copyright holder for permission to use the image. As I have known the copyright holder for a long time, I’ve been able to do this quite easily and quickly. However, if you don’t know the copyright holder, obtaining permission may take quite some time, and may not happen at all.


Copyright © (2012) Colin Beer – used with permission

If I don’t get permission from the copyright holder, I can’t use this image. Even if I include a nice attribution of the resource, I still can’t use it.

What is that last image about?

The image is a little interesting in the context of the course. It indicates that there is a potential relationship between the final grade a student achieves in a course and the week of term when the student first accesses the course website. i.e. if you first access a course website in week 5, you are likely to get a lower grade than students who accessed the course website earlier.


Producing OPML files for EDC3100 student blogs

EDC3100 tries to get students engaged with writing their own blogs and following the blogs of others via a feed reader. Yes, just a bit old fashioned.

But then one of the problems with doing something a bit different is that it takes a fair bit of extra work to implement. Once you automate this bit of extra work it creates a bit of inertia that prevents change. Not only because I don’t want to lose the effort that went into automating the process, but also because I know that if I did something different and more modern, I’d have to invest more time in automating that process (i.e. working around the limitations of the current institutional learning environments).

So documenting the process.

Students create and register their blog on the LMS

About 250 have done this so far. Another 100 to go.  But can’t wait, need to get the OPML files out to students so they can start making connections.

Get the data that identifies students by specialisation

By default the institutional learning environment doesn’t provide this. That’s why I had to spend Friday doing this.

Though I do now have to update the data

  • Get the most recent participants.
  • Get the most recent registered blogs.
  • Double check “can’t find data” students.
    • Find a buggy example
    • Check the local course enrolment – shows up
    • Check users_extras – not there.  That’s the problem. Don’t have the extra data from student records for students who weren’t enrolled a couple of weeks before the start of semester.

Run the script

Once the data is available, I can configure and run a script that will produce the OPML files (a sketch of the generation follows the list below).

  • Fix the configuration settings
  • Do something with the NOPLAN students
  • Modify the script to handle change in data format and the dirty data
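The OPML itself is just XML. A minimal sketch of the generation – the blog title and feed URL here are made up; the real script pulls them from the local database by specialisation:

[code lang="perl"]
#!/usr/bin/perl
# Generate an OPML file from a hash of blog titles and feed URLs.
# A real version would escape XML special characters in the titles.
use strict;
use warnings;

my %blogs = (
    'Example student blog' => 'https://example.wordpress.com/feed/',
);

print <<'HEADER';
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>EDC3100 student blogs</title></head>
  <body>
HEADER

foreach my $title ( sort keys %blogs ) {
    print qq{    <outline type="rss" text="$title" xmlUrl="$blogs{$title}"/>\n};
}

print "  </body>\n</opml>\n";
[/code]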

Test the OPML files

All appears to be good.

Write the instructions for students

That’s the next task.

Preparing my digital "learning space"

The following documents the (hopefully) last bit of extra work I have to undertake to prepare the digital “learning space” for EDC3100, ICT and Pedagogy. It’s work that has taken most of my working day, at a time when I can’t really afford it. But it’s time I have to spend if I want to engage effectively in one of the most fundamental activities in teaching – know thy student.

End result

The work I’ve done today allows me to easily access from within the main digital learning space for EDC3100 (the Moodle course site) three different types of additional information about individual students.

It’s also an example of how the BAD mindset is able to work around the significant constraints caused by the SET mindset and in the process create shadow systems, which in turn illustrates the presence of a gap (i.e. yawning chasm) between what is provided and what is required.

The shadow system gap. Adapted from Behrens and Sedera (2004)

What are they studying? What have they done before?

This student is studying Early Childhood education. They’ve completed 21 prior courses, but 5 of those were exemptions. I can see their GPA (blurred out below). They are studying via the online mode and are located in Queensland.

Screen Shot 2016-03-04 at 1.17.07 pm

How much of the course activities they’ve completed and when

This particular student is about halfway through the first week’s material. They made that progress about 5 days ago. Looks like the “sharing, reflecting and connecting” resource took a while for them to complete – more so than the others, almost two hours.

Screen Shot 2016-03-04 at 1.17.15 pm

What they’ve written on their blog and how they are “feeling”?

This student has written two blog posts. Both are fairly positive in the sentiment they express, though the second is a little less positive in outlook.

Screen Shot 2016-03-04 at 1.26.04 pm

Reasons for the post

There are a number of reasons for this post:

  1. Reinforce the point about the value of an API infrastructure for sharing information between systems (and one that’s open to users).
  2. Document the huge gap that exists between the digital learning spaces universities are providing and what is actually required to implement useful pedagogies – especially when it comes to what Goodyear and Dimitriadis (2013) call “design for orchestration” – providing support for the teacher’s work at learn time.
  3. Make sure I document the process to reduce the amount of work I have to do next time around.
  4. Demonstrate to the EDC3100 participants some of the possibilities with digital technologies, make them aware of some of what happens in the background of the course, and illustrate the benefits that can come from manipulating digital technologies for pedagogical purposes.
  5. Discover all the nasty little breaks in the routine caused by external changes (further illustrating the unstable nature of digital technologies).

What will I be doing

I’ll be duplicating a range of institutional data sources (student records and Moodle) so that I can implement a range of additional pedagogical supports, including those illustrated above.

Hopefully, I’ll be able to follow the process vaguely outlined from prior offerings. (Yep, that’s right: I have to repeat this process for every course offering. It would be nice to automate.)

Create new local Moodle course

I have a version of Moodle running on my laptop. I need to create a new course on that Moodle which will be the local store for information about the students in my course.

Need to identify:

  • USQ moodle course id – 8036
  • local course id – 15
    Create the course in Moodle and get the id
  • group id – 176
    Create the group in the course
  • context id – 1635
    select * from mdl_context where instanceid=local_course_id  and contextlevel=50
  • course label – EDC3100_2016_S1
    One of the values defined when creating the course.
  • Update MoodleUsers::TRANSLATE_PARAMETERS
  • Update ActivityMapping::TRANSLATE_PARAMETERS
  • enrolid – 37
    select * from mdl_enrol where courseid=local_course_id and enrol='manual';

Create BIM activity in new course

Need to identify

  • bim id – 9

Enrol students in the course

Ahh, returning to Webfuse scripts, the sad, depleted remnants of my PhD.

~/webfuse/lib/BAM/3100/3100_support/participants/parse.pl is a script that will parse the Moodle participants web page, extract data about the enrolled users, and insert them appropriately into the database for my local Moodle course.
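A rough sketch of that sort of scraping, assuming HTML::TableExtract – the column headings are guesses at what the participants page uses, and the real parse.pl does more:

[code lang="perl"]
#!/usr/bin/perl
# Scrape name and email from a saved Moodle participants page.
use strict;
use warnings;
use HTML::TableExtract;

my $te = HTML::TableExtract->new(
    headers => [ 'First name', 'Email address' ] );
$te->parse_file( $ARGV[0] );

foreach my $table ( $te->tables ) {
    foreach my $row ( $table->rows ) {
        my ( $name, $email ) = @$row;
        print "$name | $email\n" if defined $name;
    }
}
[/code]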

Initial test: no-one showing up as a participant. But I did add myself as teacher.

  1. Figure out that the “show all participants” option is hidden down the very bottom of the page.
  2. Save the page to my laptop
  3. Edit the parse.pl script to update course details
  4. Test that it parses the HTML file (in case changes have been made by the institution or by the new version of Moodle) – looking good.
  5. The finding of old students appears to be working.
    Oh nice, easy way to identify repeating students.  Need to save that data.
  6. Run the script
  7. Fix the errors
    • Duplicate key inserting into groups
    • missing required parameter COURSE_ID 111
      Complaint from MoodleUsers class – need to update TRANSLATE_PARAMETERS above
    • Participants still not appearing, something missing – have to update the script. Done.

Took a while, but that should further automate the process for next time.

Add some extras

The above step only adds in some basic information about the student (USQ Moodle ID, email address). To be useful I need to be able to know the sector/specialisation of the student, their postal code, etc.

This information comes from a spreadsheet generated from the student records, with the data added into a “special” table in the Moodle database. This year I’m using a different method to obtain the spreadsheet, meaning that the format is slightly different. The new process was going to be automated to update each night, but that doesn’t appear to be working yet. But I have a version, so will start with that.

  1. Compare the new spreadsheet content
    Some new fields: transferred_units, acad_load. Missing phone number.
  2. Add columns to extras table.
  3. Update the parsing of the file

Seems to be working

Activity data

This is to identify what activities are actually on the study desk.

Another script that parses a Moodle web page to extract data. I’m currently re-writing some of the activities and wondered how that would work. Actually, I seem to have designed for it: the script does a replace of the list, not an update.

~/activities/parseActivity.pl

  1. Add in the course id for the new course
  2. Maybe update the script to handle the parameterised section titles

Seems to be working

Activity completion data

Now to find out which activities each student has completed. Another script, this time parsing a CSV file produced by Moodle.

~/activities/parseCompletion.pl

  1. Update the script with new course data
  2. Unable to find course id – update ActivityMapping.pm
  3. Having problems again with matching activity names
    1. EDC3100 Springfield resources
      it shouldn’t be there. Turn off activity completion and get new CSV file
    2. For “.”?
      First field is a “.” when it should be empty. May need to watch this.
  4. Parses okay – try checkStudents
    Getting a collection of missing students.

    1. Are they in the local database at all? – no
    2. Have they withdrawn, but still in activity completion – yes.
  5. Seems to have worked

Student blog data

Yet another scraping of a Moodle web page.   ~/BIM/parseBIM.pl

  1. Update the config
  2. Check the parsing of the file
    1. Only showing a single student – the last one in the list
      For some reason, the table rows are missing a class. Only the lastrow has a class. Given I wrote the BIM code, this might be me. The parsing code assumes no class means it’s the header row.  But seems to work.
  3. Check the conversion process
    1. Crashed and burned – my account has no Moodle id – hard code an exclusion for me
  4. Check insertion
  5. Do insertion
  6. Check BIM activity
  7. Check mirror for individual student – done
  8. Run them all – looks like there might be a proxy problem with the cron version.  Will have to do this at home – at least wait until it finishes.
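The row-parsing heuristic mentioned above, in miniature. This isn’t the parseBIM.pl code – just a sketch of the assumption, using HTML::TreeBuilder for illustration.

```perl
#!/usr/bin/perl
# The heuristic that produced the single-student symptom: a <tr> without a
# class attribute is treated as a header row and skipped, so any data row
# that loses its class disappears. A sketch only, not the parseBIM.pl code.
use strict;
use warnings;
use HTML::TreeBuilder;

my $html = do {
    local $/;
    open my $fh, '<', 'bim.html' or die "open: $!";
    <$fh>;
};

my $tree = HTML::TreeBuilder->new_from_content($html);
foreach my $tr ( $tree->look_down( _tag => 'tr' ) ) {
    next unless defined $tr->attr('class');    # no class => assume header row
    my @cells = map { $_->as_trimmed_text } $tr->look_down( _tag => 'td' );
    # ... process the student data in @cells ...
}
$tree->delete;    # free the parse tree
```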

Greasemonkey script

This is the user-interface end of the equation – what transforms all of the above into something useful.

/usr/local/www/mav

  • gmdocs/moreStudentDetails.user.js
    • Add the Moodle course id – line 331
  • phpdocs/api/getUserDetails.php
    • map the USQ and local Moodle ids
    • map USQ course id to BIM
    • add in the hard coded week data
    • Modify the module mapping (hard coded to the current course) — actually probably don’t need to do this.
  • Download the modified version of the greasemonkey client – http://localhost:8080/fred/mav/moreStudentDetails.user.js
  • Test it
    • Page is being updated with details link
    • Personal details being displayed
    • Activity completion not showing anything
      • Check server
        • Getting called – yes
        • Activity completion string is being produced
        • But the completion HTML is empty – problem in displayActivityStructure
        • That’s because the structure to display (from updateActivityStructure) is empty – which is actually from getActivityMapping
        • getActivityMapping
          • The course id had been entered incorrectly
    • Blog posts showing error message
      A type problem with the course id
  • Can I add in the extra bits of information – load, transferred courses
    • Client

Sentiment analysis

This is the new one: run the blog posts through the indico sentiment analysis API.

~/BIM/sentiment.pl

  • Update the BIM id
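The call itself is just an HTTP POST of the post text. The sketch below is from memory of indico’s HTTP API – the endpoint URL, payload shape, and response field are all assumptions that should be checked against the indico documentation rather than trusted.

```perl
#!/usr/bin/perl
# Sketch of sending a blog post's text to indico's sentiment analysis.
# Endpoint, payload shape, and response field are assumptions -- verify
# against the indico API documentation before relying on any of this.
use strict;
use warnings;
use LWP::UserAgent;
use JSON;

my $post_text = 'Text of a student blog post...';
my $ua        = LWP::UserAgent->new;

my $response = $ua->post(
    'https://apiv2.indico.io/sentiment',    # assumed endpoint
    'Content-Type' => 'application/json',
    Content        => encode_json(
        { api_key => $ENV{INDICO_API_KEY}, data => $post_text }
    ),
);
die $response->status_line unless $response->is_success;

# Assumed response shape: { "results": 0.87 } -- a 0..1 sentiment score
my $sentiment = decode_json( $response->decoded_content )->{results};
print "sentiment: $sentiment\n";
```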



PEBKAC, mental model mismatch and ICT

Semester has commenced. First lecture yesterday. Big plans to use Zoom to “broadcast” the lecture to online students and to make a recording that could be shared with those who didn’t want to/couldn’t listen to my dulcet, droning tones at 8am on a Tuesday morning.

Zoom performed as expected – easy to set up and get working. As expected, there was a small issue with participants not following the advice to mute their microphones. Hence my dulcet, droning tones and the nodding off of the face-to-face audience were occasionally interrupted by the sounds of the domestic life of the online audience. Thankfully Zoom gives the host of the session the capability to mute the mics of participants.

PEBKAC

The tendency for people to forget to mute their mic appears to be an example of PEBKAC, a reasonably well known term amongst computing people, especially those in technical support. PEBKAC is an acronym that expands out to

  • Problem
  • Exists
  • Between
  • Keyboard
  • And
  • Chair

i.e. the object that exists between keyboard and chair is the user. It’s user error.

It’s a term that expresses the bewilderment of technical people when the person with the problem has done something that clearly demonstrates a lack of basic understanding. At least in part, the term arises because technical support people see this type of problem all the time.

Why is it so common?

Poor mental models as a source of PEBKAC

This is a question to which the content of this week’s lecture offers an answer, based on a combination of the nature of digital technologies (ICT) and how people learn.

Ben-Ari and colleagues (Ben-Ari, 1999; Ben-Ari & Yeshno, 2006) suggest that the problem is that many people have superficial mental models of how the technology works. In the absence of a reasonable correspondence between their mental model of how the technology works and how it actually works, people are left to “aimless trial and error” when they attempt to use digital technologies. By definition, the reliance on trial and error means that errors will occur and PEBKAC will become evident.

In the Zoom lecture experience, the participants joining – perhaps many for the first time – don’t understand (they don’t have a mental model of) how Zoom works. They don’t understand that their mic is on by default, and that while it is on, any noise made where they are is shared with all the other participants in the Zoom session. Including the 40-odd people in the lecture theatre in Toowoomba.

So blame the user?

The opaque nature of digital technologies

Maybe we can blame the technology.

Koehler and Mishra (2009) have this to say

Digital technologies—such as computers, handheld devices, and software applications—by contrast, are protean (usable in many different ways; Papert, 1980); unstable (rapidly changing); and opaque (the inner workings are hidden from users; Turkle, 1995). On an academic level, it is easy to argue that a pencil and a software simulation are both technologies. The latter, however, is qualitatively different in that its functioning is more opaque to teachers and offers fundamentally less stability than more traditional technologies. By their very nature, newer digital technologies, which are protean, unstable, and opaque, present new challenges to teachers who are struggling to use more technology in their teaching. (p. 61)

Digital technologies are opaque. It’s not easy to get a handle on the models that underpin their design and implementation. It’s difficult for a student sitting at home in front of their computer to connect the sound of their neighbour’s lawn mower with it echoing around the R113 lecture theatre on the Toowoomba campus, courtesy of their unmuted mic in Zoom.

There is a picture of a mic on the Zoom interface. But you have to click on it to see the option to mute the mic. That requirement makes it difficult for a person using Zoom for the first time (especially if they are new to video-conferencing) to be aware of how to mute the mic, let alone the need for it.

People who have a mental model that more closely corresponds to how the technology works are better able to prepare for (avoid) or solve problems.

Before starting the lecture I thought this might be a problem (based on prior experience), so I explored the Zoom interface to see if it had a feature that would allow me (as meeting host) to mute the mics of other people. It did, and that’s what I used to address this problem.

Developing mental models through conceptual models

Ben-Ari and Yeshno (2006) found that if they presented people with a conceptual model of how an ICT works, those people were able to move beyond trial and error and solve problems conceptually.

At the moment, my mental model of Zoom suggests I’ll have to manually mute participants again next week. I’m sure that, even with all the recommendations, the opaque nature of Zoom and the limited mental models people hold of it will once again create the problem.

I wonder now if Zoom has a feature by which you can specify that participants’ mics are muted automatically as they join. This is an example of where my mental model of Zoom breaks down. Time to play with Zoom.

Hey presto, it does indeed. The image below shows the “Mute All” button, which includes the ability to mute all participants, including those yet to join. I’ve just learned something new.
[Image: Zoom’s mute/unmute all participants dialog]

Interestingly, however, for me this reinforces the opaque nature of digital technologies (or at least their user interfaces). The label “Mute All” suggested to me that it would mute all existing participants; I didn’t assume it would include participants yet to join. A minor, but useful, example.

Environment/context plays a part as well

But it’s not just the technology that is to blame.

As explained above, due to a combination of technical training and experience with video conferencing, I have a fairly good mental model of how video conferencing works. But even with that I still make mistakes.

About 3/4 of the way through the hour-long lecture yesterday I realised that I hadn’t hit the record button. This is a problem, as I’d planned to share the recording with those who couldn’t attend.

So if I had a good mental model of the technology, why did I make the mistake?

I blame the environment. This was my first time trying to use Zoom in a lecture theatre. Due to the theatre set-up I was using a Windows computer for the presentation (I’m normally a Mac user). I also had to set up my Mac as a secondary machine so I could observe the chat, and I had to worry about getting the lapel mic to work. Lastly, it was the first lecture of a new semester – a lecture that I’d only finished preparing 30 minutes before it started.

It was a novel environment and I was feeling rushed. So even though I knew the importance of hitting record, I didn’t hit record.

Improving my mental model

That mistake, and having to re-record the lecture this morning, means that I’m unlikely to make the same mistake again. Human beings learn best from making mistakes (especially public ones) and reflecting on them.

Broader implications

Some ad hoc ponderings and hypotheses.

The learning activities this week should be designed to require people to make mistakes and then build their conceptual models from there. Pre-packaged errors won’t be as beneficial as the mistakes people make themselves.

Will the conceptual models that have been provided of the technology and the course be useful enough to help people develop useful mental models?

Are all the problems staff have using the Moodle Assignment activity down to this combination of opaque technology and limited mental models? Could this be fixed by sharing accessible conceptual models with staff? How do you overcome the sheer complexity of the model underpinning the Moodle assignment activity?

What role does the teaching context play in these limited mental models?

Would improving the mental models of teaching staff address the perceived quality issues around University digital learning?

References

Ben-Ari, M. (1999). Bricolage Forever! In Eleventh Workshop on the Psychology of Programming Interest Group (pp. 53–57). Leeds, UK. Retrieved from http://www.ppig.org/papers/11th-benari.pdf

Ben-Ari, M., & Yeshno, T. (2006). Conceptual Models of Software Artifacts. Interacting with Computers, 18(6), 1336–1350. doi:10.1016/j.intcom.2006.03.005

Koehler, M., & Mishra, P. (2009). What is Technological Pedagogical Content Knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70. Retrieved from http://www.editlib.org/p/29544/

