For quite some time I’ve experienced and believed that the way universities implement digital learning has issues that contribute to perceived problems with the quality of that learning and its associated teaching. The following is an outline of an exploratory research project intended to confirm (or not) aspects of this belief.
The following is also thinking out loud and a work in progress. Criticisms and suggestions welcome. Fire away.
The topic of interest
Like most higher education institutions across the globe, Australian universities have made significant investments in corporate educational technologies (Holt et al., 2013). If there is to be any return on investment in information technology (IT), then it is essential that the technologies are utilised effectively (Burton-Jones & Hubona, 2006). Jasperson, Carter and Zmud (2005) suggest that the potential of most information systems is underutilised and that most “users apply a narrow band of features, operate at low levels of feature use, and rarely initiate extensions of available features” (p. 525).
While Jasperson et al. (2005) are talking broadly about information systems, it’s an observation that is supported by my experience and is likely to resonate with a lot of people involved in university digital/e-learning. It certainly seems to echo the quote from Prof Mark Brown that I’ve been (over) using recently about e-learning:
E-learning is a bit like teenage sex. Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly (Laxon, 2013)
Which raises the question, “Why?”.
Jasperson et al. (2005) suggest that without a rich understanding of what people are doing with these information systems at “a feature level of analysis (as well as the outcomes associated with those behaviours)” after the adoption of those systems, then “it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle” (p. 549). I’m not convinced that the capability of universities to manage the post-adoptive life cycle is as good as it could be.
My experience of digital learning within universities is that the focus is almost entirely on adoption of the technology. A lot of effort goes into deciding which system (e.g. the LMS) should be adopted. Once that decision is made, the system is implemented. The focus then shifts to ensuring people are able to use the adopted system appropriately through the provision of documentation, training, and support. The assumption is that the system is appropriate (after all, it wouldn’t have been adopted if it had any limitations) and that people just need the knowledge (or the compulsion) to use it.
There are only two main types of changes made to these systems. First are upgrades: the institution applies new versions of the adopted system so that it maintains currency. Second are strategic changes: senior management wants to achieve X, the system doesn’t do X, so the system is modified to do X.
It’s my suggestion that changes to specific features of a system (e.g. an LMS) that would benefit end users are either:
- simply not known about; or,
This is due to the organisation’s lack of any real ability to understand what people are experiencing and doing with the features of the system.
- starved of attention.
These are complex systems and changing them is expensive, so only strategic changes get made. Changes to fix features used by small subsets of people are never seen as passing the cost/benefit analysis.
I’m interested in developing a rich understanding of the post-adoptive behaviours and experiences of university teachers using digital learning technologies. I’m working on this because I want to identify what is being done with the features of these technologies and to understand what is working and what is not. The hope is that this will reveal something interesting about the ability of universities to manage digital technologies in ways that enable effective utilisation, and perhaps identify areas for improvement and further exploration.
Research Questions
From that, the following research questions arise.
- How do people make use of a particular feature of the LMS?
Seeking to measure what they actually do when using the LMS for real learning and teaching; not what they say they do, or what they intend to do.
- In their experience, what are the strengths and weaknesses of a particular feature?
Seeking to identify what they thought the system did to help them achieve their goal and what the system made harder.
Following on from Jasperson et al. (2005), the aim is to explore these questions at a feature level: not with the system as a whole, but with how people are using a specific feature of the system. For example, what is their experience of using the Moodle Assignment module, or the Moodle Book module?
Thinking about the method(s)
So how do you answer those two questions?
Question 1 – Use
The aim is to analyse how people are actually using the feature, not how they report their use. This suggests at least two methods:
- Usability studies; or,
People are asked to complete activities using a system within a controlled environment that captures their every move, including tracking the movement of their eyes. On the plus side, this captures very rich data. On the negative side, I don’t have access to a usability lab. There’s also the potential for this sort of testing to be removed from context. First, the test takes place in a lab, a different location from where the user typically works. Second, in order to allow between-user comparisons it can rely on “dummy” tasks (e.g. the same empty course site).
- Learning analytics.
Analysing data gathered by the LMS about how people are using the system. On the plus side, I can probably get access to this data and there are a range of tools and advice on how to analyse it. On the negative side, the richness of the data is reduced. In particular, the user can’t be queried to discover why they performed a particular task. A rough sketch of a first pass at this sort of analysis is below.
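By way of a very hedged illustration, the following sketch counts feature-level events from Moodle’s standard log to get a crude picture of which features are used and by how many people. It assumes a Moodle 2.7+ site using the standard log store and the default mdl_ table prefix; the connection details are placeholders, and any real analysis would depend on the institution’s database and logging configuration.

```python
# Minimal sketch only: a feature-level view of use, derived from Moodle's
# standard log store. Table names assume the default "mdl_" prefix and a
# Moodle 2.7+ site; the connection string is a placeholder.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://reader:secret@localhost/moodle")

# Which features (components) generate events, and how many distinct users
# touch each -- a crude starting point for "low levels of feature use".
feature_use = pd.read_sql(
    """
    SELECT component,
           COUNT(*)               AS events,
           COUNT(DISTINCT userid) AS users
    FROM mdl_logstore_standard_log
    GROUP BY component
    ORDER BY events DESC
    """,
    engine,
)

print(feature_use.head(20))
```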
Question 2 – Strengths and Weaknesses
This is where the user voice enters the picture. The aim here is to find out what worked for them and what didn’t in their experience.
There appear to be three main methods:
- Interviews;
On the plus side, rich data. On the negative side, “expensive” to implement and scale to largish numbers and a large geographic area.
- Surveys with largely open-ended questions; or,
On the plus side, cheaper, easier to scale to largish numbers and a large geographic area etc. On the negative side, more work on the part of the respondents (having to type their responses) and less ability to follow up on responses and potentially dig deeper.
- LMS/system community spaces.
An open source LMS like Moodle has openly available community spaces in which users and developers of the system interact. Some Moodle features have discussion forums where people using the feature can discuss their use.
The source code for Moodle, as well as plans and discussion about its development, live in systems that can also be analysed.
On the plus side, there is a fair bit of content in these spaces and there are established methods for analysing it. Is there a negative side? A rough sketch of how such a content analysis might begin is below.
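As a hedged sketch only, a first pass at content analysis of the Book’s community forum might start with simple term frequencies. Everything here is assumed: the posts are presumed to have already been collected into a CSV with a message column, and a real analysis would move well beyond word counts to proper coding of the posts.

```python
# First-pass content analysis sketch: term frequencies across forum posts.
# Assumes the relevant moodle.org forum posts have already been collected
# into "book_forum_posts.csv" with a "message" column (HTML stripped).
import csv
import re
from collections import Counter

STOPWORDS = {"the", "and", "for", "that", "this", "with", "have", "not",
             "are", "you", "but", "can", "its", "was", "has"}

counts = Counter()
with open("book_forum_posts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        words = re.findall(r"[a-z']+", row["message"].lower())
        counts.update(w for w in words if len(w) > 2 and w not in STOPWORDS)

# The 20 most frequent terms: a starting point for spotting recurring topics.
for word, n in counts.most_common(20):
    print(f"{word:15s} {n}")
```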
What’s currently planned
All of which translates into an initial project that will examine usage of the Moodle Book module (the Book). This particular feature was chosen because of this current project. If anything interesting comes of it, the plan is to repeat a similar process for the Moodle Assignment module.
Three sources of data will be analysed initially:
- The Moodle database at my current institution.
Analysed to explore if and how teaching staff are using (creating, maintaining etc.) the Book. What is the nature of the artefacts produced using the Book? How are learners interacting with those artefacts? (A rough sketch of a starting point for this analysis follows this list.)
- Responses from staff at my institution to a simple survey.
Aim being to explore relationships between the analytics and user responses.
- Responses from the broader Moodle user community to essentially the same survey.
Aim being to compare and contrast the broader Moodle user community’s experiences with the experiences of those within the institution.
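For the first data source, one hedged sketch of the “if and how” questions is to start with the Book’s own tables: how many Book resources exist per course, how many chapters they contain, and when they were last touched. The table and column names assume a standard Moodle schema with the default mdl_ prefix, and the connection details are placeholders.

```python
# Sketch of a first cut at "if and how staff are using the Book":
# books per course, chapters per book, and when they were last modified.
# Assumes the default "mdl_" prefix; the connection string is a placeholder.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://reader:secret@localhost/moodle")

books = pd.read_sql(
    """
    SELECT b.course,
           COUNT(DISTINCT b.id) AS books,
           COUNT(c.id)          AS chapters,
           MAX(b.timemodified)  AS last_modified
    FROM mdl_book b
    LEFT JOIN mdl_book_chapters c ON c.bookid = b.id
    GROUP BY b.course
    ORDER BY books DESC
    """,
    engine,
)

# Summary statistics give a feel for how widely (or narrowly) the Book is used.
print(books.describe())
```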
Specifics of analysis and survey
The analysis of the Book module will be exploratory. The aim is to develop analysis that is specific to the nature of the Book.
The aim of the survey is to generate textual descriptions of the users’ experience with the Book. Initial thought was given to using the Critical Incident Technique in a way similar to Islam (2014).
Currently the plan is to use a similar approach, but one more explicitly based on the Technology Acceptance Model (TAM). The idea is that the survey will consist of a minimal number of closed questions, mostly to provide demographic data. The main source of data from the survey will come from four open-ended questions (two aimed at perceived usefulness and two at perceived ease of use), currently worded as:
- Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module more useful in your teaching.
- Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module less useful in your teaching.
- Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module easier to use in your teaching.
- Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module harder to use in your teaching.
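Since the four questions pair off against TAM’s two constructs, one hedged sketch of an organising first step is to tag each response with the construct and valence implied by the question it answers, before any deeper qualitative coding. The file name and column names below are entirely hypothetical.

```python
# Tag each open-ended response with the TAM construct and valence implied
# by the question it answers, prior to deeper qualitative coding.
# The file name and column names are hypothetical.
import csv
from collections import Counter

QUESTION_MAP = {
    "more_useful":   ("perceived usefulness", "+"),
    "less_useful":   ("perceived usefulness", "-"),
    "easier_to_use": ("perceived ease of use", "+"),
    "harder_to_use": ("perceived ease of use", "-"),
}

tagged = []
with open("book_survey_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for column, (construct, valence) in QUESTION_MAP.items():
            text = (row.get(column) or "").strip()
            if text:
                tagged.append({"construct": construct, "valence": valence, "text": text})

# How the responses spread across the four construct/valence buckets.
print(Counter((t["construct"], t["valence"]) for t in tagged))
```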
Future extensions
The analysis of Moodle usage might be usefully supplemented with interviews with particular people to explore interesting patterns of usage.
It’s likely that the content analysis of the Moodle community discussion forum around the Book will also be completed. That’s dependent upon time and may need to wait.
The Moodle source code repository and the issue tracker may also be usefully analysed. However, the focus at the moment is more on the user’s experience, and the information within the repository and the tracker is likely to be a little too far removed from most users of the LMS.
It would be interesting to repeat the institutionally specific analytics and survey at other institutions to further explore the impact of specific institutional actions (and of broader contextual differences) on post-adoptive behaviour.
References
Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/j.im.2006.03.007
Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84
Islam, A. K. M. N. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010
Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A comprehensive conceptualization of post-adoptive behaviors associated with information technology enabled work systems. MIS Quarterly, 29(3), 525–557.
Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.