Assembling the heterogeneous elements for (digital) learning


Plans for an AJET paper around the indicators project

The following is an attempt to make concrete some ideas and tasks associated with writing a journal paper for the Australasian Journal of Educational Technology (AJET), based on the ASCILITE’09 paper arising from the work of the Indicators project. The paper will be co-authored by a group and the aim of this post is to start discussion about the content of the paper and the tasks we need to do to get it written.

The purpose/abstract of the paper

One argument is that as a group of authors we should have a common sense of the purpose of the paper. We should have an elevator pitch about the paper, a 30 second spiel that summarises what we’re trying to achieve/show. Any improvements?

The use of Learning Management Systems (LMS) is almost ubiquitous within universities. However, few institutions are actively using the data generated by student and staff use of these systems to guide future work. This paper seeks to highlight:

  • Some limitations of existing attempts to leverage LMS usage data.
  • Some knowledge about e-learning within universities that, because of these limitations, appears not to be as certain or clear as the existing literature or common sense might suggest.
  • How the work within this initial paper can be used to design further work that improves our understanding of what is happening with e-learning, why it is happening and how it can be improved.

This might become a useful tag line for the paper:

  • What is happening.
    Refers to the patterns we find in the usage data.
  • Why is it happening.
    Refers to additional research with different methods and theories that would be required to suggest and test reasons why these patterns might exist.
  • How to change practice.
    Refers to further research and insight that seeks to combine the what and why information with other theory to improve learning/teaching practice.

Ideas for titles

The title of a paper is, from one perspective, a summary of the elevator pitch. It should attract the reader and inform them what they might find out. What follows are some initial ideas.

  • The indicators project: Improving the what, why and how of e-learning.
  • Student LMS activity, grades, and external factors: Identifying the need for cross-platform, cross-institutional and longitudinal analysis of LMS usage.
    This title is based on a narrowing down of the “what” to be shown in the paper to just the linkage between LMS activity, grades and external factors. i.e. exclude the feature adoption stuff.

The structure of the paper

At this stage, the structure of the paper for me is heavily based around the three aims of the paper outlined in the purpose/abstract section above. The basic structure would be:

  • General introduction.
    Explain the background/setting of the paper: the widespread use of the LMS as the implementation of e-learning, the start of people doing academic analytics, the identification of some known patterns, the importance of knowing what is going on and how to improve it, and the diversity of universities and how one size might not fit all (i.e. different universities might have different experiences).
  • Limitations of existing work.
    Seek to provide some sort of framework to understand the work that has gone before in terms of academic analytics and/or LMS usage log analysis.
  • Different perspectives on what is happening.
    Examine and explain how we’ve found patterns which seem to differ or provide alternative insights to what has already been found. The idea here is to establish at least 2 groupings of patterns that illustrate some differences between what we have found and what has been reported in the literature. Each of the groupings could have multiple patterns/findings but there would be some commonality. More on this below.
  • Further work.
    Argue that these differing findings suggest that there is value in further work that:
    • addresses the limitations identified in the 2nd section and,
      i.e. cross platform, cross-institutional and longitudinal.
    • also expands upon the findings found in the 3rd section.
      Moves from just examining the “what” into the “why” and “how”.
  • Conclusions.

The work to be done

Now time to identify the work that needs to be done.

Limitations of existing work

The basic aim of this section is to expand and formalise/abstract the knowledge/opinion about the existing literature around LMS usage analysis expressed in the ASCILITE paper.

The draft ASCILITE paper – prior to compaction due to space limitations – was working on a framework for understanding the literature based on the following dimensions:

  • # of institutions;
  • # of LMS;
  • time period;
  • method.

Work to do:

  • Ensure that we have covered/gathered as much of the relevant literature as possible.
  • Examine that literature to see how it fits within the framework.
  • Identify from the literature any additional dimensions that might be useful.
  • Identify any findings that support or contradict the findings we want to introduce in the next section.

Different perspectives on what is happening

This is the section in which we draw on the data from CQU to identify interesting and/or different patterns from what is found in the established literature. The biggest question I have about this section is, “What patterns/groupings do we use?”. The main alternatives I’m aware of are:

  1. Exactly what we did in the ASCILITE paper.
  2. The slight modification we used in the ASCILITE presentation.
  3. Drop the feature adoption stuff entirely and focus solely on the correlation between student activity, grade and external factors. Perhaps with the addition of some analysis from Webfuse courses.

Whichever way we go, we’ll need to:

  1. Identify and define the patterns we’re using
    e.g. The correlation between level of participation, the grade achieved and some external factors.
  2. Identify literature/references summarising what is currently known about the pattern.
    e.g. The correlation that suggests the greater the level of participation in a LMS, the better the grade.
  3. Identify ways in which the pattern can be measured (see below).
  4. Use the measure to examine the data at CQU.
    e.g. many of the graphs in the ASCILITE paper/presentation.
  5. Look for any differences between expected and what we see.
    e.g. LMS usage by HD students is less than others for AIC students and Super low courses
  6. Establish that the differences are statistically significant.
  7. Perhaps generate some initial suggestions why this might be the case.

The patterns

The patterns we’ve been using so far seem to fit into one of two categories. Each of these categories has a definition of the pattern and how we’re actually measuring it. This is also an area of difference, i.e. there could be different ways of measuring.

The patterns we’ve used so far:

  1. % of courses that have adopted different LMS features.
    This is the comparison of feature usage between Blackboard and Webfuse during the period of interest. It shows that different systems and different assumptions do modify outcomes.

    • Current measurement – Malikowski et al
    • Alternative measurement – Rankine et al (2009)
  2. The link between LMS activity, student grades and various external factors.
    We’re currently measuring this by
    • # of hits/visits on course site and discussion forum.
    • # of posts and replies to discussion forum.

    The external factors we’ve used in papers and presentations are:

    • mode of delivery: flex, AIC, CQ.
    • Level of staff participation.
    • Different staff academic background.
    • Impact of input from instructional designer.
    • Age for FLEX students.

    I think there is a range of alternative measures we could use; we need to think more about these (a rough sketch of one possible measurement follows this list).
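
To make the second pattern a little more concrete, here is a minimal sketch of the kind of summary that might sit behind the activity/grades graphs. It assumes the usage and grade data have already been extracted into a hypothetical CSV called activity.csv with columns course, student, mode, grade and hits; the file name, the columns and the measure itself are illustrative assumptions rather than the agreed approach.

# Sketch only: average hits per grade, broken down by mode of delivery.
# Assumes a hypothetical activity.csv with header course,student,mode,grade,hits
awk -F, 'NR > 1 { sum[$3 "," $4] += $5; n[$3 "," $4]++ }
         END { for (k in sum) printf "%s,%.1f\n", k, sum[k] / n[k] }' activity.csv | sort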

We need to come to some consensus about the patterns we should use.

Statistical analysis

In the meantime, however, I think that we will end up using the activity/grades correlation at least for:

  • Mode of delivery.
  • Level of staff participation.
  • Age for FLEX students.

I would suggest that having the statistical analysis done and written up for these three would be a good first step. At least while we talk about the other stuff.

Further work

In terms of the further work section of the paper I think we need to:

  • Summarise the recommendations from the literature.
  • Identify where we might disagree and why.
  • Identify what we think should be done.

In terms of further work, my suggestions would be:

  • What
    • Testing the existing patterns in cross-platform, cross-institutional and longitudinal ways.
    • Establishing and testing alternate and additional patterns. e.g. the SPAN work
    • Establishing and testing alternate measurements for these patterns
    • Testing and developing alternate methods to generate these patterns and enable different institutions to use them.
    • Identifying theories that would suggest other patterns which might be useful.
    • How to make these patterns available to front-line teaching staff and academics.
  • Why
    • Lots of work seeking to explain the differences in patterns found.
    • Identifying theories that help explain the patterns.
  • How
    • How to make these patterns available to staff and students in a way that encourages and enables improvement.
    • How to encourage management not to use these patterns as a stick.

Work to do

What follows is a list of tasks, by no means complete:

  • Everyone
    • Read and comment on the elevator pitch. Does it sound right? Can it be made better? Is there a different approach?
    • Does the proposed structure work? Should there be something more?
    • Suggestions for a title.
    • Thoughts about what patterns we use in the paper.
  • Stats guy
    • Complete the analysis of the “mode of delivery”, “level of staff participation” and “age for FLEX students” patterns.
    • Be able to help us write a blog post that summarises/explains the analysis for each pattern in a way that would be suitable for the journal paper.
  • Non-stats guys
    • Combine existing literature around LMS usage analysis into a single bibliography, actively try and fill the holes.
    • Analyse the literature to help develop the framework for understanding and comparing the different approaches.
    • Pull out any interesting patterns, measures or findings that either support or contradict what we’ve found.
    • Writing
      • Introduction
      • Limitations of existing work
      • Future work.

Call for participation: Getting the real stories of LMS evaluations?

The following is a call for participation from folk interested in writing a paper or two that will tell some real stories arising from LMS evaluations.

Alternatively, if you are aware of some existing research or publications along these lines, please let me know.

LMSs and their evaluation

I think it’s safe to say that the idea of a Learning Management System (LMS) – aka Course Management System (CMS), Virtual Learning Environment (VLE) – is now just about the universal solution to e-learning for institutions of higher education. A couple of quotes to support that proposition

The almost universal approach to the adoption of e-learning at universities has been the implementation of Learning Management Systems (LMS) such as Blackboard, WebCT, Moodle or Sakai (Jones and Muldoon 2007).

LMS have become perhaps the most widely used educational technologies within universities, behind only the Internet and common office software (West, Waddoups et al. 2006).

Harrington, Gordon et al (2004) suggest that higher education has seen no other innovation result in such rapid and widespread use as the LMS. Almost every university is planning to make use of an LMS (Salmon, 2005).

The speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006).

Even more surprising is the almost universal adoption of just two commercial LMSes, both now owned by the same company, by Australia’s 39 universities, a sector which has traditionally aimed for diversity and innovation (Coates, James, & Baldwin, 2005).

Oblinger and Kidwell (2000) comment that the movement by universities to online learning was to some extent based on an almost herd-like mentality.

I also believe that increasingly most universities are going to be on their 2nd or perhaps 3rd LMS. My current institution could be said to be on its 3rd enterprise LMS. Each time there is a need for a change, the organisation has to do an evaluation of the available LMS and select one. This is not a simple task. So it’s not surprising to see a growing collection of LMS evaluations and associated literature being made available and shared. Last month, Mark Smithers and the readers of his blog did a good job of collecting links to many of these openly available evaluations through a blog post and comments.

LMS evaluations, rationality and objectivity

The assumption is that LMS evaluations are performed in a rational and objective way; that the organisation demonstrates its rationality by objectively evaluating each available LMS and making an informed decision about which is most appropriate for it.

In the last 10 years I’ve been able to observe, participate in and hear stories about numerous LMS evaluations from a diverse collection of institutions. When no-one is listening, many of those stories turn to the unspoken limitations of such evaluations: the inherent biases of participants, the cognitive limitations and the outright manipulations involved. These are stories that rarely, if ever, see the light of day in research publications. In addition, there is a lot of literature from various fields suggesting that such selection processes are often not all that rational. A colleague of mine did his PhD thesis (Jamieson, 2007) looking at these sorts of issues.

Generally, at least in my experience, when the story of an institutional LMS evaluation process is told, it is told by the people who ran the evaluation (e.g. Sturgess and Nouwens, 2004). There is nothing inherently wrong with such folk writing papers. The knowledge embodied in their papers is, generally, worthwhile. My worry is that if these are the only folk writing papers, then there will be a growing hole in the knowledge about such evaluations within the literature. The set of perspectives and stories being told about LMS evaluations will not be complete.

The proposal

For years, some colleagues and I have regularly told ourselves that we should write some papers about the real stories behind various LMS evaluations. However, we could never do it because most of our stories only came from a small set (often n=1) of institutions. The stories and the people involved could be identified simply by association. Such identification may not always be beneficial to the long-term career aspirations of the authors. There are also various problems that arise from a small sample size.

Are you interested in helping solve these problems and contribute to the knowledge about LMS evaluations (and perhaps long term use)?

How might it work?

There are any number of approaches I can think of; which one works best might depend on who (if anyone) responds to this. If there’s interest, we can figure it out from there.

References

Coates, H., R. James, et al. (2005). “A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning.” Tertiary Education and Management 11(1): 19-36.

Harrington, C., S. Gordon, et al. (2004). “Course Management System Utilization and Implications for Practice: A National Survey of Department Chairpersons.” Online Journal of Distance Learning Administration 7(4).

Jamieson, B. (2007). Information systems decision making: factors affecting decision makers and outcomes. Faculty of Business and Informatics. Rockhampton, Central Queensland University. PhD.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.

Oblinger, D. and J. Kidwell (2000). “Distance learning: Are we being realistic?” EDUCAUSE Review 35(3): 30-39.

Salmon, G. (2005). “Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions.” ALT-J, Research in Learning Technology 13(3): 201-218.

Sturgess, P. and F. Nouwens (2004). “Evaluation of online learning management systems.” Turkish Online Journal of Distance Education 5(3).

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

How do you develop a cross-LMS usage comparison?

I recently posted about the need to develop an approach that allows for the simple and consistent comparison of usage and feature adoption between different Learning Management Systems (aka LMS, Virtual Learning Environments – VLEs – see What is an LMS?). That last post on the need didn’t really establish the need. The aim of this post is to explain the need and make some first steps in identifying how you might go about enabling this sort of comparison.

The main aim is to get my colleagues in this project thinking and writing about what they think we should do and how we might do it.

What are you talking about?

Just to be clear, what I’m trying to get at is a simple method by which University X can compare how its staff and students are using its LMS with usage at University Y. The LMS at University Y might be different to that at University X. It might be the same.

They might find out that more students use discussion forums at University X. More courses at University Y might use quizzes. They could compare the number of times students visit course sites, or whether there is a correlation between contributions to a discussion forum and final grade.

Why?

The main reason is so that the university, its management, staff, students and stakeholders have some idea about how the system is being used. Especially in comparison with other universities or LMSes. This information could be used to guide decision making, identify areas for further investigation, as input into professional development programs or curriculum design projects, comparison and selection processes for a new LMS, and many other decisions.

There is a research project coming out of Portugal that has some additional questions that are somewhat related.

The more immediate problem is that there currently appears to be no simple, effective method for comparing LMS usage between systems and institutions. The different assumptions, terms and models used by systems and institutions get in the way of appropriate comparisons.

How might it work?

At the moment, I am thinking that you need the following:

  • a model;
    A cross-platform representation of the data required to do the comparison. In the last post the model by Malikowski et al (2007) was mentioned. It’s a good start, but doesn’t cover everything.

    As a first crack the model might include the following sets of information:

    • LMS usage data;
      Information about the visits, downloads, posts, replies, quiz attempts etc. This would have to be identified by tool because what you do with a file is different from a discussion forum, from a quiz etc.
    • course site data;
      For each course, how many files, is there a discussion forum, what discipline is the course, who are the staff, how many students etc.
    • student characteristics data;
      How were they studying, distance education, on-campus. How old were they?
  • a format;
    The model has to be in an electronic format that can be manipulated by software. The format would have to enable all the comparisons and analysis desired but maintain anonymity of the individuals and the courses.
  • conversion scripts; and
    i.e. an automated way to take institutional and LMS data and put it into the format. Conversion scripts are likely to be based around the LMS and perhaps the student records system, e.g. a Moodle conversion script could be used by all the institutions using Moodle.
  • comparison/analysis scripts/code.
    Whatever code/systems are required to take the information in the format and generate reports etc. that help inform decision making.

Format

I can hear some IT folk crying out for a data warehouse to be used as the format. The trouble is that there are different data warehouses and not all institutions would have them. I believe you’d want to initially aim for a lowest common denominator, have the data in that and then allow further customisation if desired.

When it comes to the storage, manipulation and retrieval of this sort of data, I’m assuming that a relational database is the most appropriate lowest common denominator. This suggests that the initial “format” would be an SQL schema.
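
To make that a little more concrete, the following is a minimal sketch of what such a schema might look like, assuming sqlite3 is available as the lowest common denominator database. All table and column names are illustrative assumptions rather than a settled design.

# Sketch of a cross-LMS "format" as an SQLite schema; names are illustrative only.
sqlite3 lms_usage.db <<'SQL'
CREATE TABLE course (
  course_id   TEXT PRIMARY KEY,  -- anonymised course offering identifier
  lms         TEXT,              -- e.g. Blackboard, Webfuse, Moodle
  institution TEXT,              -- anonymised institution identifier
  period      TEXT,              -- term/semester of the offering
  discipline  TEXT
);
CREATE TABLE student (
  student_id  TEXT PRIMARY KEY,  -- anonymised student identifier
  mode        TEXT,              -- e.g. flex, AIC, on-campus
  age_band    TEXT
);
CREATE TABLE usage_event (
  course_id   TEXT REFERENCES course(course_id),
  student_id  TEXT REFERENCES student(student_id),
  tool        TEXT,              -- e.g. forum, file, quiz
  event       TEXT,              -- e.g. visit, post, reply, attempt
  occurred    TEXT               -- timestamp
);
SQL

Keeping only anonymised identifiers in the tables is one way of meeting the anonymity requirement mentioned above while still allowing comparison by LMS, institution, course and mode.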

How would you do it?

There are two basic approaches to developing something like this:

  • big up front design; or
    Spend years analysing everything you might want to include, spend more time designing the perfect system and finally get it ready for use. Commonly used in most information technology projects and I personally think it’s only appropriate for a very small subset of projects.
  • agile/emergent development.
    Identify the smallest bit of meaningful work you can do. Do that in a way that is flexible and easy to change. Get people using it. Learn from both doing it and using it to inform the next iteration.

In our case, we’ve already done some work with two different systems for two different needs. I think discussion forums are shaping up as the next space we both need to look at, again for different reasons. So, my suggestion would be to focus on discussion forums and try the following process:

  • literature review;
    Gather the literature and systems that have been written analysing discussion forums. Both L&T and external. Establish what data they require to perform their analysis.
  • systems analysis;
    Look at the various discussion forum systems we have access to and identify what data they store.
  • synthesize;
    Combine all the requirements from the first two steps into some meaningful collection.
  • peer review;
    If possible get people who know something to look at it.
  • design a database;
    Take the “model” and turn it into a “format”.
  • populate the database;
    Write some conversion scripts that will take data from the existing LMSes we’re examining and populate the database (a rough sketch follows this list).
  • do some analysis;
    Draw on the literature review to identify the types of analysis/comparison that would be meaningful. Write scripts to perform that role.
  • reflect on what worked and repeat;
    Tweak the above on the basis of what we’ve learned.
  • publish;
    Get what we’ve done out in the literature/blogosphere for further comment and criticism.
  • attempt to gather partners.
    While we can compare two or three different LMS within the one institution, the next obvious step would be to work with some other institutions and see what insights they can share.
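
As a rough sketch of the “populate the database” and “do some analysis” steps for discussion forums, the following assumes the SQLite schema sketched earlier and a hypothetical forum_posts.csv already extracted from one LMS by a platform-specific conversion script. The file name, its columns and the query are all illustrative assumptions, not a working converter for any real system.

# Sketch only: load hypothetical forum data and run a first comparison query.
# forum_posts.csv is assumed to have the header course_id,student_id,event,occurred;
# .import creates forum_staging from that header row since the table does not already exist.
sqlite3 lms_usage.db <<'SQL'
.mode csv
.import forum_posts.csv forum_staging
INSERT INTO usage_event (course_id, student_id, tool, event, occurred)
  SELECT course_id, student_id, 'forum', event, occurred FROM forum_staging;
-- A first comparison: forum events per course offering, grouped by LMS
SELECT c.lms, u.course_id, COUNT(*) AS forum_events
FROM usage_event u JOIN course c ON c.course_id = u.course_id
WHERE u.tool = 'forum'
GROUP BY c.lms, u.course_id
ORDER BY forum_events DESC;
SQL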

The knowledge and experience gained from this for “discussion forums” could then be used to move onto other aspects.

What next?

We probably need to look at the following:

  • See if we can generate some outside interest.
  • Tweak the above ideas to get something usable.
  • Gather and share a bibliography of papers/work around analysing discussion forum participation.
  • Examine the discussion forum data/schema for Blackboard 6.3 and Webfuse.

That’s probably enough to be getting on about.

References

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

Identifying file distribution on Webfuse course sites

As part of the thesis I’ve been engaging with some of the literature around LMS feature usage to evaluate the usage of Webfuse. A good first stab at this was reported in an earlier post. There were a number of limitations to that work; it’s time to expand on it a bit. To some extent for the PhD and to some extent because of a paper.

As with some of the other posts this one is essentially a journal or a log of what I’m doing and why I’m doing it. A permanent record of my thinking so I can come back later, if needed.

There’s even an unexpected connection with power law distributions towards the end.

Content distribution

In that previous post I did not include a graph/any figures around the use of Webfuse course sites to distribute content or files. This is because Webfuse had a concept of a default course site. i.e. every course would have the same basic default site created automatically. Since about 2001 this meant that every course site performed some aspect of information distribution including: the course synopsis on the home page, details about the course assessment, details about course resources including textbook details and a link to the course profile, and details about the teaching staff.

Beyond this, staff were able to upload files and other content as they desired, i.e. moving beyond the default course site was optional and left entirely up to the teaching staff. Some of us perhaps went overboard. Other staff may have been more minimal. The aim here is to develop metrics that illustrate that variability.

Malikowski et al (2007) have a category of LMS usage called Transmitting Content. The LMS features they include in this category include:

  • Files uploaded into the LMS.
  • Announcements posted to the course site.

So, in keeping with the idea of building on existing literature, I’ll aim to generate data around those figures. Translating those into Webfuse should be fairly straightforward; my thinking includes:

  • Files uploaded into the LMS.
    Malikowski et al (2007) include both HTML files and other file types. For Webfuse and its default course sites I believe I’ll need to treat these a little differently:
    • HTML files.
      The default course sites produce HTML. I’ll need to exclude these standard HTML files.
    • Other files.
      Should be able to simply count them.
    • Real course sites.
      Webfuse also had the idea of a real course site. i.e. an empty directory into which the course coordinator could upload their own course website. This was usually used by academics teaching multimedia, but also some others, who knew what they wanted to do and didn’t like the limitations of Webfuse.
  • Announcements.
    The default course site has an RSS-based announcements facility. However, some of the announcements are made by “management”, i.e. not the academics teaching the course but the middle managers responsible for a group of courses. These announcements are more administrative and apply to all students (so they get repeated in every course). In some courses they may be the only updates. These announcements are usually posted by the “webmaster”, so I’ll need to exclude those.

Implementation

I’ll treat each of these as somewhat separate.

  • Calculate # non-HTML files.
  • Calculate # of announcements – both webmaster and not.
  • Calculate # HTML files beyond default course site (I’ll postpone doing this one until later)

Calculate # non-HTML files.

Webfuse created/managed websites, so all of the files uploaded by staff exist within a traditional file system, not in a database. With a bit of UNIX command line magic it’s easy to extract the name of every file within a course site and remove those that aren’t of interest. The resulting list of files is the main data source that can then be manipulated.

The command to generate the main data source goes like this

find T1 T2 T3 -type f |                   # get all the files for the given terms
  grep -v '\.htm$' | grep -v '\.html$' |  # remove the HTML files
  grep -v 'CONTENT$' |                    # remove the Webfuse data files
  grep -v '\.htaccess' |                  # remove the Apache access restriction file
  grep -v 'updates\.rss$' |               # remove the RSS file used for announcements
  grep -v '\.ctb$' | grep -v '\.ttl$' | grep -v '/Boards/[^/]*$' | grep -v '/Members/[^/]*$' | grep -v '/Messages/[^/]*$' | grep -v '/Variables/[^/]*$' | grep -v 'Settings\.pl' |  # remove files created by the discussion forum
  sed -e '1,$s/\.gz$//'                   # strip the gzip extension from archived course sites

The sed command at the end removes the gzip extension that has been placed on all the files in old course sites that have been archived – compressed.

The output of this command is the following

T1/COIT11133/Assessment/Assignment_2/small2.exe
T1/COIT11133/Assessment/Weekly_Tests/Results/QuizResults.xls
T1/COIT11133/Resources/ass2.exe

The next aim is to generate a file that contains the number of files for each course offering. From there the number of courses with 0 files can be identified, as can some other information. The command to do this is

sed -e '1,$s/^\(T.\/.........\/\).*$/\1/' all.Course.Files | sort | uniq -c | sort -r -n > count.Course.Files   # the nine dots match the nine-character course code

After deleting a few entries for backup or temp directories, we have our list. Time to manipulate the data, turn it into a CSV file and import it into Excel. The graph below shows a fairly significant disparity in the number of files – the type of curve looks very familiar though.

Number of uploaded files per Webfuse course site for 2005

In total, for 2005 there were 178 course sites that had files. That’s out of 299 – so 59.5%. This compares to the 50% that Col found for the Blackboard course sites in the same year.

Calculate # of Announcements

The UNIX command line alone will not solve this problem. Actually, think again, it might. What I have to do is:

  • For each updates.rss
    • count the number of posts by webmaster
    • count the number of posts by non-webmaster
    • output – courseOffering,#webmaster,#non-webmaster

Yep, a simple shell script will do it

echo COURSE,ALL,webmaster
for name in `find T1 -name updates.rss`
do
  # total announcements: each entry in the RSS file is assumed to be an <item> element
  all=`grep '<item>' $name | wc -l`
  # announcements posted by the webmaster rather than the teaching staff
  webmaster=`grep 'webmaster' $name | wc -l`
  echo "$name,$all,$webmaster"
done

Let’s have a look at the 2005 data. Remove some dummy data, remove extra whitespace. 100% of the courses had updates. 166 (55%) had no updates from the teaching staff, 133 (45%) did. That compares to 77% in Blackboard. Wonder if the Blackboard updates also included “webmaster” type updates?

In terms of the number of announcements contributed by the teaching staff, the following graph shows the distribution. The largest number for a single offering was 34. Based on a 12-week CQU teaching term, that’s almost, on average, 3 announcements a week.

Number of coordinator announcements - Webfuse 2005

Power laws and LMS usage?

The two graphs above look very much like a power law distribution. Clay Shirky has been writing and talking about power law distributions for some time. Given that there appears to be a power law distribution going on here with usage of these two LMS features, and potentially that the same power law distribution might exist with other LMS features, what can Shirky and other theoretical writings around power law distributions tell us about LMS usage?

References

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

External factors associated with CMS adoption

This post follows on from a previous post and continues an examination of some papers written by Malikowski and colleagues examining the adoption of features of an LMS/VLE/CMS. This one focuses on the 2006 paper.

External factors associated with CMS adoption

The abstract of this paper (Malikowski, Thompson and Theis, 2006) is

Course management systems (CMSs) have become a common resource for resident courses at colleges and universities. Researchers have analyzed which CMS features faculty members use most primarily by asking them which features are used. The study described builds on previous research by counting the number of CMS features a faculty member used and by analyzing how three external factors are related to the use of CMS features. The external factors are (a) the college in which a course was offered, (b) class size, and (c) the level of a class—such as 100 or 200. The only external factor showing a statistically significant relationship to the use of CMS features was the college in which a course was offered. Another finding was that CMSs are primarily used to transmit information to students. Implications are described for using external factors to increase effective use of more complex CMS features.

Implication: repeat this analysis with the Webfuse and Blackboard courses at CQU. We can do this automatically for a range of external factors beyond those.

Point of the research

Echoing what was said in the 2008 paper, and one reason I am interested in this work:

Faculty members often receive help in using a CMS (Arabasz, Pirani, & Fawcett, 2003; Grant, 2004). This help typically comes from professionals who focus on instructional design or technology. Information about which features are used most could provide these professionals with a starting point for promoting effective use of more complex CMS features. Information about how external factors influence use could identify situations in which more complex features can be successfully promoted.

Prior research

Points of difference between this research and prior work are listed as:

  • Few studies have focused on the use of the CMS in resident courses.
  • Generated from surveys.
  • Morgan’s suggestion that faculty use more features over time may only be partially correct.
  • Only one study used a statistical analysis.
  • Previous studies analyse usage for all staff or for a broad array of staff – focusing on a few factors might be a contribution.
  • Lastly, add to research by including examination of how people learn.

    Currently, research into CMS use has considered CMS features, opinions from teachers about these features, and student satisfaction with CMS features. Gagné, Briggs, and Wager summarize the importance of considering both learning psychology and technology, which they refer to as “media.” They emphasize “the primacy of selecting media based on their effectiveness in supporting the learning process. Every neglect of the consideration of how learning takes place may be expected to result in a weaker procedure and poorer learning results.” (Gagné, Briggs, & Wager, 1992, p. 221). For decades, researchers have studied how teaching methods affect learning outcomes. Several recent publications describe seminal research findings, research that has built on these findings, and learning theories that have emerged from this research (Driscoll, 2005; Gagné et al., 1992; Jonassen & Association for Educational Communications and Technology, 2004; Reigeluth, 1999).

    That is as may be. But given my suspicion that most academics don’t really make rational judgements about how they teach based on the educational literature – would such an analysis be misleading and pointless?

They argue that the model from Malikowski, Thompson and Theis (2007) is what they use here and that it combines both features and theory and can be used to synthesise research.

Methodology

Interestingly, they have a spiel about causation and relationship

An important point to clarify is that the method applied in this study was not intended to determine if external factors caused the use of CMS features. Identifying causation is an important but particularly challenging research goal (Fraenkel & Wallen, 1990). Instead, the current method and study only sought to determine if significant relationships existed between external factors and the adoption of specific CMS features.

Looks like basically the same methodology and perhaps same data set as the 2008 paper. They do note some problems with the manual checking of course sites

This analysis was a labor intensive process of viewing a D2L Web site for a particular course and completing a copy of the data collection form, by counting how often features in D2L were used. In some cases, members of the research team counted thousands of items.

Results

The definition of adoption used is different than that in the 2008 paper

In this study, a faculty member was considered to have adopted a feature if at least one instance of the feature was present. For example, if a faculty member had one quiz question for students, that faculty member was considered as having adopted the quiz feature.

Only 3 of the 13 LMS features available were used by more than half the faculty – grade book, news/announcements, content files. These were also the only 3 features where the percentage of adoption was greater than the standard deviation.

Implication: A comparison of Webfuse usage using different definitions of adoption could be interesting as part of a way to explore what would make sense as a definition of adoption.

In some cases STDDEV was twice as large as the percentage of faculty members using a feature.

They include the following pie chart that is meant to use the model from Malikowski et al (2007). But I can’t, for the life of me, figure out how they get to it.

Categories of CMS Features

They found that the college (discipline) was the only external factor that was a significant predictor of feature usage.

Discussion

Raises the question of norms and traditions within disciplines driving CMS feature adoption. I’m amazed more isn’t made of these being residential courses. This might play a role.

Implication: It might be argued that norms and tradition are more than just discipline based. I would argue that at CQU, when it comes to online learning that there were three main traditions based on faculty structures from the late 1990s through to early noughties:

  1. Business and Law – some courses with AIC students, very different approach to distance education and also online learning. Had a very strong faculty-based set of support folk around L&T and IT.
  2. Infocom – similar to Business and Law in terms of AIC courses and distance education, but infected by Webfuse and, similar to BusLaw, with a strong faculty-based set of support folk around L&T and IT.
  3. Others – essentially education, science, engineering and health. Next to no AIC students. Some had no distance education. No strong set of faculty-based support folk around IT and L&T, though education did have some.

Would be interesting to follow/investigate these norms and traditions and how that translated to e-learning. Especially since the faculty restructure around 2004/2005 meant there was a mixing of the cultures. BusLaw and large parts of Infocom merged. Parts of Infocom merged with education and arts….

Limitations

Study involved 81 faculty members as opposed to 862, 730, 192 and 191 in other studies. Argument is that those other studies used surveys, not the more resource intensive approach used by this work.

They recognise the problem with change:

The current study analyzed CMS Web sites when they were on a live server. The limitation in this case is that a faculty member can change a Web site while it is being analyzed. Fortunately, the university at which this study occurred has faculty members create a different CMS Web site each time a course is offered.

References

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

Automating calculation of LMS/CMS/VLE feature usage – a project?

I’m in the midst of looking at the work of Malikowski et al in evaluating the usage of VLE features. The aim of that work is to provide information that can help those who help academics use VLEs. The following is an idea to address some of the problems with that work and arrive at something that might be useful for cross-institutional comparisons.

Given the widespread adoption of the LMS/VLE, I’d be kind of surprised if someone hasn’t given some thought to what I’ve suggested, but I haven’t heard anything.

Do you know of a project that covers some of this?

Interested in engaging in something like this?

Their contribution

An important contribution they’ve made is to provide a useful framework for comparing feature usage between different systems and to summarise the basic level of usage across the different parts of the framework. The framework is shown in the following image.

Malikowski Flow Chart

Limitations

However, there remain two important questions/problems:

  1. How do you generate the statistics to fill in the framework?
    Malikowski et al suggest that prior studies relied primarily on asking academics what they did with the LMS. They then point out that this approach is somewhat less than reliable. They adopt a better approach by visiting each course site and manually counting feature usage.

    This is not much of an improvement because of the workload involved, but also because of the possibility of errors due to missed usage. For example, the role in the LMS of the user visiting each course site may not allow them to see everything. Alternatively, when they visit the site may change what they see, e.g. an academic might delete a particular function before term ends.

  2. What does it mean to adopt a feature?
    In Malikowski (2008) adoption is defined as using a feature at or above the 25th percentile. This, I believe, is open to some problems as well.

Implications

Those limitations mean that, even with their framework, it is unlikely that a lot of organisations are going to engage in this sort of evaluation. It’s too difficult. This means less data can be compared between institutions and systems. This in turn limits reflection and knowledge.

Given the amount of money being spent on the LMS within higher education, it seems there is a need to address this problem.

One approach

The aims of the following suggestion are:

  • Automate the calculation of feature usage of LMS.
  • Enable comparison across different LMS.
  • Perhaps, include some external data.

One approach to this might be to use the model/framework from Malikowski et al as the basis for the design of a set of database tables that are LMS independent.

Then, as need arises, write a series of filters/wrappers that retrieve data from a specific LMS and insert it into the “independent” database.

Write another series of scripts that generate useful information.

Work with a number of institutions to feed their data into the system to allow appropriate cross institutional/cross LMS comparisons.

Something I forgot – also work on defining some definition of adoption that improves upon those used by Malikowski.

Start small

We could start something like this at CQU. We have at least two historically used “LMS/VLEs” and one new one. Not to mention Col already having made progress on specific aspects of the above.

The logical next step would be to expand to other institutions. Within Australia? ALTC?

Factors related to the breadth of use of LMS/VLE features

As a step towards thinking about how you judge the success of an LMS/VLE, this post looks at some work done by Steven Malikowski. Why his work? Well, he is co-author on three journal papers that provide one perspective on the usage of features of an LMS, including one that proposes a model for research into course management systems. A list of the papers is in the references section.

This post focuses on looking at the 2008 paper. On the whole, there seems to be a fair bit of space for research to extend and improve on this work.

Factors related to breadth of use

The abstract of this paper (Malikowski, 2008) is

A unique resource in course management systems (CMSs) is that they offer faculty members convenient access to a variety of integrated features. Some features allow faculty members to provide information to students, and others allow students to interact with each other or a computer. This diverse set of features can be used to help meet the variety of learning goals that are part of college classes. Currently, most CMS research has analyzed how and why individual CMS features are used, instead of analyzing how and why multiple features are used. The study described here reports how and why faculty members use multiple CMS features, in resident college classes. Results show that nearly half of faculty members use one feature or less. Those who use multiple features are significantly more likely to have experience with interactive technologies. Implications for using and encouraging the use of multiple CMS features are provided.

Suggests that cognitive psychology is the theoretical framework used. In particular, the idea that there are discrete categories of learning goals ranging from simple to complex, and that learners who don’t master the simple first will have difficulties if they attempt the more complex. An analogy is made with the use of a CMS: there are simple features that need to be learned before using complex features.

In explaining previous research on adoption of features of an LMS (mostly his own quantitative evaluations) the author reports that the college/discipline an academic is in explains most variation.

How to use these findings

The point is made that a CMS is used to transmit information more than twice as much as it is used for anything else. Also, that there are cheaper and better ways to transmit information.

The suggestion is then made that

Instructional designers, researchers, and others interested in increasing effective CMS use can use the research just summarized to emphasize factors that are related to the use of uncommon CMS features and deemphasize factors that are not related to increased use.

But the best advice that is presented is that if you wish to promote use of X, then encourage it in discipline Y first, since they have shown interest in related features. Then, after generating insight, seek to take it elsewhere…….??

Use of multiple features

Only a small number of studies have focused on use of multiple features. Most achieved by asking academics how they use the CMS. Suggests that a second way is to visit course sites and observe which features are used. Suggests that observing behaviour is more accurate than asking them how they behave.

IMPLICATION: the approach Col and Ken are using for Blackboard and what I’m using for Webfuse is automated. Not manual. A point of departure.

Methodology

Three bits of data were used

  1. Usage of 6 common CMS features
    • Random sample of 200 staff at a US institution using D2L were asked to participate – 81 chose to participate.
    • 154 D2L sites were analysed as staff teach more than one course a semester
    • 2 research team members visited and manually analysed each course site – repeating until no discrepancies.
  2. External factors: class size, the college/discipline and class level (1st, 2nd year etc)
    Gathered manually from the course site.
  3. 10 internal factors focused primarily on the faculty members’ previous experience with technology.
    Gathered by surveying staff.

Limitation: I wonder if D2L has any adaptive release mechanisms like Blackboard. Potentially, if the team member visiting each course site has an incorrectly configured user account, they may not be able to see everything within the site.

Purpose was to determine if internal or external factors were related to adoption of multiple CMS features. Established using a regression analysis with the dependent variable being the number of features adopted and the independent variables being the 3 external and 10 internal factors.

What is adoption?

This is a problem Col and I have talked about and which I’ve mentioned in some early posts looking at Webfuse usage. The definition Malikowski used in this study was

In this study, adopting a feature was defined as a situation where a D2L Web site contained enough instances of a feature so this use was at or above the 25th percentile, for a particular feature. For example, if a faculty member created a D2L Web site with 10 grade book entries, the grade book feature would have been adopted in this Web site, since the 25th percentile rank for the grade book feature is 7.00. However, if the same Web site contained 10 quiz questions, the quiz feature would not have been adopted since the 25th percentile rank for quiz questions is 12.25

I find this approach troubling. Excluding a course from adopting the quiz feature because it has only 10 questions seems harsh. What if the 10 questions were used for an important in-class test and were a key component of the course? What if a few courses have added all of the quiz questions provided with the textbook into the system?

Implication: There’s an opportunity to develop and argue for a different – better – approach to defining adoption.
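
As a small illustration of how much the definition matters, the following compares two definitions over a hypothetical quiz_counts.csv (columns course,quiz_questions) of per-course question counts. The file is an assumption for illustration only; the 12.25 threshold is the 25th percentile for quiz questions quoted above.

# Definition 1: a course has adopted the quiz feature if it has at least one question.
awk -F, 'NR > 1 && $2 >= 1 { n++ } END { print n " courses adopted (at least one instance)" }' quiz_counts.csv
# Definition 2 (Malikowski 2008): adoption requires being at or above the 25th percentile (12.25 for quiz questions).
awk -F, 'NR > 1 && $2 >= 12.25 { n++ } END { print n " courses adopted (25th percentile)" }' quiz_counts.csv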

Sample of results

  • 36% of sites used only 1 feature
  • 72% of sites used 2 or less features
  • 0% of sites used all 6 features
  • Only four of the external/internal factors could be used to predict the number of CMS features adopted
    1. Using quizzes
    2. College of social science
    3. Using asynchronous discussions
    4. Using presentation software (negative correlation)

Discussion

Suggests that the factors found to predict multiple feature use can be used to guide instructional designers to work with these faculty to determine what works before going to the others.

Limitation: I don’t find this a very convincing argument. I start to think of the technologists’ alliance and the difference between early adopters and the majority. The folk using multiple LMS features are likely to be very different from those not using many. Focusing too much on those already using many might lead to the development of insight that is inappropriate for the other category of user.

Implication: There seems to be some research opportunities that focuses on identifying the differences between these groups of users by actually asking them. i.e. break academics into groups based on feature usage and talk with them or ask them questions designed to bring out differences. Perhaps to test whether they are early adopters or not.

References

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

How do you measure success with institutional use of an LMS/VLE?

My PhD is essentially arguing that most institutional approaches to e-learning within higher education (i.e. the adoption and long term use of an LMS) have some significant flaws. The thesis will/does describe one attempt to formulate an approach that is better. (Aside: I will not claim that the approach is the best, in fact I’ll argue that the notion of there being “one best way” to support e-learning within a university is false.) The idea of “better” raises an interesting/important question, “How do you measure success with institutional use of an LMS?” How do you know if one approach is better than another?

These questions are important for other reasons. For example, my current institution is implementing Moodle as its new LMS. During the selection and implementation of Moodle there have been all sorts of claims about its impact on learning and teaching. During this implementation process, management have also been making all sorts of decisions about how Moodle should be used and supported (many of which I disagree strongly with). How will we know if those claims are fulfilled? How will we know if those plans have worked? How will we know if we have to try something different? In the absence of any decent information about how the institutional use of the LMS is going, how can an organisation and its management make informed decisions?

This question is of increasing interest to me for a variety of reasons, but the main one is the PhD. I have to argue in the PhD and resulting publications that the approach described in my thesis is in some way better than other approaches. Other reasons include the work Col and Ken are doing on the indicators project and obviously my beliefs about what the institution is doing. Arguably, it’s within the responsibilities of my current role to engage in some thinking about this.

This post, and potentially a sequence of posts after, is an attempt to start thinking about this question. To flag an interest and start sharing thoughts.

At the moment, I plan to engage with the following bits of literature:

  • Malikowski et al and related CMS literature.
    See the references section below for more information. But there is an existing collection of literature specific to the usage of course management systems.
  • Information systems success literature.
    My original discipline of information systems has, not surprisingly, a big collection of literature on how to evaluate the success of information systems. Some colleagues and I have used bits of this literature in some publications (see references).
  • Broader education and general evaluation literature.
    The previous two bodies of literature tend to focus on “system use” as the main indicator of success. There is a lot of literature around the evaluation of learning and teaching, including some arising from work done at CQU. This will need to be looked at.

Any suggestions for other places to look? Other sources of inspiration?

Why the focus on use?

Two of the three areas of literature mentioned above draw heavily on the level of use of a system in order to judge its success. Obviously, this is not the only measure of success and may not even be the best one. Though the notion of “best” is very subjective and depends on purpose.

The advantage that use brings is that it can, to a large extent, be automated. It can be easy to generate information about levels of “success” that are at least, to some extent, better than having nothing.

At the moment, most universities have nothing to guide their decision making. Changing this by providing something is going to be difficult. Providing the information is reasonably straightforward; changing the mindset and processes at an institution to take these results into account when making decisions…..

Choosing a simple first step, recognising its limitations and then hopefully adding better measures as time progresses is a much more effective and efficient approach. It enables learning to occur during the process and also means that if priorities or the context changes, you lose less because you haven’t invested the same level of resources.

In line with this is that the combination of Col’s and Ken’s work on the indicators project and my work associated with my PhD provides us with the opportunity to do some comparisons of two different systems/approaches within the same university. This sounds like a good chance to leverage existing work into new opportunities and develop some propositions about what works around the use of an LMS and what doesn’t.

Lastly, there are some good references that suggest that looking at use of these systems is a good first start. e.g. Coates et al (2005) suggest that it is the uptake and use of features, rather than their provision, that really determines their educational value.

References

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. Paper presented at the Australasian Conference on Information Systems’2005, Sydney.

Coates, H., James, R., & Baldwin, G. (2005). A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning. Tertiary Education and Management, 11(1), 19-36.

Jones, D., Cranston, M., Behrens, S., & Jamieson, K. (2005). What makes ICT implementation successful: A case study of online assignment submission. Paper presented at the ODLAA’2005, Adelaide.

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

What can history tell us about e-learning and its future?

The following contains some initial thoughts about what might turn into a paper for ASCILITE’09. It’s likely that I’ll co-author this with Col Beer.

Origins

The idea of this paper has arisen out of a combination of local factors, including:

  • The adoption of Moodle as the new LMS for our institution.
  • The indicators project Col is working on with Ken.
    Both Col and I used to support staff use of Blackboard. This project aims to do some data mining on the Blackboard system logs to better understand how and if people were using Blackboard.
  • Some of the ideas that arose from writing the past experience section of my thesis.

Abstract and premise

The premise of the paper starts with the Santayana quote

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

The idea is that there is a long history of attempting to improve learning and teaching through technology. There is a history of universities moving to new learning management systems and staff within those universities using learning management systems. In fact, our institution has over 10 years’ experience using learning management systems. Surely, there are some lessons within that experience that can help inform what is being done with the transition to Moodle at our institution?

The aim of the paper will be, at least, to examine that history, both broadly and specifically at our institution, and seek to identify those lessons. Perhaps the paper might evaluate the transition to Moodle at our institution and, based on that past experience, seek to suggest what some possible outcomes might be.

As you might guess from some of the following, and from what I’ve written in the past experience section of my thesis, I have a feeling that as we explore this question we are likely to find that our institution has failed to heed Santayana’s advice on retentiveness and that it may be repeating the past.

Given that some of the folk directly involved in our institution’s transition to Moodle read this blog and we’ll be talking about this paper within the institution, perhaps we can play a role in avoiding that. Or perhaps, as we dig deeper, the transition is progressing better than I currently perceive.

In reality, I think we’ll avoid making specific comments on what is happening in our institution. The transition to Moodle is being run as a very traditional teleological process. This means that any activity not seen as directly contributing to the achievement of the purpose (i.e. that is not critical) will be seen as something that needs to be curtailed.

Connection with conference themes?

The paper should try and connect with the themes of the conference. Hopefully in a meaningful way, but a surface connection would suffice. The theme for the conference is “Same places, different spaces” and includes the following sub themes (I’ve included bits that might be relevant to this paper idea)

  • Blended space
    What makes blended learning effective, why, how, when and where?
  • Virtual space
    What is the impact, what are the implications and how can the potential of this emergent area be realistically assessed?
  • Social space
    What Web 2.0 technologies are teachers and students using? How well do they work, how do you know, and what can be done to improve and enhance their use?
  • Mobile space
  • Work space

Not a great fit with the sub themes, but I think there is a connection with the theme in a roundabout way. Perhaps the title could be “E-learning and history: different spaces, same approaches” or something along those lines. This might have to emerge once we’ve done some work.

Potential structure and content

What follows is an attempt to develop a structure for the paper and fill in some indicative content and/or work we have to do. It assumes an introduction that will position e-learning as an amnesiac field. This suggestion will be built around the following and similar quotes

Learning technology often seems an amnesiac field, reluctant to cite anything ‘out of date’; it is only recently that there has been a move to review previous practice, setting current developments within an historical context…many lessons learnt when studying related innovations seem lost to current researchers and practitioners. (Oliver, 2003)

I should note that the following is a first draft, an attempt to get my ideas down so Col and I can discuss it and see if we can come up with better ideas. Feel free to suggest improvements.

History of technology mediated learning and hype cycles

The aim of this section is to examine the broader history of technology-mediated learning going back to the early 1900s and drawing a small amount of content from ????.

The main aim, however, is to attempt to identify a hype cycle associated with these technologies that generally results in little or no change in the practice of learning and teaching. It will draw on some of the ideas and content from here. It will also draw on related hype cycle literature, including Birnbaum’s fad cycle and Gartner’s hype cycle.

E-learning usage: quantity and quality

This section will provide a summary of what we know, from the literature and from the local institution, about the quantity and quality of past usage of e-learning, with a particular focus on the LMS.

Col’s indicators project has generated some interesting and depressing results from the local system. For example, our institution has a large distance education student cohort: a group of students that rarely, if ever, set foot on a campus. They study almost entirely by print-based distance education and e-learning. Recently, Col has found that 68% of those distance education students have never posted to a course discussion forum.
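
As a concrete illustration of the kind of query that sits behind a figure like this, here is a minimal sketch in Python. It is not the indicators project’s actual code; the file names and column names (student_id, mode and so on) are assumptions made purely for the example.

    import csv

    def pct_never_posted(enrolments_csv, posts_csv):
        # Hypothetical enrolment extract: columns student_id, course, mode
        with open(enrolments_csv) as f:
            de_students = {row["student_id"] for row in csv.DictReader(f)
                           if row["mode"] == "distance"}
        # Hypothetical forum post extract: columns student_id, forum_id, posted_at
        with open(posts_csv) as f:
            posters = {row["student_id"] for row in csv.DictReader(f)}
        # Distance education students who never appear as the author of a post
        never_posted = de_students - posters
        return 100.0 * len(never_posted) / len(de_students)

    print(round(pct_never_posted("enrolments.csv", "forum_posts.csv")), "%")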

Paradigms of e-learning and growing abundance

The aim of this section would be to suggest that the focus on the LMS is itself rather short-sighted and does not recognise the on-going evolution of e-learning. i.e. we’re not going to be stuck in the LMS rut for the long term, and perhaps the institution should be looking at that change and how it can harness it.

This section will draw on the paradigms of e-learning. It may also draw on some of the ideas contained in this TED talk by Chris Anderson around the four key stages of technology and related work.

Thinking about this brings up some memories of the 90s. I remember when friends of mine in the local area would enroll at the university in order to get Internet access and an email address. I remember when the university had to discourage students from using outside email accounts (e.g. hotmail) because they didn’t provide enough disk space.

This was because email and Internet access inside universities were more abundant than outside. Those days are long gone. External email providers like Hotmail and Gmail provide larger disk quotas for email than institutions do. For many people it is increasingly cheap to get Internet access at home. At least, it’s cheaper to pay for it than to pay for a university education you don’t need.

Diffusion, chasms and task corruption

Perhaps this section could be titled “Lessons”.

The idea behind this suggested section is starting to move a little beyond the historical emphasis. It’s more literature and/or idea based. So I’m not sure of its place. Perhaps it’s the history of ideas around technology. Perhaps it can fit.

The idea would be to include a list of ideas associated with e-learning – diffusion of innovations, the chasm, task corruption – and the lessons each suggests.

Predictions and suggestions

This is getting to the sections that are currently more up in the air. Will it be an evaluation of the transition, or will it simply be a list of more generic advice? The generic advice might be safer institutionally, a better fit with the conference themes, and more generally useful.

An initial list:

  • The adoption of Moodle will decrease the quality of learning and teaching at our institution, at least in the short term.
  • Longer term, unless there is significant activity to change the conception of learning and teaching held by the academics, the quantity and quality of use of Moodle will be somewhat similar, possibly a little better (at least quantity) than that of previous systems.
    Idea: Col, can we get some of those global figures you showed me broken down by year to see what the trend is? i.e. does it get better or worse over time?
  • Strategic specification of standards or innovation will have little or no impact on quantity and quality, will perhaps contribute to a lowest common denominator, and will likely encourage task corruption, work arounds and shadow systems.
  • Increasingly, the more engaged academics will start to use external services to supplement the features provided by the LMS.

I’m often criticised as being negative. Which is true: I believe all of my own ideas have flaws; imagine what I think of the ideas of others! So, perhaps the paper should include some suggestions.

  • Focus more on contextual factors that are holding back interest in learning and teaching by academics. (See technology gravity)
  • Recognise the instructional technology chasm and take steps to design use of Moodle to engage with the pragmatists.
  • Others??

References

Oliver, M. (2003). Looking backwards, looking forwards: an Overview, some conclusions and an agenda. Learning Technology in Transition: From Individual Enthusiasm to Institutional Implementation. J. K. Seale. Lisse, Netherlands, Swets & Zeitlinger: 147-160.

Measuring the design process – implications for learning design, e-learning and university teaching

I came across the discussion underpinning these thoughts via a post by Scott Stevenson. His post was titled “Measuring the design process”. It is his take on a post titled “Goodbye Google” by Doug Bowman. Bowman was the “Visual Design Lead” at Google and has recently moved to Twitter as Creative Director.

My take on the heart of the discussion is the mismatch between the design and engineering cultures. Design is okay with relying on experience and intuition as the basis for a decision, while the engineering culture wants everything measured, tested and backed up by data.

In particular, Bowman suggests that the reason for this reliance on data is that

a company eventually runs out of reasons for design decisions. With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?”

The doubt, the lack of a reason, purpose, or vision for a change creates a vacuum that needs to be filled. There needs to be some reason to point to for the decision.

When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board.

There can’t be anything wrong with that, can there? If you’re rational and have data to back you up then you can’t be blamed. Bowman suggests that there is a problem

And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.

He goes on to illustrate the point: the focus goes to small questions – should a border be 3, 4 or 5 pixels wide – while the big questions, the important questions that can make a design distinctive, get ignored. This happens because hard problems are hard, and it is almost certainly impossible to gather objective data for them.

Stevenson makes this point

Visual design is often the polar opposite of engineering: trading hard edges for subjective decisions based on gut feelings and personal experiences. It’s messy, unpredictable, and notoriously hard to measure.

Learning design, e-learning and university teaching

This same problem arises in universities around learning design, e-learning and university teaching. The design of university teaching and learning has some strong connections with visual design. It involves subjective and contextual decisions, it’s messy, unpredictable and hard to measure.

The inherently subjective and messy nature of university teaching brings it into direct tension with two increasingly important and powerful cultures within the modern university:

  1. Corporate management; and
    Since sometime in the 90s, at least within Australia, corporate managerialism has been on the rise within universities. Newton (2003) has a nice section on some of the external factors that have contributed to this rise; I’ve summarised Newton here. Further underpinning this rise has been what Birnbaum (2000) calls “education’s Second Management Revolution” from around 1960, which “marks the ascendance of rationality in academic management”.
  2. Information technology.
    With the rise of e-learning and other enterprise systems, the corporate IT culture within universities is increasingly strong. In particular, from my cynical perspective, when they can talk the same “rational” talk as the management culture, back this up with reams of data (regardless of validity) and always resort to techno-babble to confuse management.

Both these cultures put an emphasis on rationality, on having data to support decisions and on being able to quantify things.

Symptoms of this problem

Just taking the last couple of years, I’ve seen the following symptoms of this:

  • The desire to have a fixed, up-front estimate of how long it takes to re-design a course.
    I want you to re-design 4 courses. How long will it take?
  • The attempt to achieve quality through consistency.
    This is such a fundamentally flawed idea, but it is still around. Sometimes it is proposed by people who should know better. The idea that a single course design, word template or educational theory is suitable for all courses at an institution, let alone all learners, sounds good, but doesn’t work.
  • Reports indicating that the re-design and conversion of courses to a new LMS are XX% complete.
    Heard about this just recently. If you are re-designing a raft of different courses, taught by different people, in different disciplines, using different approaches, and then porting them to a new LMS, how can you say it is XX% complete? The variety in courses means you can’t quantify how long the work will take. You might have 5 or 10 courses completed, but that doesn’t mean you’re 50% complete; the last 5 courses might take much longer.
  • The use of a checklist to evaluate LMSes.
    This has to be the ultimate: using a checklist to reduce the performance of an LMS to a single number!
  • Designing innovation by going out to ask people what they want.
    For example, let’s go and ask students or staff how they want to use Web 2.0 tools in their learning and teaching. Even that old “Fordist”, the archetypal example of rationalism, Henry Ford knew better than this

    “If I had asked people what they wanted, they would have said faster horses.”

The scary thing is, because design is messy and hard, the rational folk don’t want to deal with it. Much easier to deal with the data and quantifiable problems.

Of course, the trouble with this is summarised by a sign that used to hang in Einstein’s office at Princeton (apparently)

Not everything that counts can be counted, and not everything that can be counted counts.

Results

This mismatch between rationality and the nature of learning and teaching leads, from my perspective, to most of the problems facing universities around teaching. Task corruption and a reliance on “blame the teacher”/prescription approaches to improving teaching arise from this mismatch.

This mismatch arises, I believe, for much the same reason as Bowman used in his post about Google. The IT and management folk don’t have any convictions or understanding about teaching or, perhaps, about leading academics. Consequently, they fall back onto the age-old (and disproven) management/rational techniques, as these give the appearance of rationality.

References

Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail. San Francisco, Jossey-Bass.

The myth of rationality in the selection of learning management systems/VLEs

Choices

Over the last 10 to 15 years I’ve been able to observe at reasonably close quarters at least 3 processes to select a learning management system/virtual learning environment (LMS/VLE) for a university. During the same time I’ve had the opportunity to sit through presentations and read papers provided by people who had led their organisation through the same process.

One feature that the vast majority of these processes have reportedly had was objectivity. They were supposedly rational processes where all available data was closely analysed and a consensus decision was made.

Of course, given what I think about people and rationality it is of little surprise that I very much doubt that any of these processes could ever be rational. I think most of the folk claiming that it was rational are simply trying to dress it up, mainly because society and potentially their “competitors” within the organisation expect them to be, or at least appear to be, rational.

I don’t blame them. The vast majority, if not all, of what is taught in information systems/technology, software development and management automatically assumes that people are rational. It’s much easier to give the appearance of rationality. This really is a form of task corruption, in this case the simulation “type” of task corruption.

The reality?

So, if it isn’t rational and neat, what is it? Well, messy and contingent and highly dependent on the people involved, their agendas and their relative ability to influence the process. And I’ve just come across probably the first paper (Jones, 2008 – and no, I’m not the author) that attempts to engage with and describe the messiness of the process.

It’s also somewhat appropriate as it provides one description of the process used by the Open University in the UK to adopt Moodle, the same LMS my current institution has selected.

The paper concludes with the following

There is no one authoritative voice in this process and whilst the process of infrastructural development and renewal can seem to be the outcome of a plan the process is one that is negotiated between powerful institutional interests that have their roots in different roles within the university. Negotiation is not only between units and the process of decision making is also affected by the sequence of time in taking decisions, for example by who is in post when key decisions are taken. Decisions taken in terms of the technological solutions for infrastructural development have definite consequences in terms of the affordances and constraints that deployed technologies have in relation to local practices. The strengths and weaknesses of an infrastructure seem to reside in a complex interaction of time, artefacts and practices.

Implications?

If we know that, even in the best of situations, human beings are not rational, and we know that in situations involving complex problems and multiple perspectives the chances of a rational, objective decision are almost non-existent, then:

  • Why do we insist on this veneer of rationality?
  • Why do we enter into processes like an LMS evaluation and selection using processes that assume everyone is rational?
  • Are there not processes that we can use that recognise that we’re not rational and that work within those confines?

Comment on Moodle

The paper includes the following quotes from a couple of senior managers at the Open University. When asked about the weaknesses of the approach the OU had taken, one senior manager responded

Weakness ? …the real weakness is probably in the underlying platform that we’ve chosen to use, Moodle. That’s probably the biggest weakness, and I think we made the right decision to adopt Moodle when we did. There wasn’t another way of doing it.

Then a senior manager in learning and teaching had this to say, continuing the trend.

Where Moodle was deficient was in the actual tools within it, as the functionalities of the tools were very basic. It was also very much designed for – in effect – classroom online. It’s a single academic teaching to a cohort of students. Everything’s based around the course rather than the individual student. So it’s teaching to a cohort rather than to an individual, so a lot of the work has gone in developing, for example, a much more sophisticated roles and permissions capability. There really are only 3 roles administrator, instructor, and student, but we have multiple roles…

This is particularly interesting as my current institution has some similarities with the OU in terms of multiple likely roles.

Of course, given that organisations are rational, I’ll be able to point out this flaw to the project team handling the migration to the new LMS. They will investigate the matter (if they don’t already know about it), and if it’s a major problem incorporate a plan to address it before the pilot, or at least the final migration.

Of course, that’s forgetting the SNAFU principle and the tension between innovation and accountability and its effects on rationality.

Addendum

It has been pointed out to me that the penultimate paragraph in the previous section, while making the point about my theoretical views of organisations and projects, does not necessarily represent a collegial, or at least vaguely positive, engagement with what is a hugely difficult process.

To that end, I have used formal channels to make the LMS implementation team aware of the issue raised in Jones (2008).

I have also thought about whether or not I should delete/modify the offending paragraph and have decided against it. There will always be ways to retrieve the original content and leaving both the paragraph and the addendum seems a more honest approach to dealing with it.

I also believe it can make a point about organisations, information systems projects and the information flows between users, developers and project boards. The SNAFU principle and various other issues such as task corruption do apply in these instances. Participants in such projects always bring very different perspectives and experiences, both historically and of the project and its evolution.

Too often, in the push to appear rational, the concerns and perspectives of some participants will be sidelined. Often this creates a sense of powerlessness and other feelings that don’t necessarily increase the sense of inclusion and ownership of the project that is typically wanted. Often the emphasis becomes “shoot the messenger” rather than dealing with the fundamental issues and limitations of the approaches being used.

The push to be a team player is often code for “toe the company line”, a practice that only further increases task corruption.

I have always taken the approach of being open and transparent in my views. I generally attempt to retain a respectful note when expressing those views, but sometimes, especially in the current context, that level may not meet the requirements of some. For that I apologise.

However, can you also see how, even now, I’m struggling with the same issues as summarised in the SNAFU principle? Should I take more care with what I post, to the extent of avoiding any comments that might be troubling for some? After all, if I’m too troubling, it might come back and bite me.

Or is it simply a case of me being rude and disrespectful and deserving of a bit of “bite me”?

What do you think? Have your say.

References

Jones, C. (2008). Infrastructures, institutions and networked learning. 6th International Conference on Networked Learning, Halkidiki, Greece.

Some ideas for e-learning indicators – releasing half-baked ideas

The following is a quick mind dump of ideas that have arisen today about how you might make use of the usage data and content of course websites from course management systems (CMS) to find out interesting things about learning and teaching. i.e. Col is aiming to develop indicators that might be of use to the participants associated with e-learning – management, support staff, academics, students etc.

This post is an attempt to live up to some of the ideas of Jon Udell that I discussed in this post about getting half-baked ideas out there. Col and I have talked a bit today and I’ve also recalled some prior thoughts that I’m putting down so we don’t forget them.

The major benefit of getting these half-baked ideas out there is your thoughts. What do you think? Want to share your half-baked ideas?

The fundamental problem

How do you identify/generate useful indicators that might be harnessed to act as weak signal detectors? How can we use all of this data about e-learning to generate something useful?

Disclaimer

It is fully understood that drawing simply upon usage data and other electronic data can never tell you the full story about a student’s learning experience or the quality of the teaching. At best it can indicate that something might be there, in almost all cases further investigation would be required to be certain.

For example, lots of discussion on a course discussion forum, with lots of people responding to each other, might be indicative of a good learning experience. Or it might be indicative of an out-of-control flame war.

However, knowing a little bit more about what’s going on and applying it sensibly will hopefully be of some use.

The following are propositions about what might be interesting indicators. These need to be tested, both quantitatively and qualitatively.

Content correlations?

It’s fairly widely accepted that most CMSes are generally used primarily as content repositories. Academics put lots of documents and other content into them for students to access. In some cases the ability of the CMS to serve some other purpose (e.g. to encourage discussion and collaboration) is significantly limited by the quality and variety of the tools provided for these services and also by some of the fundamental assumptions built into the CMSes (e.g. you have to log in to access the services).

If content is the primary use, is there anything useful that can be gained from it? What I can think of includes:

  • If there is no or little information then this is bad.
    If the course site doesn’t contain anything, that’s probably a sign of someone who is not engaging with teaching. Courses with little or no content could be a negative indicator.
  • If the information is structured well, then it is good.
    Poor structure, again, may indicate someone less than knowledgeable or engaged. Almost all CMSes use a hierarchical structure for information. If all the content is located within 1 of 7 parts of the hierarchy, things may not be good.
  • If the content is heavily used, then this might imply usefulness.
    If students are using content heavily and that heavy use is consistent across most content this might indicate well designed content, which might be a good thing.
  • If the content is primarily the product of publishers, then this might be bad.
    A course that relies almost entirely on content from a textbook publisher might suggest an experience that is not customised to the local context. It might suggest an academic taking the easy way out, which might indicate a less than positive outcome.
  • A large average number of hits on course content per student might be a positive indicator.
    If, on average, all of the students use the course content more, this may indicate more appropriate/useful material which might indicate a good learning experience.

Looking for particularly strong courses (see images below) might lead to the following being of interest

  • Percentage of total content per course.
    See images below. Essentially, courses with a greater percentage might be better.
  • Percentage of total requests.

Percentage of staff using the system

A simple one, the greater the percentage of the employed teaching staff using the system, the better.
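
To make a couple of these indicators more concrete, the following is a minimal sketch of how the “average hits on course content per student” and “percentage of teaching staff using the system” measures might be calculated. The extracts, file names and column names are assumptions for the purpose of illustration, not the actual schema of any particular LMS.

    import csv
    from collections import defaultdict

    def average_hits_per_student(hits_csv, enrolments_csv):
        # Hypothetical extracts: hits has columns course, student_id, item_id;
        # enrolments has columns course, student_id
        enrolled = defaultdict(set)
        with open(enrolments_csv) as f:
            for row in csv.DictReader(f):
                enrolled[row["course"]].add(row["student_id"])
        hits = defaultdict(int)
        with open(hits_csv) as f:
            for row in csv.DictReader(f):
                hits[row["course"]] += 1
        # Average number of content hits per enrolled student, for each course
        return {course: hits[course] / len(students)
                for course, students in enrolled.items() if students}

    def staff_adoption_percentage(staff_csv, users_csv):
        # Hypothetical extracts: staff has column staff_id;
        # users has columns user_id, role
        with open(staff_csv) as f:
            employed = {row["staff_id"] for row in csv.DictReader(f)}
        with open(users_csv) as f:
            active = {row["user_id"] for row in csv.DictReader(f)
                      if row["role"] == "instructor"}
        # Percentage of employed teaching staff who show up as users of the system
        return 100.0 * len(employed & active) / len(employed)

Even something this simple would allow year-on-year or course-by-course comparisons, subject to the disclaimer above.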

An example

The following images illustrate how this was used in this presentation to compare and contrast usage of Webfuse after a period using the wrong development process and then after a period of using a better development process (remember the disclaimer above).

Results of “bad” process

Usage of an LMS - a measure (1 of 4)

Usage of an LMS - a measure (2 of 4)

Usage of an LMS - a measure (3 of 4)

Usage of an LMS - a measure (4 of 4)

Results of “good” process

Usage of an LMS - staff adoption (1 of 3)

Usage of an LMS - staff adoption (2 of 3)

Usage of an LMS - staff adoption (3 of 3)

Getting half-baked ideas out there: improving research and the academy

In a previous post examining one reason folk don’t take to e-learning I included the following quote from a book by Carolyn Marvin

the introduction of new media is a special historical occasion when patterns anchored in older media that have provided the stable currency for social exchange are reexamined, challenged, and defended.

In that previous post I applied this idea to e-learning. In this post I’d like to apply this idea to academic research.

Half-baked ideas

In this post Jon Udell talks about the dissonance between the nature of blogs, the narrative form he recommends for blogs and the practices of academics. In it he quotes an academic’s response to his ideas for writing blogs as

I wouldn’t want to publish a half-baked idea.

Jon closes the blog post with the following paragraph

That outcome left me wondering again about the tradeoffs between academia’s longer cycles and the blogosphere’s shorter ones. Granting that these are complementary modes, does blogging exemplify agile methods — advance in small increments, test continuously, release early and often — that academia could use more of? That’s my half-baked thought for today.

I think this perspective sums it up nicely. The patterns of use around the old/current media for academic research (conference and journal papers) are similar to heavyweight software development methodologies: they rely on a lot of up-front analysis and design to ensure that the solution is 100% okay. The patterns of use of the blogosphere are much more like those of agile development methods: small changes, get it working, get it out, and learn from that experience to inform the next small change.

Update: This post talks a bit more about Udell’s views in light of a talk he gave at an EDUCAUSE conference. There is a podcast of the presentation.

There are many other examples of this.

Essentially, the standard practices associated with research projects in academia prevent many folk from getting their “half-baked ideas” out into the blogosphere. There are a number of reasons, but most come back to not wanting to look like a fool. I’ve seen this many times with colleagues wanting to spend vast amounts of time polishing a blog post.

As a strong proponent and promoter of ateleological design processes, I’m interested in how this could be incorporated into research. Yesterday, in discussions with a colleague, I think we decided to give it a go.

What we’re doing and what is the problem?

For varying reasons, Col and I are involved, in different ways, with a project going under the title of the indicators project. However, at the core of our interest is the question

How do you data mine/evaluate usage statistics from the logs and databases of a learning management system to draw useful conclusions about student learning, or about the success or otherwise of these systems?

This is not a new set of questions. The data mining of such logs is quite a common practice and has a collection of approaches and publications. So, the questions for us become:

  • How can we contribute or do something different than what already exists?
  • How can we ensure that what we do is interesting and correct?
  • How do we effectively identify the limitations and holes underpinning existing work and our own work?

The traditional approach would be for us (or at least Col) to go away, read all the literature, do a lot of thinking and come up with some ideas that are tested. The drawback of this approach is that there is limited input from other people with different perspectives. A few friends and colleagues of Col’s might get involved during the process, however, most of the feedback comes at the end when he’s published (or trying to publish) the work.

This might be too late. Is there a way to get more feedback earlier? To implement Udell’s idea of release early and release often?

Safe-fail probes as a basis for research

The nature of the indicators project is that there will be a lot of exploration to see if there are interesting metrics/analyses that can be done on the logs to establish useful KPIs, measurements etc. Some will work, some won’t and some will be fundamentally flawed from a statistical, learning or some other perspective.

So rather than do all this “internally” I suggested to Col that we blog any and all of the indicators we try and then encourage a broad array of folk to examine and discuss what was found. Hopefully generate some input that will take the project in new and interesting directions.

Col’s already started this process with the latest post on his blog.

In thinking about this I can come up with at least two major problems to overcome:

  • How to encourage a sufficient number and diversity of people to read the blog posts and contribute?
    People are busy. Especially where we are. My initial suggestion is that it would be best if the people commenting on these posts included expertise in: statistics; instructional design (or associated areas); a couple of “coal-face” academics of varying backgrounds, approaches and disciplines; a senior manager or two; and some other researchers within this area. Not an easy group to get together!
  • How to enable that diversity of folk to understand what we’re doing and for us to understand what they’re getting at?
    By its nature this type of work draws on a range of different expertise. Each expert will bring a different set of perspectives and will typically assume everyone is aware of them. We won’t be. How do you keep all this at a level that everyone can effectively share their perspectives?

    For example, I’m not sure I fully understand all of the details of the couple of metrics Col has talked about in his recent post. This makes it very difficult to comment on the metrics and re-create them.

Overcoming these problems, in itself, is probably a worthwhile activity. It could establish a broader network of contacts that may prove useful in the longer term. It would also require that the people sharing perspectives on the indicators would gain experience in crafting their writing in a way that maximises understandability by others.

If we’re able to overcome these two problems it should produce a lot of discussion and ideas that contributes to new approaches to this type of work and also to publications.

Questions

Outstanding questions include:

  • What are the potential drawbacks of this idea?
    The main fear, I guess, is that someone not directly involved in the discussion steals the ideas and publishes them, unattributed, before we can publish. There’s probably also a chance that we’ll look like fools.
  • How do you attribute ideas and handle authorship of publications?
    If a bunch of folk contribute good ideas which we incorporate and then publish, should they be co-authors, simply referenced appropriately, or something else? Should it be a case by case basis with a lot of up-front discussion?
  • How should it be done?
    Should we simply post to our blogs and invite people to participate and comment on the blogs? Should we make use of some of the ideas Col has identified around learning networks? For example, agree on common tags for blog posts and del.icio.us etc. Provide a central point to bring all this together?

References

Introna, L. (1996). Notes on ateleological information systems development. Information Technology & People, 9(4), 20-39.

Some possible reasons why comparisons of information systems are broken

All over the place there are people in organisations performing evaluations and comparisons of competing information systems products with a strong belief that they are being rational and objective. Since the late 1990s or so, most Universities seem to be doing this every 5 or so years around learning management systems. The problem is that these processes are never rational or objective because the nature of human beings is such that they never can be (perhaps very rarely – e.g. when I’m doing it 😉 ).

Quoting Dave Snowden

Humans do not make rational, logical decisions based on information input, instead they pattern match with either their own experience, or collective experience expressed as stories. It isn’t even a best fit pattern match, but a first fit pattern match. The human brain is also subject to habituation, things that we do frequently create habitual patterns which both enable rapid decision making, but also entrain behaviour in such a manner that we literally do not see things that fail to match the patterns of our expectations.

Dave also makes the claim that all the logical process, evaluations, documents and meetings we wrap around our pattern-matching decisions is an act of rationalisation. We need to appear to be rational so we dress it up. He equates the value of this “dressing up” with that of the ancient witch doctor claiming some insight from the spirit world leading him to the answer.

Via a Luke Daley tweet I came across a TED talk by Dan Gilbert that provides some evidence from psychology about why this is true. You can see it below or go to the TED page

As an aside the TED talks provide access to a lot of great presentations and even better they are freely available and can be downloaded. Putting them on my Apple TV is a great way to avoid bad television.

The model underpinning Blackboard and how ACCT19064 uses it

As proposed earlier, this post is the first step in attempting to evaluate the differences between three learning management systems. It attempts to understand and describe the model underpinning Blackboard version 6.3.

Hierarchical

Blackboard, like most web-based systems of a certain vintage (mid-1990s to early/mid-2000s), tends to structure websites as a hierarchical collection of documents and folders (files and directories for those of us from the pre-desktop-metaphor interface days). This approach has its source in a number of places, but most directly it comes from computer file systems.

Webopedia has a halfway decent page explaining the concept. The mathematicians amongst you could talk in great detail about the pluses and minuses of this approach over other structures.

In its simplest form, a hierarchical structure starts with the following (a small code sketch after the list illustrates the same idea)

  • A single root document or node.
    Underneath this will be a sequence of other collections/folders/drawers.

    • Like this one
    • And yet another one
      Each of these collections can in turn contain other collections.

      • Like this sub-collection.
      • And this one.
        This hierarchical structure can continue for quite some time. Getting deeper and deeper as you go.
    • And probably another one.
      Best practice guidelines are that each collection should never contain much more than 7±2 elements, as this is a known limitation of short-term memory.
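
Here is the toy sketch of the same idea in Python. It illustrates the general notion of a hierarchy of collections only; it has nothing to do with Blackboard’s actual data model.

    class Node:
        # One document or collection/folder in a hierarchical structure
        def __init__(self, title, children=None):
            self.title = title
            self.children = children or []

        def depth(self):
            # Longest path from this node down to a leaf (a lone root has depth 1)
            if not self.children:
                return 1
            return 1 + max(child.depth() for child in self.children)

    # A toy site: a root node, two collections, one of them nested
    site = Node("Course home", [
        Node("Resources", [Node("Week 1"), Node("Week 2")]),
        Node("Assessment"),
    ])
    print(site.depth())  # 3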

Blackboard’s idea of hierarchy

One of the problems with Blackboard is that its underpinning models don’t always match what people assume from what they see in the interface. This applies somewhat to the hierarchical model underpinning Blackboard.

Normally in a hierarchical structure there is one root document or node at the “top” of the pyramid of content. What Blackboard does instead is give each course site a collection of content areas, one of which you nominate as the “home” page, i.e. the one that appears when people first log in. It’s not really the top of the pyramid.

Let’s get to an example, the image is the home page for the ACCT19064 course.

ACCT19064 home page

Note: I currently have “admin” access on this installation of Blackboard. Some of what appears in the interface is based on that access and is not normally seen by student or other staff users.

The links in the course menu on the left-hand side of the image are (mostly) the top levels of the hierarchical structure of the Blackboard course. There are 13 main areas:

  • Machinimas
  • Hird & Co
  • Feedback
  • Notice Board
  • Discussion Board
  • Group Space
  • Resources Room
  • Assessment Room
  • Instructor Resources
  • Teaching Team
  • External Links
  • Library
  • Helpdesk

The “home page” in the large right hand frame, which would be at the top of the hierarchy if Blackboard followed this practice, is the Announcements page. The link to the announcements page in ACCT19064 is provided by the “Notice board” course menu link.

The other complicating factor is that the course menu links for Helpdesk and Library aren’t really part of the Blackboard course site. They are links to other resources.

Feature: Blackboard allows top level folders to be links to external resources and also ad hoc elements within the course site.

The bit above is a new strategy. Every time I come to something that I think is somewhat strange/unique or a feature of Blackboard, I am going to record it in the text and also on this page.

Feature: The course home page can be set to a selection of “pages” within the site.

Other bits that don’t fit

Underneath the course menu links there are a couple of panels. The content of some of these can be controlled by the coordinator. In the example above the designer has removed as many of the links on these panels as possible. Normally, there would be two links

  • Communication; and
    Links to a page of the default communication tools Blackboard provides each course including: announcements, collaboration, discussion board, group pages, email and class roster.
  • Course tools
    Links to a page of the default course tools (not communication tools) Blackboard provides including: address book, calendar, journal, dictionary, dropbox, glossary….. This list can be supplemented.

The links to these tools are not part of the hierarchical structure of the course. They are always there, though the designer can remove the links. Confusingly, most staff leave these links and so students waste time checking the tools out, even if they aren’t used in the course (and most aren’t).

Feature: Blackboard does not maintain the hierarchy metaphor at all well. It confuses it with “tools”, which sit outside the hierarchy.

The course map feature

To really reinforce the hierarchical nature of a Blackboard course site, Blackboard provides a course map feature, which offers a very familiar “Windows Explorer”-style representation of the structure of a course website. The following image is of part of the course map for the ACCT19064 course site.

Blackboard course map for the ACCT19064 site

What do the course menu links point to?

The links in the course menu can point to the following items

  • Content area
    This is the default content holder in Blackboard. If the designer wants to create a collection of content (HTML, uploaded files, tools etc.) they create a content area. More on these below.
  • Tool link
    This is a link to one of the communication or course tools mentioned above.
  • Course link
    A link to some other page/item within the course, usually within a content area.
  • External link
    A URL to some external resource.

The course menu links for ACCT19064 point to the following

  • Machinimas – content area with 5 elements
  • Hird & Co – content area with 3 elements
  • Feedback – content area with 3 elements
  • Notice Board – link to the announcements tool
  • Discussion Board – direct link to the course level discussion conference
  • Group Space – a content area with 5 elements
  • Resources Room – a content area with 15 or so elements
  • Assessment Room – a content area with 6 elements
  • Instructor Resources – a content area with 1 element (see below)
  • Teaching Team – a link to the “Staff Information” tools
  • External Links – a content area with a number of links
  • Library – a direct link to the Library website.
  • Helpdesk – a mailto: link for the helpdesk

The number of elements I mention in each content area might be wrong. Blackboard supports the controlled release of content in a content area. Some people may not be able to see all of the content in a content area – explained more in the “Controlling Access” section below.

What’s in a content area?

A content area consists of a number of items. The items are displayed one under the other. The following image is of the Assessment Room in the ACCT19064 course site. It has 6 items. Note the alternating background colour used to identify different items.

ACCT19064 assessment room

The edit view link in the top right hand corner appears when the user has permission to edit the content area. This is how you add, modify or remove an item from the content area.

An item in a content area can be one of the following

  • A “content” item
    i.e. something that contains some text, perhaps links to an uploaded file.
  • A folder
    This is how you get the hierarchical structure. A folder creates another level in the hierarchy within/underneath the current content area. This folder contains another content area.
  • An external link.
  • A course link.
  • A link to various tools or services.
    e.g. to tests or a range of different services and tools provided by Blackboard and its add ons.

Each item is associated with a particular “icon”. A folder item will have a small folder icon. A content item will have a simple document icon.

Feature: The icon associated with each item cannot easily be changed, especially for an individual course. Nor can it easily be made invisible, which causes problems for designers.

For example, the following image is from what was intended to be the home page for a Blackboard course. A nice image and text ruined by the document icon in the top left hand corner.

Controlling access

Blackboard provides a facility to limit who can see and access items within a content area and also which links can be seen in the course menu. However, it’s not done consistently.

Feature: Different approaches, with different functionality, are used to restrict access to/viewing of the course menu links (very simple) and individual items in content areas (very complex and fully featured). Restrictions on discussion forums also appear somewhat limited.

The following image is of the “Instructor Resources” content area of the ACCT19064 course site. It is being viewed as a user who is not a member of the staff for this course – actually, not a member of the Blackboard group “Teaching Staff”.

ACCT19064 - Instructor Resources - not staff

What follows is the same page with the same content area. However, it is now viewed as a user who is a member of the “Teaching staff” group.

ACCT19064 Instructors Resources - as staff

Access to items can be restricted in the following ways

  • Visible or not
    A simple switch which says everyone can see it, or they can’t.
  • Date based ranges
    Specify a date/time range in which it is visible.
  • Group based membership
    You can see it if you are part of the right group or in a specified list of users.
  • Assessment related
    You can only see it if you have attempted a piece of assessment or achieved a grade within a specific range.

The specification of rules to restrict access can be combined.

Feature: Access to items can be restricted based on simple on/off, date, group membership, assessment.
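
For illustration only, here is a sketch of how those kinds of restriction might combine into a single visibility check. The field names and structure are invented for the sketch and do not represent Blackboard’s actual adaptive release implementation.

    from datetime import datetime

    def can_view(item, user, now=None, attempted=None):
        # item and user are plain dicts; every field name here is invented
        now = now or datetime.now()
        attempted = attempted or set()           # ids of assessment items the user has attempted
        if not item.get("visible", True):        # simple on/off switch
            return False
        start = item.get("available_from")       # date-based range
        end = item.get("available_until")
        if (start and now < start) or (end and now > end):
            return False
        groups = item.get("groups")              # group-based membership
        if groups and not set(user.get("groups", [])) & set(groups):
            return False
        required = item.get("requires_attempt")  # assessment-related rule
        if required and required not in attempted:
            return False
        return True                              # every rule that was specified passed

    # e.g. quiz solutions released after a date, visible only to a group
    solutions = {"visible": True,
                 "available_from": datetime(2008, 8, 4),
                 "groups": ["Teaching Staff"]}
    print(can_view(solutions, {"groups": ["Teaching Staff"]}))

The point is simply that the different kinds of rule listed above can be combined, which is what makes this sort of restriction both powerful and complex.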

A description of the ACCT19064 site

At first look, the course site is designed as a container for the content and tools used within the course. The design of the course site itself does not inherently provide any guidance as to what the students are meant to do, i.e. there is no study schedule or similar showing up in the course menu links.

However, looking at the announcements for the course, it appears that this type of guidance and support for the students is given by an announcement from the course coordinator on the Sunday at the start of each week. This announcement is very specific. It outlines the individual and team-based tasks which the on-campus and off-campus students are meant to complete. There are also some additional comments, sometimes errata and sometimes the odd bit of advice.

Interestingly, these guidance announcements didn’t link students directly to the tasks.

Breaking down the content of the site

The following describes in more detail the content within each of the course menu links, at least those that point to content areas.

  • Machinimas – content area with 5 elements. There is no adaptive release.
    • Description of the machinimas and their purpose.
    • Four external links to web pages that contain video of the machinima.

    Feature: Blackboard uses breadcrumbs for navigation. Including external web pages can be made more transparent if the breadcrumbs can be re-created on those external pages.

    The machinima pages, with the video playing, look like the following image

    ACCT19064 machinima page

  • Hird & Co – content area with 3 elements. No adaptive release
    This is meant to represent the imaginary audit company used throughout the course

    • External link to an “intranet” site for an imaginary audit company.
    • External link to an external discussion forum used by AIC campuses for discussion about questions prior to face-to-face classes.
    • A link to a Blackboard discussion forum used by the Audit Partner (the coordinator)
  • Feedback – content area with 3 elements
    Actually it’s 4 elements. One is not visible due to adaptive release. The missing element is a course experience survey. This section is entirely set aside to getting feedback from students about the course. I’m guessing it was added late in the term.

    • Content item linking to an information sheet
    • Content item linking to a consent form
    • A link to survey tool for the actual survey
    • An external link with help on an issue with the survey.
  • Notice Board – link to the announcements tool
  • Discussion Board – direct link to the course level discussion conference
  • Group Space – a content area with 5 elements
    • Link to a folder for the “Group Space for Audit Teams” – content area with 3 elements
      • Link to the groups page for the course
        This is one of the Blackboard communication tools that is supported by a group allocation/management system. It allows each group to have a collection of pages/tools which are unique and only accessible by members of the group.

        Typically this includes a group discussion conference, collaboration, file exchange and email.

      • Content item containing an announcement about groups
      • Content item containing details of group allocation – group names and student members
    • Content item linking to a document explaining problems faced by Vista users
    • Link to the Blackboard drop box tool
    • Folder containing feedback for submitted tasks
    • Folder containing declarations for each quiz
  • Resources Room – a content area with 15 or so elements
    Apart from a content item describing solutions to problems for users of Vista, this content area consists entirely of folders used to group resources associated with a particular week or activity. There are

    • 12 folders for each week of term
      These are a collection of folders and content items providing access to various weekly materials such as: eLectures, powerpoint lecture slides, activity sheets as Word docs, weekly quizzes and solutions (available only after a specified time).
    • one containing feedback and facts from previous students,
      A simple collection of content items with pass results and qualitative student feedback.
    • one containing general course materials (e.g. course profile and study guide),
      Two external links to the course profile and study guide.
    • one on auditing standards
      Two external links to the auditing standards applicable to the course.
    • One containing a course revision presentation
  • Assessment Room – a content area with 6 elements
    • Team membership – content item links to a word doc that students must complete and return
    • Personal journal – link to Blackboard journal tool that is used for personal reflection and integrated into assessment and weekly activities.
    • Resources for assessment items 1, 2a and 2b
      Each contains a range of content items, folders and external links pointing to resources specific to each assessment item.
    • Exam information – collection of information about the exam
  • Instructor Resources – a content area with 7 elements, a number under adaptive release
    • Staff discussion forum – external link to an external discussion forum
    • Snapshots for teaching team – collection of Word documents explaining activities/tasks for the various weeks.
    • Teaching materials for lectures – collection of materials for staff to give lectures
    • Teaching materials for the tutorials – ditto for tutorials
    • Teaching materials for the workshops – ditto for workshops
    • Information on Assessment item 2a – misc additional background on assessment item
    • Information on assessment item 2b
  • Teaching Team – a link to the “Staff Information” tools
  • External Links – a content area with a number of links
  • Library – a direct link to the Library website.
  • Helpdesk – a mailto: link for the helpdesk

Overview

A fairly traditional hierarchical design for a course website. Students receive direction on what tasks to do from weekly announcements, not from some fixed “schedule” page.

Heavy use is made of groups.

There are some significant differences between tasks/activities for on-campus versus off-campus students. While educationally appropriate, this does tend to make things more difficult for the coordinator and students, i.e. there have to be two sets of instructions created by the coordinator, and students have to discern which they should follow.

Some use of external discussion forums. Probably due to the limitation in how Blackboard allows discussion forums to be configured. i.e. one discussion conference per course and one discussion conference per group.

Staff information is not integrated with CQU systems, requiring duplication of effort. The same applies to the provision of links to the course profile and, to some extent, lectures.

Evaluating an LMS by understanding the underpinning "model"

Currently, CQUni is undertaking an evaluation of Sakai and Moodle as a replacement for Blackboard as the organisation’s Learning Management System. The evaluation process includes many of the standard activities including

  • Developing a long list of criteria/requirements and comparing each LMS against that criteria.
  • Getting groups of staff and students together to examine/port courses to each system and compare/contrast.

Personally, I feel that while both approaches are worthwhile, they are not sufficient to provide the organisation with enough detail to inform the decision. The main limitation is that neither approach tends to develop a deep understanding of the affordances and limitations of the systems. They always lead to the situation where, after a few months/years of use, people can sit back and say, “We didn’t know it would do X”.

A few months, at least, of using these systems in real life courses would provide better insight but is also prohibitively expensive. This post is the start of an attempt to try another approach, which might improve a bit on the existing approaches.

What is the approach?

The approach is based somewhat on some previous ramblings and on the assumption that an LMS is a piece of information technology. Consequently, it has a set of data structures, algorithms and interfaces that make it either hard or easy to perform particular tasks. The idea is that if you engage with and understand the model, you can get some idea of which tasks are easy and which are difficult.

Now, there is an awful lot of distance between saying that and actually doing it. I’m not claiming that the following posts are going to achieve anywhere near what is possible to make this work effectively. My current context doesn’t allow it.

At best this approach is going to start developing some ideas of what needs to be done and which I didn’t do. Hopefully it might “light” the way, a bit.

Using the concept elsewhere

We’ve actually been working on this approach as a basis for staff development in using an LMS, based on the assumption that understanding the basics of the model will make the system somewhat easier for folk to use. The first attempt at this is the slidecast prepared by Col Beer and shown below.

Blackboard@CQ Uni


What will be done?

Given time constraints I can only work with a single course, from a single designer. More courses, especially ones that are different, would be better, but I have to live with one. I’ve tried to choose one that is likely to test a broader range of features of the LMSes to minimise this. But the approach is still inherently limited by this small sample set.

The chosen course is the T2, 2008 offering of ACCT19064, Auditing and Professional Practice. For 2008 this course underwent a complete re-design driven by two talented and committed members of staff – Nona Muldoon (an instructional designer) and Jenny Kofoed (an accounting academic). As part of this re-design they made use of machinima produced in Second Life. The re-design was found to be particularly successful and has been written about.

The basic steps will be

  1. Explain the model underpinning Blackboard and how it is used within the course.
  2. Seek to understand and explain the model underpinning Moodle and then Sakai.
  3. Identify any differences between the models and how they might impact course design.

Hopefully, all things being equal, you should see a list of posts describing these steps linked below.
