Adding more student information to a Moodle course

moreStudentDetails.user.js is a Greasemonkey script I’ve written to provide more details about a student when I’m using Moodle. It was originally intended to help when responding to student queries in a course I teach that regularly has 300+ pre-service teachers from a range of backgrounds and locations. The current version produces something like the following image (click on it to see a larger version).

MAV-based more user details

The script adds a link titled [details] to the Moodle page whenever it finds a link to a user profile (see above). When you click on that link a small dialog box pops up with some more student details. For my purposes, I’m particularly interested in what type of pre-service teacher they are and their mode/campus.

This script uses much the same technology as the gradebook fix mentioned in this post and @damoclarky’s Moodle Activity Viewer. The work on these scripts is part of an on-going project to identify some theories/principles that can be used to enhance institutional e-learning (see this paper for early development of these ideas).

The rest of this post is divided into two parts

  1. Recent developments – documents thinking about how to transform this simple script into something that provides more useful and specific process analytics (see this post for a definition of process analytics). Also documents early attempts to share this script via github.
  2. Initial development – a development diary of early steps in developing this script.

Recent developments

Sharing via github

Have just created the BAD repository on github. It currently hosts two scripts

  1. gradebookFix.user.js – briefly mentioned in this post, this script modifies the Peoplesoft gradebook to highlight special cases.
  2. moreStudentDetails.user.js – the script described here. Only the client script, not the server at the moment.

Much of this code is still quite ugly and probably not at all useful to others (though gradebookFix.user.js should be useful to any course examiner from USQ).

Creating the repository at the moment is more about having the scripts under source control, stored off my laptop and to start playing with the process and mechanisms of sharing these types of scripts.

The name “BAD” is based on the BAD (Bricolage/Affordances/Distributed) mindset formulated in the paper.

Extending it to include process analytics

Lockyer et al (2013) define process analytics as analytics that “provide direct insight into learner information processing and application” (p. 1448). i.e. analysis and representations that provide some additional detail about how the learning is progressing. I’m keen to add more of this to the “more student details” script. The following explains what I’d like to add and some reflection about how this might be best done with the technologies available.

As it happens, @Edu_K has just commented on a post and described nicely what I’m trying to achieve

I like your idea of in-built LA functions into the existing tools. This can help their use to adjust teaching “on-the-go” in response to needs of the particular cohort – which is one of the most important abilities of a good educator

The plan

I’m looking to add two additional groups of information about students, specific to this course, to the dialog box

  1. Activity completion; and,
    Each week of the course has a learning path of set activities. Students get some marks for completing these activities and Moodle’s activity completion functionality is used to track their work. Having a usable summary of each student’s activity completion available in this dialog would help understand where they are up to in the learning path.
  2. Blog post activity.
    The course requires the students to create and post to their own external blog. The BIM Moodle module is used to mirror blog posts and help award marks to students based on # of posts, word count etc. Adding a summary of the student’s blog posts, related statistics and perhaps other analytics (e.g. emotion etc) could also be useful.

The mockup

This will probably involve some fairly advanced jQuery work – something I’m new to – hence the need to start with a mockup. Once the design is sort of working I’ll post this and a subsequent post will pick up the coding.

The initial mockup (ugly colour scheme and all) can be seen in the following image. Or you can actually play with the mockup here.

moreStudentDetails

What the mockup above shows is a visual representation of the activities the student has completed (or not). Some explanation:

  • There are 3 modules.
    Each module in the above is coloured from green (most/all complete) through yellow (a fair bit complete) down to red (not much complete). Initially you can only see the summary of the module completion. But you can drill down.
  • Each module has 3 or 4 weeks.
    The above shows Module 1 expanded to its three weeks. Each of the weeks are also colour coded based on the weekly activities that have been completed.
  • Each week has a number of activities.
    The above shows Week 2 expanded to show its 5 activities. 2 are completed and are in green. 3 aren’t. The completed activities include the date/time when they were completed and also the week of semester in which that date occurs. The real version would have those activity names as links to the actual activity.
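The traffic-light colouring described above might be sketched as a small helper. This is only a sketch; the threshold values are my own guesses, not the mockup’s actual cut-offs.

```javascript
// Map a completion fraction (0..1) to the traffic-light colours used in
// the mockup: green (most/all complete), yellow (a fair bit), red (not much).
// The 0.8 and 0.4 thresholds are illustrative guesses.
function completionColour(fraction) {
  if (fraction >= 0.8) return "green";
  if (fraction >= 0.4) return "yellow";
  return "red";
}

// A module (or week) is coloured by the fraction of its activities completed.
function moduleColour(completed, total) {
  return completionColour(total === 0 ? 0 : completed / total);
}
```

Under these guessed thresholds, a week with 2 of 5 activities complete would come out yellow.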

Initial development

The following is a description of a long-gestating approach to solving a problem I have when teaching: knowing a bit more about my students when I’m replying to a query on a discussion forum in a Moodle course. It describes a modification to the Moodle Activity Viewer (MAV) to solve this problem.

What I did

  1. Fork a new version of the MAV code.
  2. CLIENT: Get MAV running only on my course.
  3. Figure out how it will all work
  4. CLIENT: Get the data to send to the server (user ids) on the current page.
  5. CLIENT: Send that information to the server.
  6. CLIENT: Figure out the popup.
  7. SERVER: Return a collection of HTML to the client.
  8. CLIENT: Add a popup to the moodle page for each user link.

    Yep Damo and Rolley, going with the kludge first up.

Add a new link for people to click on and use that

This does it.

[code lang="javascript"]
$(element).after('<a id="user_'+userID+'" class="makealink"><small>&nbsp;more&nbsp;</small></a>' );
[/code]

But the problem is that there can be multiple such links (e.g. one around the image on a forum post). I may not want to add a link on all of them. Plus there are some other issues with passing values. Here’s what works now.

[code lang="javascript"]
$(element).after('<a data="'+userID+'" class="showMoreUserDetailsLink"><small>&nbsp;[details]&nbsp;</small></a>' );
$(".showMoreUserDetailsLink").click(
  function() {
    var id = $(this).attr("data");
    getUserDetails(id);
  }
);
[/code]

OUTSTANDING: Still have to limit the situations where this is added.
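One way to limit where the [details] link gets added might be to check that the href really is a user profile link for the current course before decorating it. A sketch (the function name and the hard-coded course id would be hypothetical):

```javascript
// True only for links to a user profile in the given course, i.e. URLs of
// the form .../user/view.php?id=<userid>&course=<courseid>.
function isUserProfileLink(href, courseId) {
  var match = /\/user\/view\.php\?id=(\d+)&course=(\d+)/.exec(href);
  return match !== null && match[2] === String(courseId);
}
```

The click-binding code above could then be run only over anchors for which this returns true.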

Get some data from the server

  1. Create an empty server that returns nothing.
    [code lang="php"]
    $html = "<h3>Getting data from the server</h3>";

    header('Content-Type: application/json');

    echo json_encode($html);

    if (getenv('DEBUG'))
      error_log('html=' . json_encode($html));
    [/code]

  2. Update the client to query the server.

    Copied an existing method. It passes the user id and displays information back from the server. Pared back the message length and it’s working well.

  3. Create the database tables for users for the MAV server.

    The main issue here is that I’m dealing with two separate Moodle databases with different user ids. Two steps are required here on my local Moodle database:

    1. Create a table to map between ids.

      Need to extract the list of user ids from the institution, match them with the local ids and stick them in the database.

      The enrolled users report and some regular expression magic in vi etc gets me a list of ID and name in a text file.

      Rather than create a new table, the kludge is to add a “usqMoodleId” column to the mdl_user table on the local server.

    2. Create the table(s) required to store the additional information.
  4. Have the server extract and return real data.

* Modify the server to return specific data for each user
* Map the ids from study desk to my database
* Only add the [details] link for specific links and only for links associated with this course?
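The client side of these steps has to bundle the user ids into the JSON it sends to the MAV server. A hypothetical sketch of assembling that payload; the field names mirror the JSON the client actually sends, but the helper name and the settings values are illustrative:

```javascript
// Build the request body sent to the MAV server: the course link plus the
// user ids found on the page, keyed by id. Version/settings values are
// illustrative, copied from the payload shown elsewhere in this post.
function buildRequest(courseLink, userIds) {
  var links = {};
  userIds.forEach(function (id) { links[id] = String(id); });
  return {
    mavVersion: "0.5.4",
    settings: { activityType: "C", displayMode: "C", groups: [0] },
    courselink: courseLink,
    links: links
  };
}
```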

Fork a new version

This is a kludge. Not making this pretty so a new directory and start from scratch.

Only run on my course

If the method balmi.getCoursePageLink returns NULL, MAV doesn’t work.

I’ve modified this to return NULL if the Moodle course ID for the page doesn’t match the ID for my current course. Obviously this would need to be more general in the future.

How will it work

Basic plan is

  • Update the initial Moodle page: detect any links to user/view.php and bind a hover event on each link to a function.
  • That function will pass the user id to the MAV server, get some HTML back and generate a dialog box.

Get the dialog box working

First test is to modify the links and get the dialog box appearing without any interaction with the server.

Get the data to send to the server

The idea is that MAV will extract the Moodle user ids it finds in the current page. If there aren’t any, there’s nothing to do. If there are some, it has to bundle them up and send them to the MAV server to get additional data about each user. To do that it has to recognise the user profile links and then extract each URL.

User profile links are typically of the following form

moodle URL/user/view.php?id=userid&course=courseid

That should be fairly easy to recognise, and the existing balmi.getMoodleLinks should serve as a template.

Change the name to getMoodleUserLinks and fiddle with the regular expressions to focus on the user links. That’s working.

Some stuffing around to extract the user id thanks to my limited knowledge of Javascript.
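The extraction might look something like the following. This is a simplified stand-in for what getMoodleUserLinks does, not MAV’s actual code:

```javascript
// Given the hrefs found on the page, return the user ids from any user
// profile links, keyed by id (the shape the MAV server expects).
// A simplified stand-in for getMoodleUserLinks.
function extractUserIds(hrefs) {
  var links = {};
  hrefs.forEach(function (href) {
    var match = /\/user\/view\.php\?id=(\d+)/.exec(href);
    if (match) {
      links[match[1]] = match[1];
    }
  });
  return links;
}
```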

As it stands with just these changes, the client is sending the following JSON to the server

[code lang="javascript"]
{ "mavVersion":"0.5.4",
  "settings":{"activityType":"C","displayMode":"C","groups":[0]},
  "courselink":"http://usqstudydesk.usq.edu.au/m2/course/view.php?id=4688",
  "links":{"1093":"1093","18474":"18474","6622":"6622"}
}
[/code]

In a proper development I’d actually change all this, but I need to get this working. Actually I will change it slightly.

Client/Server

Modify the request so that it’s going to the right server.

New server (API) getUserProfile.php

Figure out the popup

This is the bit that will stretch my non-existent jQuery skills. How to modify the Moodle page to add the dialog/popup I want for each bit of user data passed back from the server?

Apparently, I’ll be using the jQuery dialog widget and apparently the getStudentAccess method is a useful template. Of course that threw me a bit until I realised I should use it as a model to modify the requestData method from the original MAV that I’m kludging.

Established versus Affordances: part of the reason institutional e-learning is like teenage sex

The following is an attempt to expand and connect some thoughts from a soon to be presented ASCILITE’2014 paper titled “Breaking BAD to bridge the reality/rhetoric chasm” (this link will eventually have a range of additional resources). The “expand” part of this post is me trying to experiment with some approaches to explaining what we’re trying to get at, hopefully with the aim of being convincing. The “connect” part aims to connect with some of the discussion about the LMS that has gone on recently.

Is elearning like teenage sex?

Some context

The paper draws on our experience in an attempt to identify some reasons why institutional e-learning is like teenage sex (you’ll soon see why I’m reluctant to just say bad). In doing so, we’re proposing two mindsets we’ve seen/used that underpin institutional e-learning. They are

  1. the SET mindset – Strategic, Established, and Tree-like.
    The most common (and the cause of all the problems, we think).

  2. the BAD mindset – Bricolage, Affordances, and Distribution.
    Which better describes the mindset we use in our own practice.

Even though these two mindsets are incommensurable, we think that institutional e-learning is too SET in its thinking and needs to break BAD a little (maybe a lot) more often. This table summarises and compares the two mindsets.

This post – as the title suggests – is going to look at the 2nd of the three components of these mindsets, i.e. how Information and Communication Technologies (ICT, mostly software) are perceived.

The SET mindset sees ICT as Established. With this mindset, ICTs are a hard technology and cannot be changed. Instead, people and their practices must be modified to fit the fixed functionality of the technology.

The BAD mindset sees ICT as having Affordances for change. ICT is a soft technology that can and should be modified to meet the needs of its users, their context, and what they would like to achieve.

The paper/presentation seeks to illustrate these mindsets and their components by drawing on challenges that @damoclarky (co-author) and I have faced and how we’ve worked around them. The point is that these examples are indicative of broader problems. These won’t be the only examples of such problems, there will be many more. The paper uses two other much larger examples from our experience.

The point is that the SET mindset underpinning most institutional e-learning makes it difficult, if not impossible, to be aware of let alone respond to these problems.

A practical example – Peoplesoft gradebook

My current institution (and my prior institution) had the misfortune to choose to implement the Peoplesoft ERP at the turn of the century. Many millions of dollars later, we are lumbered with using this conglomeration of tools for many tasks, including the processing of final results and grades at the end of semester. A task that is about to start.

In my situation, all of the results for students are entered into a local online assignment management system that is also used for the submission, marking etc of student assignments. Once a student’s assignment is marked, moderated and returned to the student, their mark for that assignment is placed into the Peoplesoft gradebook. Once the final assignment is moderated and returned I can view the Peoplesoft gradebook and see something like the following. Listing all the students, their names/ids (blurred here), the final result and the grade awarded. (Click on the images to see larger versions)

gradebook

Now the Peoplesoft gradebook has been configured to do something intelligent. It will automatically calculate the student’s grade based on the result. And it is mostly, but NOT always, correct. You see, there are special cases that have not been programmed into the gradebook, including:

  1. rounding up;
    If a student is within 0.5% of a grade borderline (e.g. 84.6%) then they should be upgraded to the next grade.
  2. supplementary;
    If a student is within 5% of a fail mark, then they should be given the appropriate form of supplementary grade.
  3. missing compulsory result;
    If a student has not yet received a mark for a compulsory assignment, then they should be given a “result outstanding” grade.
  4. fail not completed; and,
    If a student has failed a course, but hasn’t completed all of the assessment, then they should get a “fail not completed” grade.
  5. fail not passed;
    If a student hasn’t completed any of the assessment items, then they should get a “fail not passed” grade.

Since none of these special cases are handled by Peoplesoft, the course examiner must do it manually. The course examiner is expected to scroll through the gradebook looking for students who fall into these categories. When identified, the examiner then changes the grade to the appropriate value and may be required to add a note explaining the change.

The course I teach in first semester has 300+ students spread over four different modes. This means I have to scroll through four different lists (one list per mode), some with 100s of students, manually searching for special cases. As shown in the image above, the gradebook page does not show any other information, such as whether the student has submitted all assessment items. Or, in my case, whether a result for the Professional Experience item (managed by another section) has been received. Not only do I have to manually look through a web page of 100+ students, I have to be manually checking other data sources in order to make the change.

And my course is by no means the largest course.

To help with this process, various actors within the institution will every semester or so generate sets of instructions about how to use the Peoplesoft gradebook, reminding people of the special cases. Every semester several other staff members have to run checks on the changes made (or not) by the examiners to identify potential problems.

Why is this a problem?

Human beings are very bad at repetitive, low-level tasks like this. Computers are very good at it. In this case, the Peoplesoft system is only doing part of the necessary job.

This isn’t a particularly learner-focused example, but the suggestion here is that these types of problem are littered throughout the learner/teacher experience of institutional e-learning. This particular problem is interesting because I’ve just solved it.

Why can’t it be solved?

But I’m not holding my breath for the institution to solve this problem. My argument is that the SET mindset that underpins how it does business will significantly limit its capability to recognise this problem, let alone fix it.

In part, this is because organisations increasingly see ICT as established, i.e. not something they can change. ICT – especially large, complex enterprise systems like an ERP or an LMS – is something that should be implemented vanilla, as is. This is best-practice advice in the research and practitioner literature. ICT is seen as very hard and expensive to change, a view well illustrated by my favourite quote from an IT person participating in an LMS review process

we should seek to change people’s behavior because information technology systems are difficult to change (Sturgess and Nouwens, 2004, n.p)

Even though I detest Peoplesoft, I have to imagine that there is some mechanism within it to program these sorts of special cases into the gradebook. But the system owner of Peoplesoft is the student administration part of the institution, in a separate branch of the organisational tree (hierarchy) from the academics and other staff who are carrying the cost of doing this manually (this links to the 3rd component of SET/BAD).

But a problem here is also the assumption that it is the Peoplesoft gradebook that has to change. It seems obvious: the gradebook isn’t providing some functionality, so the expensive ERP needs to be blamed and fixed. But ERPs are hard (i.e. expensive) to change and can only be changed by qualified people who are expensive and scarce, and who can only be used for important – as defined by the system owner – work.

The net effect is that it is very difficult (if not impossible) to change the gradebook. It is Established. It’s likely that this view is so fundamental that the possibility of using ICT to fix this problem wouldn’t even be considered.

This perspective of ICT reminds me of this quote from Churchill.

Churchill on established mindset

The established mindset results in us and what we can (and can’t) do being shaped by our technologies.

How might the BAD mindset solve the problem?

With a BAD mindset you might get a solution that generates the web page shown in the following image. In that image you can see that two of the rows (students) are now coloured differently. The change in colour represents one of the special cases mentioned above. You’ll also see that the coloured rows have some additional hints to remind the course examiner either: what they need to do; or, what they’ve already done.

gradebook2

This solution did not involve any changes to Peoplesoft. As established above, it is too hard to change and I don’t have the access or knowledge to change it. However, there are other possibilities. At some stage the gradebook generates a web page and sends it to the web browser on my computer. At that point, I can get some level of affordance for change via Greasemonkey, a plugin for the Firefox browser. Greasemonkey allows you to write scripts (in Javascript) that can manipulate the presentation and functionality of web pages.

Using Greasemonkey, I’ve been able to write a script that

  1. Recognises when a web page generated by the Peoplesoft gradebook appears;
  2. Looks through each of the students in the table looking for the special cases; and,
  3. When it finds one, the script changes the colour of the row and adds a hint about what should be done by the examiner.
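The heart of step 2 might be sketched as the following classifier. The pass mark, grade borders and labels here are illustrative assumptions, not the script’s actual values:

```javascript
// Classify a student's row against the special cases described earlier.
// student: { mark, allSubmitted, anySubmitted, compulsoryMissing }.
// Cut-offs (pass mark 50, borders 65/75/85) are hypothetical.
function specialCase(student) {
  var passMark = 50;
  var borders = [65, 75, 85];
  if (student.compulsoryMissing) return "result outstanding";
  if (student.mark >= passMark) {
    // within 0.5% below a higher grade borderline => round up
    for (var i = 0; i < borders.length; i++) {
      if (student.mark >= borders[i] - 0.5 && student.mark < borders[i]) {
        return "round up";
      }
    }
    return "none";
  }
  if (!student.anySubmitted) return "fail not passed";
  if (!student.allSubmitted) return "fail not completed";
  if (student.mark >= passMark - 5) return "supplementary";
  return "none";
}
```

The script would then colour the row and add a hint to the examiner based on the returned label.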

The protean nature of ICT has been leveraged to increase the affordances offered to the user.

In theory, I can share this Greasemonkey script and it can be installed by anyone. Assuming that the default institutional operating system hasn’t been SET in stone so that you can’t install a Firefox plugin.

The Affordance mindset

The term affordance comes in many shapes, sizes and arguments. In this context, an affordance is seen as a relation between the abilities of individuals/organisations and the features of ICT. In the BAD mindset there are two important implications around the idea of affordance. They are

  1. ICTs generally have an affordance for being protean/mutable.
    Echoing Kay’s (1984) discussion of the “protean nature of the computer” (p. 59) as “the first metamedium, and as such has degrees of freedom and expression never before encountered”. But as affordances are relational, this affordance is only perceived by those who can “program”. Echoing Rushkoff’s (2010) sentiment that digital technology is “biased toward those with the capacity to write code” (p. 128). The problem is that most universities are increasingly getting rid of the people who have the ability to “write code” or, more generally, to perceive ICT as protean.
    Since I can write code and have access to Greasemonkey, I can use Greasemonkey to manipulate the output of Peoplesoft into a form of expression that is more appropriate.
  2. ICTs should be continually refined to maximise the affordances they offer.
    i.e. people shouldn’t have to reshape their hand to fit the tool. The tools they have should actively help and transform what is done. This process should tend to recognise the diversity inherent in people.

In short, the affordance mindset not only knows that ICT can be changed, it should be changed to suit the purposes of the people using it. Or to adapt the Churchill quote above to the affordance mindset.

Churchill modified - affordance of ICTs

This mindset is not new. People have always done it. There are in fact entire sections of literature devoted to talking about people doing this. For example, there’s literature on “shadow systems” in the information systems field that sees such practices as a threat/risk to organisational integrity, efficiency and apple pie. There’s a similar literature – mostly in computer science and management – under terms such as work-arounds, make-work and kludges (Koopman & Hoffman, 2003).

The increasingly digital nature of life and the increasing availability and functionality of enabling technologies (like Greasemonkey) are only making the affordance mindset more widely available. The problem is that many organisations are yet to recognise it.

Implications

Our argument is that unless an institution can adopt more of an Affordance approach, rather than an Established approach to ICT, it’s unlikely to make any progress in bridging the chasm between the reality and rhetoric around e-learning. Its e-learning will continue to be more like teenage sex.

However, we don’t want to stop there. One of the aims of this work is to try and understand how to bridge the chasm. To explore how institutional e-learning can break BAD. What might that look like and what might be the impacts?

Changing any mindset is far from easy.

Can “scratch my itch” become “scratch our itch”?

The above example – like most current examples of the affordance mindset – arises from the work of an individual or small group scratching an itch particular to them. They are interested in solving their own problem with the tools they have to hand (bricolage, which is the B in the BAD mindset and the topic of a later post). A problem with this approach is how that personal scratching of an itch can become something that is usable by more (or perhaps most) people in an organisation.

One approach is for organisations to focus less on the assumption that central ICT staff are able to develop/select “perfect” ICT that don’t need to change, and instead focus on “ways to make work-arounds easier for users to create, document and share” (Koopman & Hoffman, 2003, p. 74) through organisational “settings, and systems – arranged so that invention and prototyping by end-users can flourish” (Ciborra, 1992, p. 305).

This appears to involve at least two steps in terms of technologies

  1. Providing technologies etc. that make it easier to scratch an itch (create).
    e.g. an institution providing appropriate APIs for its data and services.
  2. Providing technologies etc. that make it easier to share “scratches” (document and share)
    e.g. something like User scripts for Greasemonkey. A place where people from across the institution (and perhaps broader) can share, comment and rate “scratches”

The threat of mobile and other closed systems?

In this example and all the other examples we’ve developed so far, we have relied on Greasemonkey to provide the duct tape we use to renovate the institutional systems. Greasemonkey works in a Web environment. The increasing push to mobile devices and away from the web has some implications for this type of work. Specific apps are not a very open system; they don’t offer much in the way of affordances. The apparent death of RSS is indicative of a preference among some technology providers for integrated/closed/established systems, rather than open systems.

Growing digital renovators and builders

Since affordance is a relational concept, simply providing the technology isn’t enough. Every institution will have its own “Fred in the shed” or “Lone Ranger” who has been scratching their own itch. But the majority of staff can’t, don’t or won’t. The Horizon reports identify the apparent low digital fluency of academics as a major impediment to the quality use of ICT in learning and teaching (Johnson, Adams Becker, Estrada, & Freeman, 2014). If academics are perceived as not being able to use the current technology, what hope do they have of being able to modify it?

One perspective might be to consider the implications of what it means for affordance to be relational. Blaming the limited digital fluency of academics is only looking at one side of the relationship. Sure, there may be an issue with digital fluency, and the availability heuristic means that just about everyone has a story about a “dumb academic” to illustrate this. However, there may also be an issue (as shown above) with the quality of the technologies (and the support systems, processes and policies surrounding those technologies) they are required to use.

There’s also a question about whether or not current definitions of digital fluency are sufficient. Is a focus on the ability to use the institutionally provided ICT effectively sufficient for fluency? It might appeal to institutional senior management who just want their staff to use the ICT they have been provided with. But, as argued above, ICTs are protean; a key advantage of ICT is the ability to be modified and changed to suit requirements. Is there a case to be made that being digitally fluent may extend to include the ability to modify ICT to suit your purpose?

Not to suggest that everyone needs to have an undergraduate degree in software engineering. Not everyone in the digital world needs to be a digital builder. But perhaps being digitally fluent does mean that you should have some skills in digital renovation. This capability may be especially important if you are a teacher. Shulman (1987) described the distinguishing knowledge of a teacher as the capacity

to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

If learning and teaching are increasingly taking place in a digital world, then having the ability to transform content knowledge into pedagogically powerful yet adaptive forms would seem to imply some capacity as a digital renovator.

Of course this capacity should not be limited to teachers. Learners and learning in a digital world would also seem likely to benefit from the capacity to digitally renovate their digital spaces. Which in turn brings up the question of “their digital spaces”. Not digital spaces provided by the institution, but spaces that learners and teachers own and control.

This touches on some of the issues raised by @audreywatters in The Future of Education: Programmed or Programmable. The differences between the Established and Affordances views of ICT seem to be a partial contributor to the Programmed Instruction and Programmable Web models of ed-tech Audrey mentions. The Established view “hard-codes” the functionality and content of learning. The Affordances view sees ICT as a space for co-creation and connection.

Agency versus structure

In my current institution it’s quite common for the topic of ICT to arise amongst any gathering of staff. The tone of the conversation is almost always negative. It doesn’t work. Things break at the worst possible time, and there are sessions of woe-sharing arising from the common experience of make-work caused by the limitations of systems. There are on-going complaints about yet another change: this system, an upgrade of Moodle, a new assignment management system or a new look and feel for course sites. A litany of change being done to staff because of the needs of technology. Technology is shaping us.

Such structure would appear to be sucking away the agency of teaching staff. Capable and confident face-to-face teaching staff are struggling with systems that require their teaching practice to be shoe-horned into the inappropriate capabilities offered by the functionality of institutional systems. Teaching staff can see what they’d like to be able to do better, but are unable to do it. The ICT is shaping them and their practice, with no apparent capacity for them to shape the digital space in which their learning and teaching takes place.

How widespread is this view? What impact does it have on their identity and capability as a teacher? What impact does it have on the quality of learning and teaching?

What would happen if they had the capacity to shape the ICT? Even if just a little bit more. Would that increased sense of agency add something to the quality of learning and teaching?

Not ready for digitally fluent staff

The Horizon Report’s identification of the limited digital literacy of academic staff as a significant barrier begs a range of questions. Accepting the premise, and putting aside questions over what it means to be digitally literate or fluent, I wonder: are universities ready for digitally fluent staff? Would digitally fluent staff be willing to accept an organisation having an Established view of ICT, or would they expect an Affordances view of ICT?

You want digitally fluent faculty?

References

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas. Retrieved from http://www.nmc.org/publications/2014-horizon-report-higher-ed

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70–75. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1249172

Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–21. Retrieved from http://her.hepg.org/index/J463W79R56455411.pdf

Some more tweaks to gradebook

This is a development log of a few additions to the recent fixes to the Peoplesoft gradebook, documenting attempts to implement the following

  1. Highlight students in the supp range DONE
    Students with a mark between 44.5 and 49.5 need the grade to be set to IM and a note inserted.
  2. PE overrides DONE
    Courses with a professional experience component need to have the default FNC overridden when there is no PE mark yet. A comment also needs inserting.

The second is harder to implement because the page being updated doesn’t contain the PE mark.

And thanks to a “communique” there’s a more complete set of “guidelines on the allocation of marks and final grades”, which lists

  1. Round up when total marks are close to grade cut-offs – DONE.
  2. Review where the total marks are close to the passing grade cut-off.

    This is the “supp” range task.

  3. Review where the total marks are close to higher grade cut-offs.

    This appears to be a duplication of #1, but includes the phrase “review the performance”. I wonder why, if the mark has already been rounded up?

  4. Allocation of supplementary grades. – DONE

    Any student within 5% below the passing mark who has completed all assessment can be given a supplementary IS/IM. Similar to the “PE overrides” above in terms of how it would have to work.
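The rounding rule in #1 can be sketched as a small helper. A minimal version, where the cut-off values and the 0.5 tolerance are my assumptions for illustration, not the institution's actual boundaries:

```javascript
// Hypothetical sketch: round a total mark up to the nearest grade
// cut-off when it falls within 0.5 of that cut-off. The cut-off
// values and the tolerance here are assumptions.
var CUT_OFFS = [ 50, 65, 75, 85 ];

function roundUpToCutOff( mark ) {
    for ( var i = 0; i < CUT_OFFS.length; i++ ) {
        var cutOff = CUT_OFFS[i];
        if ( mark < cutOff && cutOff - mark <= 0.5 ) {
            return cutOff;
        }
    }
    // not close to any cut-off -- leave the mark unchanged
    return mark;
}
```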

Left to do

  1. Handle the process of saving
    Once you save the gradebook, it won’t run the update again to show the changes. More kludging to do I feel. Appears to be due to the fact that the “saving” process takes a long time and this defeats the “pause before running” kludge currently used to update the rows.
  2. Give advice on the different types of fails.
    If a student has submitted all assessment items but failed the course, they should get an F.
    If they submitted some, but not all assessment items, then the grade should be a FNC.
    If they didn’t submit any assessment items, the grade should be a FNP.

    The process used to check for PE could be expanded to handle this.

  3. Checking on institutional MOE
    Can they install Firefox/Greasemonkey?
  4. Checking on the process used by others
    The sequence of steps I use in the gradebook may not match what others use. Observe what they do.
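The fail-type advice in #2 above is essentially a three-way classification. A minimal sketch, assuming counts of submitted and total assessment items are available (the function name is mine):

```javascript
// Sketch: choose a fail grade based on how many assessment items
// were submitted, following the rules described above.
function failGrade( submitted, total ) {
    if ( submitted === 0 ) {
        return "FNP";   // didn't submit any assessment items
    } else if ( submitted < total ) {
        return "FNC";   // submitted some, but not all
    }
    return "F";         // submitted everything, but failed
}
```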

Supp range

Basic task is to

  1. Recognise students in the grade range.
    Easy.
    [code lang="javascript"]
    } else if ( isSuppRange( rawResult )) {
        changeBackground( element, studentNum, "#FFCC00" );
    }
    [/code]

  2. Change the background color.
    Let’s go #FFCC00.
  3. Add in some reminder about adding a note?
    The div win0divSTDNT_GRADE_HDR_EMPLID$3 (where 3 is the student number) seems like a good place to add the warning/explanation.

    Have identified the element in the script. Need to figure out how to add some text into it. Showing up as XrayWrapper object HTMLDivElement. Ahh, just how simple it is when you aren’t ignorant.
    [code lang="javascript"]
    frame.getElementById( id ).insertAdjacentHTML( 'beforeend', newHTML );
    [/code]

  4. Check all the options
    • Award a supplementary grade DONE
    • Upgrade on the border line – no change DONE
    • Upgrade on the border line – change made DONE
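The isSuppRange function itself isn't shown above. A minimal version, assuming the 44.5 to 49.5 range from the guidelines and that rawResult arrives as a string from the page:

```javascript
// Sketch: is this mark in the supplementary range (44.5 to 49.5)?
// Boundaries come from the guidelines quoted earlier.
function isSuppRange( rawResult ) {
    var mark = parseFloat( rawResult );
    return ! isNaN( mark ) && mark >= 44.5 && mark <= 49.5;
}
```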

PE overrides

The requirement here is to

  1. Detect any student that doesn’t have a result for “practicum”.
  2. Advise that the mark needs to change to a “result outstanding”.

This is complicated because the page on which staff can change the result, does not contain any information about the practicum result. It does appear on the first gradebook page displayed, but not the change page. So the script will need to

  1. Save the practicum result for all the students on the first page.
    It appears that the GM_setValue function is a way to do this. When the first page is loaded, the values will need to be extracted and stored. So sub-steps

    • Detect that the first page has been loaded.
      So how to identify the first page?

      Actually, not really interested in if it is the first page. Just look for if it has the word PRACTICUM in the header of a specific table.

      [code lang="javascript"]
      var description = 1;
      var id = "DERIVED_LAM_ASSIGNMENT_DESCR_" + description + "$0";
      var name = frame.getElementById( id );

      // loop through all the assignment descriptions for the first row
      while ( name ) {
          var rawResult = name["textContent"];

          if ( rawResult == "PRACTICUM" ) {
              return true;
          }
          // move on to the next assignment description
          description++;
          id = "DERIVED_LAM_ASSIGNMENT_DESCR_" + description + "$0";
          name = frame.getElementById( id );
      }
      return false;
      [/code]

    • Extract the practicum values
      Will need to extract the column number for the practicum results …it will be located in an input box with the id DERIVED_LAM_GRADE_1$0 where 1 is the column (first column with results – and matching the column in which practicum was found) and 0 is the student number in the row.

      Will need to extract the matching ID number so that the practicum result is saved for that student. The student’s ID number is located in a span with the id STDNT_GRADE_HDR_EMPLID$0

      [code lang="javascript"]
      var studentNum = 0;
      var peResultID = "DERIVED_LAM_GRADE_" + column + "$" + studentNum;
      var peResultElement = frame.getElementById( peResultID );
      var studentID = "STDNT_GRADE_HDR_EMPLID$" + studentNum;
      var studentElement = frame.getElementById( studentID );

      // loop through all the rows in the table
      while ( peResultElement ) {
          var rawResult = peResultElement.value;
          var studentRaw = studentElement["textContent"];

          var id = "STUDENT_peResult_" + studentRaw;
          GM_setValue( id, rawResult );

          // move on to the next row
          studentNum++;
          peResultID = "DERIVED_LAM_GRADE_" + column + "$" + studentNum;
          peResultElement = frame.getElementById( peResultID );
          studentID = "STDNT_GRADE_HDR_EMPLID$" + studentNum;
          studentElement = frame.getElementById( studentID );
      }
      [/code]

    • Save them.
      The question now is how and what to save. Perhaps the aim here is only to save those students who do NOT have a result? Or should we save them all? i.e. actually save a value for all STUDENT_peResult_id
      The code above has the modified version.
  2. Use that stored information on the change page.
    When we’re checking the other page, need to add in a getValue call to test it.
  3. Should think about deleting the values when the script is on the first page, just to make sure there aren’t any left-overs. But if you have to go to this page first, then it should be ok as each value gets overwritten.
    [code lang="javascript"]
    var keys = GM_listValues();
    for ( var i = 0; i < keys.length; i++ ) {
        if ( keys[i].match( /^STUDENT_peResult_/ ) ) {
            GM_deleteValue( keys[i] );
        }
    }
    [/code]
  4. PE results missing for all.
    PE results don’t get entered until late in the process. So if you visit the gradebook before this, you get the PE result overriding everything else. Need to prevent this from happening, i.e. if there are no PE results for anyone, then don’t display this warning.
    Done.
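Pulled together, the check on the change page amounts to: look up the stored practicum result for the student and warn only when a value was saved but is empty. A sketch with the Greasemonkey storage call factored out so the logic can be tested outside the browser; all names other than GM_getValue are mine:

```javascript
// Sketch: should the "result outstanding" warning be shown for this
// student? getValue is injected so the logic is testable outside
// Greasemonkey; in the userscript it would be GM_getValue.
function needsPEOverride( studentID, getValue ) {
    var stored = getValue( "STUDENT_peResult_" + studentID );
    // No stored value at all means the first page was never visited,
    // or PE results are missing for everyone -- don't warn then.
    if ( stored === undefined ) {
        return false;
    }
    return stored === "";   // saved, but empty: no practicum mark yet
}
```

In the userscript itself getValue would simply be GM_getValue, called after the first gradebook page has stored the results.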