Assembling the heterogeneous elements for (digital) learning

Month: July 2013

BIM, Moodle, Simplepie, curl and HTTP proxy issue

Good news this week. BIM got into the institution’s testing site for Moodle. One step closer to going live. The bad news is that there were a couple of issues to resolve. This post is a record of the (ultimately successful) attempt to address the big one.

The problem

When you attempt to register the blog for a student, BIM/Moodle generates this error

Unable to access the URL you provided

http://davidtjones.wordpress.com
The error created was
cURL error 28: connect() timed out!

It appears that BIM doesn’t play nicely with the institutional HTTP proxy. I had noticed the same problem with the development install of Moodle on my laptop, but had thought that was simply my bad practice.

Seems the problem may be a little more than that.

The plan

The rough plan is

  1. Find out if this is a known problem.
  2. Does this problem affect other Moodle tools that rely on SimplePie?
  3. Is there an identifiable difference between what BIM and those other tools do?

A known problem

A search for “moodle simplepie proxy” and similar doesn’t reveal a lot. (SimplePie is the third-party library that is used to search for, parse and generally work with feeds.)

The search does turn up this repository on GitHub, which holds the Moodle-modified version of SimplePie. It includes evidence of modifications to SimplePie to

make sensible configuration choices, such as using the Moodle cache directory and curl functions/proxy config for making http requests in line with moodle configuration

There is also this closed issue on the Moodle tracker where there was a problem with a proxy that requires authentication. It’s been fixed and the fix should be in the version of Moodle we’re using here. Besides, I don’t believe the institutional proxy fits this scenario, and the error reported there is very different.

Does it affect other Moodle tools?

There’s a “register an external blog” facility in Moodle. It connects the external blog to the user’s Moodle blog (I believe).

I do find it interesting that this asks the user to enter the feed URL and not the blog URL. SimplePie does a good job of finding feeds from a blog URL (in my experience). I’ve just checked the code and it does use SimplePie.

Using this to register a URL without having configured the HTTP proxy results in a long period of waiting and then the error “This feed is invalid”, which seems to suggest some limitations in the code’s error reporting. I wish I had the time to look at this more.

Configure my box with the proxy details and try again. Oops, that didn’t work. Ahh, “Some settings were not changed due to an error”: an error message on saving the HTTP proxy settings that didn’t exactly leap out at me, and one that doesn’t say what the error actually was.

Checking the database (mdl_config) reveals that proxyhost wasn’t set. Apparently the value shouldn’t include the “http://” prefix, and the error message doesn’t identify that. Fixed.
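
For reference, the same settings can also be hard-coded in Moodle’s config.php rather than set through Site administration. A minimal sketch, with placeholder proxy details standing in for the real institutional values:

```php
// Moodle HTTP proxy settings (stored in mdl_config when set via the
// admin interface). 'proxy.example.edu' and 8080 are placeholders.
// Note: no "http://" prefix on proxyhost.
$CFG->proxyhost = 'proxy.example.edu';
$CFG->proxyport = 8080;
$CFG->proxytype = 'HTTP';
```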

Okay, that works. External blog registered. And posts from the blog showing up in my Moodle “blog”.

Let’s try BIM now. Nope. The timeout problem again. Implying there’s something different going on here.

Is there a difference?

Yes. Eventually tracked it down: one of BIM’s calls to SimplePie was using the standard SimplePie class and not Moodle’s moodle_simplepie class, and hence wasn’t using the proxy setup.
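
The fix amounts to swapping the class used to fetch the feed. A rough sketch of the kind of change involved, assuming a typical SimplePie usage pattern (BIM’s actual code will differ in the details):

```php
// Before: plain SimplePie, which knows nothing of Moodle's proxy settings.
$feed = new SimplePie();
$feed->set_feed_url($url);
$feed->init();

// After: Moodle's wrapper (lib/simplepie/moodle_simplepie.php), which
// configures caching and curl/proxy handling from Moodle's configuration.
require_once($CFG->libdir . '/simplepie/moodle_simplepie.php');
$feed = new moodle_simplepie($url);
```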

Tested it with the student registering process and that works as well.

How are they feeling – Semester 2 – Part 1

The following is a repeat of this post for a different offering of the same course. It’s also a quick how-to primarily intended for the students in the course. Summary and comparison first, then the how-to.

Doing this now mainly in preparation for a session with the students tonight.

Summary

I’ll focus here on answers to “How do you feel about the course”, a question for which the students can select from a provided list of words (interested, relaxed, worried, successful, confused, clever, happy, bored, rushed) and add their own.

A word cloud based on the students’ responses at the end of the first week of the Semester 1 offering of the course looked like this.

[Word cloud: How are you feeling?]

The word cloud for the Semester 2 responses near the end of week 1 (a much smaller sample: Semester 1 had 121 students, Semester 2 has 17 so far) looked like this.

[Word cloud: Feeling – EDC3100 Semester 2, Week 1]

Looks like there’s some improvement. Stressed and overwhelmed aren’t present, and these were optional words folk could add. Confusion and worry – provided choices – are still present, as is a feeling of being rushed. So, still a challenge, but perhaps a bit better?

Of course, these conclusions are based on a much smaller sample and there are some significant sources of bias. I’ll mention just two of those. First, this represents the 17 early starters, those who are keen and got started quickly and potentially had fewer problems. It’s possible that the 90 or so students still to complete the tasks may be feeling very different. It’s probably a biased sample. Second, my conclusion is based very much on my own beliefs about the course. I’ve redesigned the week’s activities to be more manageable, so I’m looking for that to be reflected in the data.

For these reasons, I will be interested in hearing what others – especially the students – perceive from the above (if anything).

How to

I won’t go into a huge amount of detail. A quick search online will reveal a good collection of tutorials on most of the following.

The process for this was

  1. Set up the Google form.

    Google forms provide the interface students use to respond to the “survey”. It helps gather the data.

  2. Extract the responses from the Google spreadsheet.

    A simple copy and paste into a text document.

  3. Feed them into Tagxedo.

    In this case, a simple copy and paste into Tagxedo. Choose a circle shape, horizontal orientation. All good.

  4. Export the word cloud and upload it to Flickr.

IRAC – Four questions for learning analytics interventions

The following is an early description of work arising out of The Indicators Project, an ongoing attempt to think about learning analytics. With IRAC (Information, Representation, Affordances and Change), Colin Beer, Damien Clark and I are trying to develop a set of questions that can guide the use of learning analytics to improve learning and teaching. The following briefly describes:

  • why we’re doing this;
  • some of our assumptions;
  • the origins of IRAC;
  • the four questions themselves; and,
  • a very early and rough attempt to use the four questions to think about existing approaches to learning analytics.

Why?

The spark for this work is a set of observations made in a presentation from last year. In summary, the argument is that learning analytics has become a management fashion/fad in higher education, which generally means most implementations of learning analytics are not likely to be very mindful and, in turn, are very likely to be limited in their impact on learning and teaching. This has much in common with the raft of expenditure on data warehouses some years ago, not to mention examples such as graduate attributes, eportfolios, the LMS, open learning, learning objects etc. It would be nice to avoid this yet again.

There are characteristics of learning analytics that add to the difficulty of developing appropriate innovations that go beyond the faddish adoption of analytics. One of the major contributors is that the use of learning analytics encompasses many different bodies of literature, both within and outside learning and teaching. Many of these bodies of literature have developed important insights that can directly inform the use of learning analytics to improve learning and teaching. What’s worse is that early indications are that – not surprisingly – most institutional learning analytics projects are apparently ignorant of the insights and lessons gained from this prior work.

In formulating IRAC – our four questions for learning analytics interventions – we’re attempting to help institutions consider the insights from this earlier work and thus enhance the quality of their learning analytics interventions. We’re also hoping that these four questions will inform our own attempts to explore the effective use of learning analytics to improve learning and teaching. For me personally, I’m hoping this work can provide the tools and insights necessary to make my own teaching manageable, enjoyable and effective.

Assumptions

Perhaps the largest assumption underpinning the four questions is that the aim of learning analytics interventions is to encourage and enable action by a range of stakeholders. If no action (use) results from a learning analytics project, then there can’t be any improvement to learning and teaching. This is similar to the argument by Clow (2012) that the key to learning analytics is action in the form of appropriate interventions. Also, Elias (2011) describes two steps that are necessary for the advancement of learning analytics:

(1) the development of new processes and tools aimed at improving learning and teaching for individual students and instructors, and (2) the integration of these tools and processes into the practice of teaching and learning (p. 5)

Earlier work has found this integration into practice difficult. For example, Dawson & McWilliam (2008) identify a significant challenge for learning analytics as being able “to readily and accurately interpret the data and translate such findings into practice” (p. 12). Adding further complexity is the observation from Harmelen & Workman (2012) that learning analytics are part of a socio-technical system where success relies as much on “human decision-making and consequent action…as the technical components” (p. 4). The four questions proposed here aim to aid the design of learning analytics interventions that are integrated into the practice of learning and teaching.

Audrey Watters’ Friday night rant offers a slightly similar perspective more succinctly and effectively.

Foundations

In thinking about the importance of action, and of learning analytics tools being designed to aid action, we were led to the notion of Electronic Performance Support Systems (EPSS). EPSS embody a “perspective on designing systems that support learning and/or performing” (Hannafin et al., 2001, p. 658). EPSS are computer-based systems that “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

All well and good. In reading about EPSS we came across the notion of the performance zone. In framing the original definition of an EPSS, Gery (1991) identifies the need for people to enter the performance zone, defined as the metaphorical area where all of the necessary information, skills, dispositions, etc. come together to ensure successful completion of a task (Gery, 1991). For Villachica, Stone & Endicott (2006) the performance zone “emerges with the intersection of representations appropriate to the task, appropriate to the person, and containing critical features of the real world” (p. 550).

This definition of the performance zone is a restatement of Dickelman’s (1995) three design principles for cognitive artifacts drawn from Norman’s (1993) book “Things that make us smart”. In this book, Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. At the same time, however, Norman (1993) argues that the “machine-centered view of the design of machines and, for that matter, the understanding of people” (p. 9) results in artifacts that “more often interferes and confuses than aids and clarifies” (p. 9).

Given our recent experience with institutional e-learning systems this view resonates quite strongly as a decent way of approaching the problem.

While the notions of EPSS, the performance zone and Norman’s (1993) insights into the design of cognitive artifacts form the scaffolding for the four questions, additional insight and support for each question arises from a range of other bodies of literature. The description of the four questions given below includes very brief descriptions of some of this literature. There are significantly more useful insights to be gained, and extending this will form part of our ongoing work.

Our proposition is that effective consideration of these four questions with respect to a particular context, task and intervention will help focus attention on factors that will improve the implementation of a learning analytics intervention. In particular, it will increase the chances that the intervention will be integrated into practice and subsequently have a positive impact on the quality of the learning experience.

IRAC – the four questions

The following summarises the four questions, with a bit more of an expansion on each below.

Information: Is all the relevant information and only the relevant information available and being used appropriately?
Representation: Does the representation of this information aid the task being undertaken?
Affordances: Are there appropriate affordances for action?
Change: How will the information, representation and the affordances be changed?

Information

While there is an “information explosion”, the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13). This leads to Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14). Potential considerations include: whether the information required is technically and ethically available for use; how the information is cleaned, analysed and manipulated during use; and whether the information is sufficient to fulfil the needs of the task (and many, many more).

Representation

A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993). Representation has a profound impact on design work (Hevner et al., 2004), particularly on the way in which tasks and problems are conceived (Boland, 2002). How information is represented can make a dramatic difference in the ease of a task (Norman, 1993). In order to maintain performance, it is necessary for people to be “able to learn, use, and reference access necessary information within a single context and without breaks in the natural flow of performing their jobs.” (Villachica, Stone, & Endicott, 2006, p. 540). Considerations here include how easy it is for people to understand and analyse the implications of the findings from learning analytics (and many, many more).

Affordances

A poorly designed or constructed artifact can greatly hinder its use (Norman, 1993). For an application of information technology to have a positive impact on individual performance, it must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995). Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106). The nature of such affordances is not inherent to the artifact; it is co-determined by the properties of the artifact in relation to the properties of the individual, including the goals of that individual (Young et al., 2000). Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62). The consideration here is whether or not the tool provides support for action that is appropriate to the context, the individuals and the task.

Change

The idea of evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005). Rather than being implemented in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005). Beyond the need for the systems or tools to undergo change, there is a need for the information being captured to change. Buckingham Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches through which the data was generated. Another factor is Bollier and Firestone’s (2010) observation that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6). Finally, there is the observation that universities are complex systems (Beer et al., 2012). Complex systems require reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustaini, 2010). Potential considerations here include: who is able to implement change; which of the three prior questions can be changed; how radical those changes can be; and whether a diversity of change is possible.

Using the four questions

It is not uncommon for Australian universities to rely on a data warehouse system to support learning analytics interventions. This is in part due to the observation that data warehouses enable significant consideration of the information (question 1), which is not surprising given that the origin and purpose of data warehouses was to provide an integrated set of databases supplying information to decision makers (Arnott & Pervan, 2005). Data warehouses provide the foundation for learning analytics. However, the development of data warehouses can be dominated by IT departments with little experience with decision support (Arnott & Pervan, 2005) and a tendency to focus on technical implementation issues at the expense of the user experience (Glassey, 1998).

In terms of consideration of the representation (question 2), the data warehouse generally provides reports and dashboards for ad hoc analysis and standard business measurements (van Dyk, 2008). In a learning analytics context, dashboards from a data warehouse will typically sit outside of the context in which learning and teaching occurs (e.g. the LMS). For a learner or teacher to consult the data warehouse requires the individual to break away from the LMS, open up another application and expend cognitive effort in connecting the dashboard representation with activity from the LMS. Data warehouses also provide a range of query tools that offer a swathe of options and filters for the information they hold. While such power potentially offers good support for change (question 4), that power comes with an increase in difficulty. At least one institution mandates the completion of training sessions to assure competence with the technology and to ensure the information is not misinterpreted. This necessity could be interpreted as evidence of limited consideration of representation (question 2) and affordances (question 3). At least some of these limitations arise from the origins of data warehouse tools in the management of businesses, rather than learning and teaching.

Harmelen and Workman (2012) use Purdue University’s Course Signals and Desire2Learn’s Student Success System (S3) as two examples of the more advanced learning analytics applications. The advances offered by these systems arise from greater consideration being given to the four questions. In particular, both tools provide a range of affordances (question 3) for action on the part of teaching staff. S3 goes so far as to provide a “basic case management tool for managing interventions” (Harmelen & Workman, 2012, p. 12), with future intentions of using this feature to measure intervention effectiveness. Course Signals offers advances in terms of information (question 1) and representation (question 2) by moving beyond simple tabular reporting of statistics toward a traffic lights system based on an algorithm drawing on 44 different indicators from a range of sources to predict student risk status. While this algorithm has a history of development, Essa and Ayad (2012) argue that the reliance on a single algorithm contains “potential sources of bias” (n.p.) as it is based on the assumptions of a particular course model from a particular institution. Essa and Ayad (2012) go on to describe S3’s advances, such as an ensemble modelling strategy that supports model tuning (information and change); the inclusion of social network analysis (information); and, a range of different visualisations, including interactive visualisations allowing comparisons (representation, affordances and change).

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity: Learning and leading for the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings of ascilite Wellington 2012 (pp. 78–87). Wellington, NZ.

Boland, R. J. (2002). Design in the punctuation of management action. In R. Boland (Ed.), Weatherhead School of Management.

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.

Buckingham Shum, S. (2012). Learning analytics. Moscow: UNESCO Institute for Information Technologies in Education. http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Clow, D. (2012). The learning analytics cycle. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12, 134–138. doi:10.1145/2330601.2330636

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council.

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. http://learninganalytics.net/LearningAnalyticsDefinitionsProcessesPotential.pdf.

Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12 (pp. 2–5). Vancouver: ACM Press.

Glassey, K. (1998). Seducing the End User. Communications of the ACM, 41(9), 62–69.

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213. doi:10.2307/249689

Harmelen, M. van, & Workman, D. (2012). Analytics for learning and teaching. http://publications.cetis.ac.uk/2012/516

Hevner, A., March, S., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.

Van Dyk, L. (2008). A data warehouse model for micro-level decision making in higher education. The Electronic Journal of e-Learning, 6(3), 235–244.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance support systems. In J. Pershing (Ed.), Handbook of human performance technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

A quick search for a Google reader alternative

Update: A student from last semester has shared her experiences using Feedly. The big limitation with Feedly is the absence of a search facility, but it appears that this is a limitation of all the competitors as well (for now).

The second offering of The ICTs and Pedagogy course I teach starts next week. Last semester the course made a move to each student maintaining an individual blog on their choice of service. To encourage connections between students I generated a collection of OPML files and showed them how to use Google Reader to track what people are posting.

Of course, Google Reader is now dead and an alternative recommendation is needed. The plan is to mention that there is a range of alternatives and that folk can choose their own, but then show some detail of how to use one particular feed reader. So I need to figure out which one I’ll show in detail. Due to time constraints, this will be a quick, rather than comprehensive, search and test.

Here’s one list of alternatives and another more focused on education.

The planned test is

  1. Import an OPML of my subscriptions from NetNewsWire (my current reader).

    This is quite a large list, so it will be a reasonable test of the import process. Who knows, I might change from NetNewsWire. (For anyone who hasn’t met OPML, a minimal sample is shown after this list.)

  2. Use the reader a bit.
  3. Check out mobile options.
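
An OPML subscription list is just a small XML file along the following lines (the title and feed URL here are placeholders, not real subscriptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>Course blogs</title>
  </head>
  <body>
    <outline text="Example student blog" type="rss"
             xmlUrl="http://example.wordpress.com/feed/"/>
  </body>
</opml>
```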

BazQux

I was going to explore BazQux on a recommendation, but it appears to require payment. $9 a year is not that much, but it’s not something I’m comfortable with requiring from the students.

Liked the ability to login with OpenID.

Imports the OPML with no worries.

Display is ok, but not brilliant. And navigation works.

Feedly

Login goes straight to a Google account. I wonder if this works with the students’ limited Google account that forms the basis of their institutional email?

OPML is imported okay. Presents an Organize page for moving the feeds and groups around. Apparently does drag and drop of feeds.

Interface is nicer. Has some missing bits, but looks alright.

Cost? Apparently, free for now and a paid version later.

There are versions for various platforms.

Given time constraints, that will probably do.

Feedly is the most liked in this list of alternatives, but detested by the person who recommended BazQux.

When is learning analytics not about the students?

Sadly, but not surprisingly, I missed out on an invite to LASI 2013. But I am able to follow the conversation via other means such as the LASI blog aggregator. Via the aggregator I’ve come across this post from Mike Tissenbaum reflecting on the opening sessions.

In his post, Mike writes

Alyssa Wise mentioned that LA needs to be “learner centered”, which I think is vital, even as we begin to gather and process all of this data to make sense of it we need to remember that it’s about the students and all of our practices need to be focused on this and how we can help and enable learners to learn.

I don’t disagree that the ultimate aim of learning analytics is/should be learner centered. It’s learning we’d like to improve. However, I do want to suggest that there are contexts where learning isn’t the only consideration. This picks up a bit on the point Ian Reid made in a comment on a prior post.

If you are talking about developing systems to help improve learning outcomes within a formal education system, then currently you have to consider the teachers since, for better or worse, they play a significant role in student learning. In this context, one way to improve the quality of learning is to help the teacher play their role more effectively. To achieve this successfully, you have to pay attention to the teacher: their context, their background, their abilities. Otherwise the system will fail to be used by the teacher.

I’ve seen lots of brilliantly designed information systems based on sound pedagogical principles fail because they focused more on an idealised version of learning than on the reality of formal education.

Identifying and filling some TPACK holes

The following post was started over the weekend. I’m adding this little preface as a result of the hours I wasted yesterday battling badly designed systems and the subsequent stories I’ve heard from others today. One of those stories revolved around how shrinking available time and poorly designed systems are driving one academic to make a change to her course that she knows is pedagogically inappropriate, but which is necessary due to the constraints of these systems.

And today (after a comment from Mark Brown in his Moodlemoot’AU 2013 keynote last week) I came across this blog post from Larry Cuban titled “Blaming Doctors and Teachers for Underuse of High-tech tools”. It includes the following quote

For many doctors, IT-designed digital record-keeping is a Rube Goldberg designed system.

which sums up nicely my perspective of the systems I’ve just had to deal with.

Cuban’s post finishes with three suggested reasons why he thinks doctors and teachers get blamed for resisting technology. Personally, I think he’s missed the impact of “enterprise” IT projects, including

  • Can’t make the boss look bad.

    Increasingly, IT projects around e-learning have become “enterprise”, i.e. big. For big projects, the best practice manual requires that the project be visibly led by someone in the upper echelons of senior management. When large IT projects fail to deliver the goods, you can’t make this senior leader look bad. So someone else has to be blamed.

  • The upgrade boat.

    When you implement a large IT project, it has to evolve and change. Most large systems – including open source systems like Moodle – do this via a vendor-driven upgrade process. So every year or so the system will be upgraded. An organisation can’t fall behind versions of a system, because eventually old versions are no longer supported. So, significant resources have to be invested in regularly upgrading the system. Those resources contribute to the inertia of change: you can’t change the system to suit local requirements because all the resources are invested in the upgrade boat. Plus, if you did make a change, then you’d miss the boat.

  • The technology dip.

    The upgrade boat creates another problem: the technology dip. Underwood and Dillon (2011) describe the technology dip as a dip in educational outcomes that arises after the introduction of technological change. As teachers and students grapple with the changes in technology, they have less time and energy to expend on learning and teaching. When an upgrade boat comes every 12 months, the technology dip becomes a regular part of life.

The weekend start to this post

Back from Moodlemoot’AU 2013 and time to finalise results and prepare course sites for next semester. Both are due by Monday. The argument from my presentation at the Moot was that the presence of “TPACK holes” (or misalignment) causes problems. The following is a slide from the talk which illustrates the point.

[Slide 14: TPACK holes illustration from the Moot presentation]

I’d be surprised if anyone thought this was an earth-shattering insight. It’s kind of obvious. And yet, if it really were so widely understood, you wouldn’t expect institutional e-learning to be replete with examples of such holes. The following is an attempt to document some of the TPACK holes I’m experiencing in the tasks I have to complete this weekend. It’s also an example of recording the gap outlined in this post.

Those who haven’t submitted

Of the 300+ students in my course there are some who have had extensions but haven’t submitted their final assignment, and who are most likely failing the course. I’d like to contact them and double check that all is ok. I’m not alone in this; I know most people do it. All of my assignments are submitted via an online submission system, but there is no direct support in this system for this task.

The assignment system will give me a spreadsheet of those who haven’t submitted. But it doesn’t provide an email address for those students, nor does it connect with other information about the students: for example, those who have dropped the course or have failed other core requirements. Focusing on the students with extensions works around part of that problem, but I still have to get the email addresses by hand.

Warning markers about late submissions

The markers for the course have done a stellar job, but there are still a few late assignments to arrive. In thanking the markers I want to warn them of the assignments still to come. Even with fewer than 10 assignments outstanding, this is more difficult than it sounds, for the following reasons

  • The online assignment submission system treats “not yet submitted” assignments differently from submitted assignments, and submitted assignments are the only place where you can allocate students to markers. You can’t allocate before submission.
  • The online assignment submission system doesn’t know about all the different types of students, e.g. overseas students studying with a university partner are listed as “Toowoomba, web” by the system. I have to check the student records system (or some other system) to determine the answer.
  • The single sign-on for the student records system doesn’t work with the Chrome browser (at least in my context), so I have to open up Safari to get into the student records system.

Contacting students in a course

I’d like to send a welcome message to students in a course prior to the Moodle site being made available.

The institution’s version of PeopleSoft provides such a notify method (working in Chrome, not Safari) but doesn’t allow the attachment of any files to the notification.

I can copy the email addresses of students from that PeopleSoft system, but PeopleSoft uses commas to separate the email addresses, meaning I can’t copy and paste the list into the Outlook client (which expects semicolons as the separator).
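
One workaround is a trivial find-and-replace. Any editor would do, but as a sketch (the file name is hypothetical):

```php
<?php
// convert_emails.php: read the comma-separated address list copied from
// PeopleSoft on stdin and print it semicolon-separated for Outlook.
echo str_replace(',', ';', trim(fgets(STDIN))), "\n";
```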

Changing dates in a study schedule

Paint me as old school but, personally, I believe there remains value to students in having a study schedule that maps out the semester. A Moodle site home page doesn’t cut it. I’ve got a reasonable one set up for the course from last semester, but a new semester means new dates. So I’m having to manually change the dates, something that could be automated (a rough sketch below).
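
As a very rough sketch of what that automation might look like, assuming the schedule is a plain HTML page whose dates all appear in one known format (the file names, dates and date format here are all assumptions):

```php
<?php
// Shift every date in the schedule by the gap between the two semester
// start dates. Both start dates are placeholders.
$offset = (new DateTime('2013-02-25'))->diff(new DateTime('2013-07-15'));

$html = file_get_contents('schedule_s1.html');
$html = preg_replace_callback(
    '/\d{1,2} (January|February|March|April|May|June|July|August|September|October|November|December) \d{4}/',
    function ($m) use ($offset) {
        // Parse the matched date, add the offset, format it the same way.
        return DateTime::createFromFormat('j F Y', $m[0])->add($offset)->format('j F Y');
    },
    $html
);
file_put_contents('schedule_s2.html', $html);
```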

Processing final results

As someone in charge of a course, one of my responsibilities is to check the overall results for students, ensure that it’s all okay as per formal policy, and then put the results through the formal approval processes. The trouble is that none of the systems provided by the institution support this. I can’t see all student results in a single system in a form that allows me to examine and analyse the results.

All the results will eventually end up in a PeopleSoft gradebook system, in which the results are broken up based on the students’ “mode” of learning, i.e. one category for each of the three campuses and another for online students, and from which I cannot actually get any information in a usable form: it is only available across a range of different web pages. If the PeopleSoft web interface were halfway decent this wouldn’t be such a problem, but dealing with it is incredibly time consuming, especially in a course with 300+ students.

I need to get all the information into a spreadsheet so that I can examine, compare etc. I think I’m going to need

  • Student name, number and email address (just in case contact is needed), campus/online.

    Traditionally, this will come from PeopleSoft. Some of it might be in EASE (the online assignment submission system).

  • Mark for each assignment and their Professional Experience.

    The assignment marks are in EASE. The PE mark is in the Moodle gradebook.

    There is a question as to whether or not the Moodle gradebook will have an indication of whether they have an exemption for PE.

EASE provides the following spreadsheets, and you’re not the only one to wonder why the two weren’t combined into one.

  1. name, number, submission details, grades, marker.
  2. name, number, campus, mode, extension date, status.

Moodle gradebook will provide a spreadsheet with

  • firstname, surname, number…..email address, Professional Experience result

Looks like the process will have to be

  1. Download Moodle gradebook spreadsheet.
  2. Download EASE spreadsheet #1 and #2 (see above) for Assignment 1.
  3. Download EASE spreadsheet #1 and #2 (see above) for Assignment 2.
  4. Download EASE spreadsheet #1 and #2 (see above) for Assignment 3.
  5. Bring these together into a spreadsheet.

    One option would be to use Excel. Another simpler method (for me) might be to use Perl. I know Perl much better than Excel and frankly it will be more automated with Perl than it would be with Excel (I believe).

    A Perl script to extract data from the CSV files, stick it in a database for safekeeping and then generate an Excel spreadsheet with all the information? Perhaps. (A rough sketch of the merge step follows this list.)
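
The plan above leans toward Perl; purely to illustrate the merge logic, here is a minimal sketch in PHP (the language of the Moodle examples earlier on this page). Every file name and column position is an assumption to be checked against the real EASE and gradebook exports:

```php
<?php
// Hypothetical merge of the downloaded spreadsheets (saved as CSV).
// Assumes: gradebook.csv has the student number in column 2 and the PE
// result in column 6; each EASE export (a1.csv etc.) has the student
// number in column 1 and the mark in column 3.

function load_csv($file, $keycol, $valcol) {
    $map = [];
    $fh = fopen($file, 'r');
    fgetcsv($fh); // skip the header row
    while (($row = fgetcsv($fh)) !== false) {
        $map[$row[$keycol]] = $row[$valcol];
    }
    fclose($fh);
    return $map;
}

$pe = load_csv('gradebook.csv', 1, 5); // student number => PE result
$a1 = load_csv('a1.csv', 0, 2);        // student number => A1 mark
$a2 = load_csv('a2.csv', 0, 2);
$a3 = load_csv('a3.csv', 0, 2);

// Combine everything into one CSV for checking in Excel.
$out = fopen('combined.csv', 'w');
fputcsv($out, ['student', 'a1', 'a2', 'a3', 'pe', 'total']);
foreach ($pe as $student => $peMark) {
    $marks = [
        isset($a1[$student]) ? $a1[$student] : 0,
        isset($a2[$student]) ? $a2[$student] : 0,
        isset($a3[$student]) ? $a3[$student] : 0,
        $peMark,
    ];
    fputcsv($out, array_merge([$student], $marks, [array_sum($marks)]));
}
fclose($out);
```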

Final spreadsheet might be

  • Student number, name, email address, campus/mode,
  • marker would be good, but there’ll be different markers for each assignment.
  • a1 mark, a2 mark, a3 mark, PE mark, total, grade

An obvious extension would be to highlight students who are in situations that I need to look more closely at.

A further extension would be to have the Perl script do comparisons of marking between markers, results between campuses, generate statistics etc.

Also, it would probably be better to have the Perl script download the spreadsheets directly, rather than doing it manually, but that’s a process I haven’t tried yet. Actually, over the last week I did try this, but the institution uses a single sign-on method that involves JavaScript, which breaks the traditional Perl approaches. There is a potential method involving Selenium, but that’s apparently a little flaky – a task for later.

Slumming it with PeopleSoft

I got the spreadsheet process working and it helped a lot. But in the end I still had to deal with the PeopleSoft gradebook and the kludged connection between it and the online assignment submission system. Even though the spreadsheet reduced a bit of the work, it didn’t cover all of the significant cracks. In the absence of better systems, these are cracks that have to be covered over by human beings completing tasks for which evolution has poorly equipped them: lots of repetitive, manual copying of information from one computer application to another. Not a process destined to be completed without human error.
