Assembling the heterogeneous elements for (digital) learning

Month: October 2012

Technology in education: The track record

The following bit of pseudo-code is from van Dam (1999) and strongly matches my experience with educational technology, especially within universities.

for each new technology
{
   euphoria and hype

   struggle to produce material for the new medium

   mature judgement

   disappointment and cynicism

   wait for the next technology that will be the answer
}

Though I must admit that I’m finding it harder not to immediately jump to disappointment and cynicism.

And the author – Andries van Dam – has a bit of history with educational technology. Though the “struggle to produce material for the new medium” step does seem to suggest a certain type of perspective on education/teaching.

Update

And don’t forget Birnbaum’s fad life-cycle and the Gartner Hype Cycle.

References

van Dam, A. (1999). Education: the unfinished revolution. ACM Computing Surveys, 31(4es).

Learning analytics: Anything more than just another fad?

I’m currently thinking about a potential contribution to the SoLAR Southern Flare Conference on Learning Analytics in about a month or so. The early shape of that contribution is online and my last few posts have been summarising some explorations through various areas of the literature. Not the learning analytics literature, but the broader literature on which much of the learning analytics work should be based, but often isn’t.

That reading has me increasingly pessimistic about the end result of learning analytics moves within Australian universities. It will be just another fad. Does anyone feel more positive?

Sure, there will be some neat examples of where it is used. But unless some lessons are learned quickly, I don’t see it being widely used beyond a few motivated folk and the odd senior manager who makes a right balls up because of it.

Some illustrative quotes

Decision support systems (DSS) is a field of research within the Information Systems discipline with a history stretching back to the 1970s. It’s a field that includes business intelligence, data warehousing and a range of other explorations of the use of technology to aid organisational decision making. Arnott and Pervan (2005) provide a critical analysis of the field and have some nice comments.

As a result data warehouse development is dominated by central IT departments that have little experience with decision support. A common theme in industry conferences and professional books is the rediscovery of fundamental DSS principles like evolutionary development (Keen, 1997).

Business intelligence (BI) is a poorly defined term and its industry origin means that different software vendors and consulting organizations have defined it to suit their products; some even use ‘BI’ for the entire range of decision support approaches

Hosack et al (2012, p. 321) add this

Additionally, the best DSS cannot overcome poor managerial decision making.

Some more from Houghton and Mackrell (2012)

Specifically we found that existing patterns of sensemaking hindered the data quality of the BI system because of how key people made sense of their work. We argue that because there was divergence in sensemaking patterns in the social systems, the data collected may not represent a true picture of ‘business intelligence’

More recently there is this article on “Big Data Fail” which starts with the claim

Much of the great promise of business intelligence (BI) goes unrealized because decision makers aren’t using the decision support systems in any meaningful way. The vast majority of big data and business analytics projects implemented by normal companies suffer from chronic underuse.

And then it proceeds to explain the traditional problem with information systems: IT folk building systems nobody uses because the systems don’t do what the users need them to do.

Talk about deja vu all over again.

Not to mention Macfadyen and Dawson’s (2012) experience

the reality that the institutional planning process was nonetheless dominated by technical concerns, and made little use of the intelligence revealed by the analytics process.

“Dominated by technical concerns” sounds like a good prediction for the roll-out of learning analytics in at least some Australian universities.

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87.

Hosack, B., Hall, D., Paradice, D., & Courtney, J. F. (2012). A Look Toward the Future: Decision Support Systems Research is Alive and Well. Journal of the Association for Information Systems, 13(Special Issue), 315–340.

Houghton, L., & Mackrell, D. (2012). The impact of individual, collective and structural Sensemaking on the usefulness of business intelligence data. MCIS 2012.

Macfadyen, L., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Educational Technology & Society, 15(3), 149–163.

The quest to make sense of information: A research commentary

The following is a summary of / reflection upon

Caesarius, L., & Lindvall, J. (2011). The quest to make sense of information: A research commentary. Nordic Academy of Management Conference 2011 (pp. 1–7). Stockholm, Sweden.

I found a reference to the paper online and the authors have been kind enough to share a copy of the paper. It was of specific interest to me as I’m exploring the limitations of learning analytics.

The abstract for the paper is

Debates on the so-called issue of ‘Big Data’, especially those in the business press, tend to over-emphasize potential benefits focusing almost exclusively on the promise of turning data and information into actionable knowledge. Accordingly, operating in information-rich environments provides firms opportunities to engage in analytic processes drawing on data and information to gain new intelligence (knowledge). New intelligence is the supposed end product of such processes that include among other things the identification of patterns, the creation of scenarios, the testing of models, prediction making and the prescription of actions. Such descriptions, as straightforward as they may sound, do not parallel reality, which is far more complex and difficult, and above all dependent on aspects that tend to escape the attention of many debaters.

Yet, although a ‘neologism’, the issue of ‘Big Data’ is part of larger debate on firms’ efforts to make sense of information. As such it connects to more diachronic issues in research such as for instance decision making, information system support and knowledge management. But the debate needs to be balanced; potentials need to be investigated in light of their challenges.

In this commentary paper we seek to add to the debate on ‘Big Data’ and on firms’ quest for making sense of data and information. We do so by attempting to explicate the challenges associates with such endeavors. Our main arguments are that although ‘Big Data’ may hold potential to support firms’ information sense-making processes, firms’ methodological and epistemological directives condition this potential. The former concerns the level of scientification, i.e. how much firms are relying on and accommodating for the use of scientific methodologies and knowledge to produce, make sense of and use information in a highly disciplined, systematized, structured and experimental manner. The latter relates to who is given interpretation priority in the analytic process effectively forming that which the firm is to act on, namely knowledge. Based on these findings we propose a set of areas for further research on subject matter.

Summary

The authors have said that the paper is a work in progress. So, there is room for improvement. However, it does point to some of the broader philosophical and historical considerations that don’t seem to have been broadly considered by the learning analytics crowd.

A couple of useful distinctions that I’m hoping to build upon.

The call for design science research is interesting.

Introduction

Since the industrial revolution firms have operated in information poverty. That’s changing, leading to big data.

The centre of the big data debate is the possibility “to turn data and information into actionable knowledge (Dedrick et al, 2003; Zammutto et al., 2007)”.

Which raises a whole range of questions around who is analysing that information, with what perspectives, and what action is then taken.

While firms are rich in data and information, they are knowledge poor.

Paper structure

  • Working definition of big data and its driving factors.
  • Analysis of cited potentials and advantages.
  • Analysis of critical aspects and challenges of turning data to information to knowledge.
  • Research implications – three main avenues.

Constitution of big data

Big Data

refers to the complexity of information in terms of the voluminous sets of data and information, to their extreme velocity, increased granularity and their widely varied formats (structured and unstructured).

Drivers

  1. “process of digitization” arising from the “infusion of information technology into the organisation (Zuboff, 1988)”
  2. The Internet and the subsequent expansion of digitization beyond the boundaries of the firm.
  3. “expansion of proprietary data in organisations due to the increased granularity and frequency by which this data is produced, collected and used”
    There seems to be a fine line between this and driver #1.
  4. Increased use of (social) media channels.
  5. Detect-and-respond technology built into devices connected to networks – e.g. RFID tags.

These drivers seem limited to technological ones. The increasing drive towards quantification, benchmarking, quality assurance, managerialisation etc. seems to me to be an equally important driver.

Potential of big data

Analysis of data will generate value. Firms can become “intelligent enterprises”, focus attention and optimise operations. By sharing information, employees become more empowered.

Big data doesn’t change business directly, but indirectly through analysis. This visualisation allows organisations to be more transparent and integrated.

Apparently it will allow a higher level of experimentation in matters of exploration and exploitation.

Challenges with big data

“Despite advances in IT current technical solutions have yet to replicate much less to replace human analytical capacity”. This links to the aims of AI research and the quote

as noted by Kallinikos (2001:37): “[t]he project of constructing intelligent machines has helped to reveal the immense complexity of human beings”.

Points to the work in Management Information Systems, Executive Information Systems and Decision Support Systems at building systems to aid decision making and action taking. One explanation for the failure of this work is their foundation on “an unrealistic understanding of human cognition”. In particular “the assumption of the rational actor.”

It’s a pity that this line of thought isn’t followed up more.

Two major interconnected challenges:

  • methodological – how organisations set up their analytics processes in terms of structure, and
  • epistemological – how they carry them out in practice.

Methodological = the way analytical processes are set up and how they are conducted as a layer that covers all parts of the firm’s operations. Can organisational practice/structure be transformed in response to analytics?
An increased level of “scientification” requires new skills that are often missing from organisations and that need to be employed on a continuous basis.

Given the diversity of the underlying sets of data and information and of their sources these analytical processes need to take into account the inherent cognitive variations and different lifeworlds (Bruner, 1986; 1990; Lloyd, 2007).

“who is given interpretive priority”.

Correspondence theory/realism/positivism is “ontologically the current dominant perspective within existing IT-research”. More on other perspectives before leading to

there is a need to develop inter-subjective perceptions or shared understanding / cross understanding (Huber and Lewis, 2010).

Epistemologically, it talks about “know that” and “know how”: a prior emphasis on the former, the need to consider the latter, and the implications this has as knowledge becomes more ephemeral (Cook and Brown, 1999).

Conclusions and new avenues for research

Mentions the historical debate on the meaning of information and a need to consider more than the dominant model that “define[s] information in terms of what it is”, moving to another perspective that “defines it in terms of what it does” (Hayles, 1999, p. 56), i.e. it is situated, strongly related to a context and a specific place.

ICT is oriented towards interaction, “by its nature it is relational. Many of these important relations are not linear – more information is not always better to understand a phenomena”.

Since interpretation is crucial, actor(s)/users become more important.

Trade-off between high level abstractions and narrative particularism.

I’m interested in this bit

Most of all we see a need for studies based up on Design Science. It is too modest an ambition in a dynamic world to answer a research question with a traditional positive ambition (”is”); we need as researchers in this important area to also have an idea about ”possible worlds” (”to be”).

Literature to look at

Arnott, D. and G. Pervan, (2005), “A critical analysis of decision support systems research”, Journal of IT, Vol. 20, No. 2, pp. 67–87.

Brown, J. S. and P. Duguid, (2000), The Social Life of Information, Boston, MA: Harvard Business School Press.

Cook, S. D. N and Brown, J. S., (1999), ”Bridging Epistemologies: The Generative Dance between Organizational Knowledge and Organizational Knowing”, Organization Science, Vol. 10, No. 4, pp. 381-400.

Dreyfus, H. L. and Dreyfus, S. E., (1986), Mind Over Machine; The Power of Human Intuition and Expertise in the Era of the Computer, New York, NY: Free Press.

Hodgkinson, G. P. and Healey, M. P., (2008), “Cognition in Organizations”, Annual Review of Psychology, Vol. 59, pp. 387-419.

Kallinikos, J., (2011), Governing Through Technology: Information Artefacts and Social Practice, London, UK: Palgrave Macmillan.

Monod, E., and Boland, R. J., (2007), ”Editorial. Special issue on philosophy and epistemology: A Peter Pan Syndrome?”, Information Systems Journal, Vol. 17, No. 2, pp. 133-141.

Orlikowski, W. J., (2002), ”Knowing in Practice; Enacting a Collective Capability in Distributed Organizing”, Organization Science, Vol. 13, No. 3, pp. 249-273.

Weick, K. E., (2008), “Information Overload Revisited”, in Hodgkinson, G. P. and Starbuck, W. H., [Eds.], The Oxford Handbook of Organizational Decision Making, New York, NY: Oxford University Press.

Zammuto, R. F., Griffith, T. L., Majchrzak, A., Dougherty, D. J., Faraj, S., (2007), ”IT and the changing fabric of organization”, Organization Science, Vol. 18, No. 5, pp. 749-762.

Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan

The following are some reflections/questions generated while reading

Macfadyen, L. P., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Educational Technology & Society, 15 (3), 149–163.

The abstract for the paper is

Learning analytics offers higher education valuable insights that can inform strategic decision-making regarding resource allocation for educational excellence. Research demonstrates that learning management systems (LMSs) can increase student sense of community, support learning communities and enhance student engagement and success, and LMSs have therefore become core enterprise component in many universities. We were invited to undertake a current state analysis of enterprise LMS use in a large research-intensive university, to provide data to inform and guide an LMS review and strategic planning process. Using a new e-learning analytics platform, combined with data visualization and participant observation, we prepared a detailed snapshot of current LMS use patterns and trends and their relationship to student learning outcomes. This paper presents selected data from this “current state analysis” and comments on what it reveals about the comparative effectiveness of this institution’s LMS integration in the service of learning and teaching. More critically, it discusses the reality that the institutional planning process was nonetheless dominated by technical concerns, and made little use of the intelligence revealed by the analytics process. To explain this phenomenon we consider theories of change management and resistance to innovation, and argue that to have meaningful impact, learning analytics proponents must also delve into the socio-technical sphere to ensure that learning analytics data are presented to those involved in strategic institutional planning in ways that have the power to motivate organizational adoption and cultural change.

Summary

Tells the story of how a “learning analytics” analysis of existing LMS usage didn’t influence considerations at a Canadian university (I’m guessing it was UBC) around selecting a new LMS. Argues that such strategic considerations are important and that learning analytics can and should inform them. Draws on change management literature, Rogers’ diffusion theory, other analytics literature and the nature of the university culture/context to explain why this may not have happened and what might be required to change it.

While recognising the assumption of the importance of the strategic approach, I tend to think it is the fundamental assumptions of such an approach – especially in educational technology – and their almost complete mismatch with both the university and broader context that may be at fault here.

The authors were operating in the confines of the strategic approach. However, given that they cite McWilliam (2005) as part of the culture they are aiming for, I wonder why they didn’t apply the same thinking in McWilliam to the practice of institutional educational technology. For example, McWilliam identifies Deadly Habit No. 5 as “Curriculum must be set in advance”, which I see as tightly connected to one of the deadly habits of management, i.e. that the institutional vision must be set in advance.

Introduction

The intro is divided into the following named sections

  • The promise of learning analytics.
    A general summary of the potential of analytics. But ends with a couple of broader points
    • Institutions and senior administrators are “key users and stakeholders” with “enhancement of institutional decision-making processes and resource allocation as core objectives”.
    • And more interesting “the postmodern context of constant and dynamic change in higher education and technological innovation” increases the value of learning analytics as a tool for those folk to figure out actions that are (citing Kavanagh & Ashkanasy, 2006) “achievable within the capacity of the organisation to absorb change and resource constraints”.

    I’ve long argued/thought that if change is so core to the context, then the type of process that involves institutions and senior administrators (and the nature of the people themselves) are inappropriate. Using analytics to improve those existing processes reminds me of the Drucker quote “Management is doing things right; leadership is doing the right things”.

    There’s also the argument about whether or not the analysis of past practice is a useful guide for the future in a rapidly changing context.

  • The importance of strategic investment in learning technologies and e-learning.
    Makes the case for why strategy is important for learning quality e.g.

    In other words, decision-making processes relating to organization of institutional resources – human and material – and planning for more effective use of existing resources are a critical feature of excellent institutions.

    This is then linked to the idea of there being some known principles/practices that are “significant predictors of educational gain” – e.g. Chickering and Gamson’s (1987) 7 principles – and that ICTs and the LMS have been shown to do nice things.

    Then the point is made that the “teaching climate within higher education is becoming increasingly complex”. Student numbers and student diversity are increasing. Hence learning tools – like the LMS – are important and are hence institutional resources/concerns.

  • The catalyst for change
    Responsible institutions are being strategic, thinking about resource allocation, reviewing technologies etc. but “a further catalyst for a new LMS review was the LMS vendor’s announcement that the current LMS product would not be supported after 2013”.

    I wonder whether, in the absence of this “further catalyst”, the strategic thinking process would have led to a need to change vendor. If not, is it really being strategic?

    Uses Kotter’s (1996) view of change and its first step which includes “careful examination of the current context…” to identify analytics as a way to understand the context and inform decision making.

  • Employing e-learning analytics to undertake a current state analysis
    LMS reporting tools are terrible, with better ones still coming. This institution worked with an analytics software company to get a reporting tool to look at use of the current LMS and hence inform the strategic process, and then

    Through participant observation in the review and planning process we were able to investigate the degree to which the e-learning intelligence revealed influenced institutional decision-making. (p. 151)

    Some of the data is shown, but the real kicker is

    More critically, we discuss the reality that the data developed in this e-learning analytics process did not significantly inform subsequent strategic decision-making and visioning processes, and consider some of the factors that may have limited its impact.

Approach and tools

Starts with explicit mention of ethics, including various Canadian requirements. Data is limited to a single academic year (2009-2010) for credit courses. Describes the tools used in the analysis. Also describes the participant observation of the decision making around the LMS.

Selected findings

A selected summary

  • 18,809 course sections – 14,201 undergraduate.
  • 52,917 students enrolled.
  • 388 distance learning sections, 304 fully online, 84 print-based format.
  • 21% of course sections with an LMS site. 14% of lower level sections. 25% of upper-level sections.
  • 80.3% of students enrolled in at least one LMS supported course.
  • 61% of LMS sections were medium-sized sections (15-79 students). 22% for large (80+).
  • 30% of all teaching staff used the LMS (1,118 out of 3061, including all varieties).
  • User time online varied widely by role, faculty, department and course code.
  • In terms of user time online the order of tool use is
    • LMS Content page
    • Discussion
    • Organiser
    • Assessment

    Beyond these four all other tools used minimally. Content page almost 3 times larger than discussion.

  • File type of content investigated and graphed (majority image 41%, PDF 18%, HTML 16%)
    Wonder what the image files include (and don’t include). Lots of buttons and navigation images? Or actual “learning material”?
  • Draws on Dawson et al (2008) categorisation of tools by purpose: engagement with learning community, working with content, assessment, and administrative tasks.
  • This is used to explore and find more support for the correlation between learning outcomes and students’ engagement with tools in fully online (emphasis in original) courses. Similar findings appear in courses with different uses of the LMS. (A toy sketch of this kind of correlation check follows below.)
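As a rough illustration of the kind of analysis being described – not the authors’ actual method – the following sketch assumes a hypothetical extract of per-student LMS activity (minutes spent in each of the Dawson et al. (2008) tool categories) alongside final grades, and computes simple correlations. All column names and numbers are invented.

# A minimal, hypothetical sketch of correlating per-student LMS tool
# engagement with final grades; not Macfadyen & Dawson's actual analysis.
import pandas as pd

# Invented extract of per-student activity for one fully online course
students = pd.DataFrame({
    "community_mins":  [320, 45, 210, 15, 400, 120],   # learning community tools
    "content_mins":    [600, 500, 450, 300, 700, 550], # working with content
    "assessment_mins": [90, 20, 60, 10, 120, 40],
    "admin_mins":      [30, 25, 28, 22, 35, 27],
    "final_grade":     [82, 55, 74, 48, 88, 63],
})

# Pearson correlation of each tool category's use with the final grade
correlations = students.corr()["final_grade"].drop("final_grade")
print(correlations.sort_values(ascending=False))

Even in this toy form it makes the obvious caveat concrete: a strong correlation between, say, content time and grades says nothing about causation, course design, or whether final grades are a good measure of learning in the first place.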

More interestingly

From review of these documents, and from participation in continuing committee discussions, we observed that although completion of the current state analysis was noted, no further references to or interpretations of the findings were made in later meetings or documentation. (p. 157)

Discussion and implications

Benchmarking

This type of analysis feeds benchmarking.

  • 2010 campus computing survey suggests US public universities average around 60% of course sections using the LMS.
    Of course this isn’t based on data, but perceptions of the people surveyed.
  • This institution is at 21%. 70% of teaching staff did not use it. But 80% of students take at least one LMS course.

The staff figure is reminiscent of the chasm. I wonder whether the low level of use of the LMS was considered in the LMS evaluation?

Makes the argument that, for online courses, the activity data “begin to provide lead indicators of the appropriateness of the course load as a result of the implemented learning activities”.

Though this seems to rest on some assumptions, e.g. that the current measure of learning outcomes (final grades) is an effective measure…. The point is also made that educational theorists “emphasize the importance of peer to peer interaction for facilitating the learning process” but that most of what students are doing in the LMS is content delivery.

This practice is linked to the “old wine in new bottles” use of technology and “It is only at this later innovation stage that learning technologies will be fully utilized to support a pedagogical practice of engagement that will significantly enhance the overall student learning experience”. I wonder what the impact on achieving the “later innovation stage” is provided by institutions changing LMS every few years?

Suggests reaching this stage requires the “kind of cultural changes described by McWilliam (2005)”.

Informing strategic planning?

If the committee is the main group charged with integrating IT and learning and teaching, why didn’t it engage with what the data revealed?

The answer in the paper, not surprisingly given the above, is based on a teleological set of assumptions.

The paper reports “that subsequent deliberations and decision-making focused almost exclusively on technical questions relating to ‘ease of migration.'” and more

While there is an obvious imperative to ensure that any new enterprise technology is functional, scalable and reliable, an exclusive focus on technology integration issues, in the absence of development of a pedagogical vision, quickly neutralizes the likelihood that learning analytics data may catalyze organizational change with a focus on the student experience and learning outcomes. A focus on technological issues merely generates “urgency” around technical systems and integration concerns, and fails to address the complexities and challenges of institutional culture and change. (p. 159)

The suggestion is that

What will determine whether it succeeds or fails in this effort will be its ability to develop a clear vision for learning technologies and lead the cultural change that reaching it requires.

The problem I have with this is that the effort becomes focused on establishing what the clear vision should be. i.e. the stakeholders – who tend to be inherently diverse, include potential political rivals, and those focused on purpose proxies (e.g. the above focus on integrating with existing technical systems rather than quality L&T) – waste time trying to get agreement on a vision which they then have to communicate and gain acceptance for from the broader and even more diverse potential user base.

Interestingly, the paper quotes an argument that perhaps learning analytics isn’t enough

Interestingly, this mismatch between opportunity and implementation may be more widespread than enthusiastic analytics literature suggests. In their 2005 review of 380 institutions that had successfully implemented analytics, Goldstein & Katz (2005) note that analytics approaches have overwhelmingly been employed thus far “to identify students who are the strongest prospects for admission…[and]…to identify students who may be at risk academically” – that is, to improve enrollment and retention, rather than for institutional strategic planning. Similarly a recent survey of literature on implementation of educational data mining found that only a small minority of these report on the application of EDM to institutional planning processes (Romero & Ventura, 2010).

Why numbers aren’t enough

We suggest here that this may be the result of lack of attention to institutional culture within higher education, lack of understanding of the degree to which individuals and cultures resist innovation and change, and lack of understanding of approaches to motivating social and cultural change.

There is now a move into diffusion theory as a tool to explain this (can I hear @cj13 starting to go off?). It’s used to frame a few paragraphs about why analytics reports on LMS usage data are likely to have limited impact on those involved.

Broader discussion about the “realities of university culture”. There is some mention of the disconnect between the business change management literature “heavy emphasis on the role of leaders in motivating and managing successful change and innovation” and the reality of university culture/life where “any direct interference in faculty democracy is not welcome”.

Where to from here?

The data isn’t enough. Change literature is cited suggesting that both the heart and the head need to be engaged.

Still claims that using the data to highlight progress/room for growth against targets and vision is useful. But makes the important point

Interpretation remains critical. Data capture, collation and analysis mechanisms are becoming increasingly sophisticated, drawing on a diversity of student and faculty systems. Interpretation and meaning-making, however, are contingent upon a sound understanding of the specific institutional context. As the field of learning analytics continues to evolve we must be cognizant of the necessity for ensuring that any data analysis is overlaid with informed and contextualized interpretations.

But I wonder, if given the inherent irrationality of human decision makers, whether “informed and contextualised” is enough/achievable?

An interesting suggestion

In addition, we propose that greater attention is needed to the accessibility and presentation of analytics processes and findings so that learning analytics discoveries also have the capacity to surprise and compel, and thus motivate behavioural change.

That sounds like a good design-based research project.

More here also on the difficulty for non-experts to understand what is being shown.

I like this closing – “research must also delve into the socio-technical sphere to ensure that learning analytics data are presented … in ways that have the power to motivate organizational adoption and cultural change” – but not so much its application to strategic institutional planning.

The Texas sharpshooter fallacy and other issues for learning analytics

Becoming somewhat cynical about the headlong rush toward learning analytics I’m commencing an exploration of the problems associated with big data, data science and some of the other areas which form the foundation for learning analytics. The following is an ad hoc collection of some initial resources I’ve found and need to engage with.

Feel free to suggest some more.

The Texas sharpshooter fallacy

This particular fallacy gets a guernsey mainly because of the impact of its metaphoric title. From the Wikipedia page

The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal, but only focuses on a small subset of that data. Random chance may give all the elements in that subset some kind of common property (or pair of common properties, when arguing for correlation). If the person fails to account for the likelihood of finding some subset in the large data with some common property strictly by chance alone, that person is likely committing a Texas Sharpshooter fallacy.
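To make the fallacy concrete in a learning analytics setting, here is a small synthetic sketch (no real data or system – everything is random noise) that generates a few hundred random “indicators” for a cohort of students and then reports the indicators most strongly correlated with an equally random “outcome”. Some will look impressive purely by chance.

# Synthetic demonstration of the Texas sharpshooter fallacy: with enough
# random "indicators", a few will correlate with a random "outcome" by chance.
import numpy as np

rng = np.random.default_rng(42)
n_students, n_indicators = 200, 500

indicators = rng.normal(size=(n_students, n_indicators))  # pure noise
outcome = rng.normal(size=n_students)                     # also pure noise

# Correlation of each indicator with the outcome
corrs = np.array([np.corrcoef(indicators[:, j], outcome)[0, 1]
                  for j in range(n_indicators)])

# The "bullseyes" we could paint around after the fact
for j in np.argsort(np.abs(corrs))[::-1][:5]:
    print(f"indicator {j}: r = {corrs[j]:+.2f}")

With 500 noise indicators and 200 students the top few correlations typically come out around |r| ≈ 0.2, which a naive analysis would happily report as “significant”. Unless the multiple comparisons are accounted for (or the pattern is tested against fresh data), that is the sharpshooter painting the target after firing.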

Critical questions for big data

Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679.

Abstract

The era of Big Data has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and other scholars are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people. Significant questions emerge. Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing? Will data analytics help us understand online communities and political movements? Or will it be used to track protesters and suppress speech? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Given the rise of Big Data as a socio-technical phenomenon, we argue that it is necessary to critically interrogate its assumptions and biases. In this article, we offer six provocations to spark conversations about the issues of Big Data: a cultural, technological, and scholarly phenomenon that rests on the interplay of technology, analysis, and mythology that provokes extensive utopian and dystopian rhetoric.

The headings give a good idea of the provocations:

  • Big data changes the definition of knowledge.
  • Claims to objectivity and accuracy are misleading.
  • Bigger data are not always better data.
  • Taken out of context, Big data loses its meaning.
  • Just because it is accessible does not make it ethical.
  • Limited access to big data creates new digital divides.

Effects of big data analytics on organisations’ value creation

Mouthaan, N. (2012). Effects of big data analytics on organizations’ value creation. University of Amsterdam.

A Master’s thesis that, amongst other things, is

arguing that big data analytics might create value in two ways: by improving transaction efficiency and by supporting innovation, leading to new or improved products and services

and

this study also shows that big data analytics is indeed a hype created by both potential users and suppliers and that many organizations are still experimenting with its implications as it is a new and relatively unexplored topic, both in scientific and organizational fields.

The promise and peril of big data

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.

Some good discussion of issues reported by a rapporteur. Issues included:

  • How to make sense of big data?
    • Data correlation or scientific methods – Chris Anderson’s “Data deluge makes the scientific method obsolete” and responses. e.g. “MY TiVO thinks I’m gay”, gaming, the advantage of theory/deduction etc.
    • How should theories be crafted in an age of big data?
    • Visualisation as a sense-making tool.
    • Bias-free interpretation of big data.

      Cleaning data requires decisions about what to ignore. Problem increased when data comes from different sources. Quote “One man’s noise is another man’s data”

    • Is more actually less?
      Does it yield new insights or create confusion and false confidence. “Big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge”.
    • Correlations, causality and strategic decision making.
  • Business and social implications of big data
    • Social perils posed by big data
  • How should big data abuses be addressed?
Research ethics in emerging forms of online learning

Esposito, A. (2012). Research ethics in emerging forms of online learning: issues arising from a hypothetical study on a MOOC. Electronic Journal of e-Learning, 10(3), 315–325.

Will hopefully give some initial insights into the thorny issue of ethics.

Data science and prediction

Dhar, V. (2012). Data Science and Prediction. Available at SSRN. New York City.

Appears to be slightly more “boosterish” than some of the other papers.

Abstract

The world’s data is growing more than 40% annually. Coupled with exponentially growing computing horsepower, this provides us with unprecedented basis for ‘learning’ useful things from the data through statistical induction without material human intervention and acting on them. Philosophers have long debated the merits and demerits of induction as a scientific method, the latter being that conclusions are not guaranteed to be certain and that multiple and numerous models can be conjured to explain the observed data. I propose that ‘big data’ brings a new and important perspective to these problems in that it greatly ameliorates historical concerns about induction, especially if our primary objective is prediction as opposed to causal model identification. Equally significantly, it propels us into an era of automated decision making, where computers will make the bulk of decisions because it is infeasible or more costly for humans to do so. In this paper, I describe how scale, integration and most importantly, prediction will be distinguishing hallmarks in this coming era of ‘Data Science’. In this brief monograph, I define this newly emerging field from business and research perspectives.

Codes and codings in crisis: Signification, performativity and excess

Mackenzie, A., & Vurdubakis, T. (2011). Codes and Codings in Crisis: Signification, Performativity and Excess. Theory, Culture & Society, 28(6), 3–23.

Three likely paths for learning analytics and academics

The following is an early attempt to write and share some thoughts on what, why and with what impacts Australian universities are going to engage with learning analytics over the next couple of years. Currently it’s fairly generic and the same structure could be used with any fad or change process.

You could read the next section, but it’s basically an argument as to why it’s important to consider how learning analytics will impact academics. The three likely paths section describes the paths.

Context and rationale

By all indications learning analytics is one of the next big things in university learning and teaching. Ferguson (2012) identifies learning analytics as one of the fastest-growing areas of research within technology-enhanced learning with interest being driven by a combination of technological, pedagogical, political and economic drivers. The 2012 Horizon Report (Johnson & Cummins, 2012) argues that while learning analytics is still in its early stages of development it is likely to see widespread adoption within the higher education sector in the next 2-3 years. The recent Horizon technology outlook for Australian Tertiary Education (Johnson, Adams, & Cummins, 2012) places learning analytics – for the first time anywhere in the world – into the “one year or less” time-frame for adoption. Given what I hear around the traps, it would appear that every single Australian university is doing something (or thinking about it) around learning analytics.

My interest is in how these plans are going to impact upon academics and their pedagogical practice. It’s a fairly narrow view, but an interesting, self-serving and possibly important one. Johnson and Cummins’ (2012) suggestion is that the larger promise of learning analytics comes when it is used “to more precisely understand students’ learning needs and to tailor instruction appropriately far more accurately and far sooner than is possible today” (p. 23). I don’t think automated-tutoring information systems are going to be up to that task anytime soon. At least not across a broad cross-section of what is taught at universities. So academics/teachers/instructors will be involved in some way.

But we don’t know much about this and it appears to be difficult. Dawson et al. (2011) make the observation of “a dearth of studies that have investigated the relationship between learning analytics and data requirements that would better assist teachers in the design and evaluation of learning and teaching practice” (p. 4). Not only that, it has been found that being able to interpret the patterns provided by learning analytics and apply them to practice is difficult, time-consuming and requires additional support (Dawson et al., 2011; Dawson & McWilliam, 2008). So I wonder how and with what impacts the almost inevitable strategies adopted by Australian Universities will help with this.

Not surprisingly, I am not optimistic. So I’m trying to develop some sort of framework to help think about the different likely paths they might adopt, the perspectives which underpin these paths, and what the likely positives, problems and outcomes might be of those paths.

The three likely paths

For the moment, I’ve identified three likely paths which I’ve labelled as

  1. Do it to the academics.
  2. Do it for the academics.
  3. Do it with the academics.

There are probably other paths (e.g. do nothing, ignore the academics) that might be adopted, but I feel these are probably the most likely.

These are listed in order of which I think are most likely to happen. There may be examples of path #3 spread throughout institutions, but I fear they will be few and far between.

Currently, it’s my theory that organisations probably need to travel all three paths. The trouble is that the 3rd path will probably be ignored and this will reduce the impact and adoption of learning analytics.

The eventual plan is to compare and contrast these different paths by the different assumptions or perspectives on which they are based. The following gives a quick explanation of each of the paths and an initial start on this analysis.

Those of you who know me can probably see some correspondence between these three paths and the 3 levels of improving teaching. There is a definite preference in the following for the 3rd path, but this is not to suggest that it should (or can) be the only path explored, or that the other paths have no value. All three have their part to play, but I think it would be wrong if the 3rd path was ignored.

Perhaps that’s the point here, to highlight the need for the 3rd path to complement the limitations of the other two. Not to mention helping surface some of the limitations of the other two so they can be appropriately addressed.

Some questions for you

  • Is there any value in this analysis?
  • What perspectives/insights/theories do I need to look at to better inform this?
  • What might be some useful analysis lenses for the three paths?
  • Are there other paths?
  • What am I missing?

Do it to the academics

It seems a fair bit of interest in learning analytics is being driven by non-teaching folk. Student administration and IT folk are amongst the foremost, with senior management in there somewhere as well. Long and Siemens (2011) define this level as academic analytics rather than learning analytics. But I believe it belongs here because of the likelihood that, if senior managers use academic analytics to make decisions, some of the decisions they make will have an impact on academics (i.e. do it to them).

I can see this path leading to outcomes like

  • Implementation of a data warehouse, various dashboards and reports.
  • Some of these may be used to make data-driven decisions.
  • The implementation of various strategies such as “at-risk” processes that are done independently of academics.
  • At its worst, the creation of various policies or processes that require courses to meet certain targets or adopt certain practices (e.g. the worst type of “common course site” policy), i.e. performativity.

In terms of analysing/characterising this type of approach, you might suggest

  • Will tend to actually be academic analytics, rather than learning analytics (as defined by Long and Siemens, 2011) but may get down to learning analytics at the departmental level.
  • It’s based on a “If I tell them to do it, they will…” assumption.
    i.e. what is written in the policy is what the academics will actually do.
  • A tendency to result in task corruption and apparent compliance.
  • It assumes academics will change their teaching practice based on what you told them to do.
  • Is based on the assumptions of teleological processes.
    i.e. the system is stable and predictable, the designers can manipulate the system’s behaviour, and the designers can determine the goals/criteria for success.
  • It is located a long way from the actual context of learning and teaching and assumes that big data sets and data mining algorithms will enable the identification of useful information that can guide decision making.
  • It does not recognise the diversity inherent in teaching/teachers and learning/learners.
    Assumes learning is like sleeping
  • It is based on the assumption of senior managers (or people in general) as rational decision makers, if only they had the right data.
  • What is actually done will rely heavily on which vendor gets chosen to implement.
  • Will be largely limited to the data that is already in the system(s).

Do it for the academics

There are possibly two sub-paths within this path

  1. The researcher path.
    Interested researchers develop theoretically-informed, research-based approaches to how learning analytics can be used by academics to improve what they do. They are developing methods for the use of academics.
  2. The support division path.
    This is where the Information Technology or Learning and Teaching support division of the university note the current buzz-word (learning analytics) and implement a tool, some staff development etc to enable academics to harness the buzz-word.

In terms of analysing/characterising this approach, I might identify

  • It’s based on the “If I build it, they will come” assumption.
  • It assumes you can improve/change teaching by providing new tools and awareness.
  • Which generally hasn’t worked for a range of reasons including perhaps the chasm
    i.e. the small number of early adopter academics engage, the vast majority don’t.
  • It assumes some level of commonality in teaching/teachers and learning/learners.
    At least at some level, as it assumes implementing a particular tool or approach may be applicable across the organisation. Assumes learning is perhaps more like eating?
  • It assumes that the researchers or the support division have sufficient insight to develop something appropriate.
  • It assumes we know enough about learning analytics and helping academics use learning analytics to inform pedagogical practice to enshrine practice around a particular set of tools.
  • Is based on the assumptions of teleological processes.
    i.e. the system is stable and predictable, the designers can manipulate the system’s behaviour, and the designers can determine the goals/criteria for success.
  • It will be constrained by the institution’s existing systems and the support division’s people and their connections.
  • The support division path can be heavily influenced by the perspective of the academic (or others) as a client/customer, which assumes that the client/customer knows what they want; because they generally don’t, it often sinks to a process of “managing the customer” rather than helping.

Do it with the academics

In this path the application of learning analytics is treated as something that needs to be learned about. Folk work with the academics to explore how learning analytics can be best used to inform individual pedagogical practice. Perhaps drawing on insights from the other paths, but also modifying the other paths based on what is learned.

In terms of analysing/characterising this approach, I might identify

  • It assumes that if you want to change/improve teaching, then the academics need to learn and be helped to learn.
    (That probably sounds more condescending than I would like).
  • Based on a “If they learn it, they will do it” premise.
    Which doesn’t have to be true.
  • It assumes learning/learners and teaching/teachers are incredibly diverse.
  • It assumes we don’t know enough about what might be found with learning analytics and how we might learn how to use it.
  • It assumes that the system(s) in place will change in response to this learning, which in turn means more learning … and the cycle continues.

References

Dawson, S., Bakharia, A., Lockyer, L., & Heathcote, E. (2011). “Seeing” networks: Visualising and evaluating student learning networks. Final Report 2011. Canberra.

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council.

Ferguson, R. (2012). The State of Learning Analytics in 2012: A Review and Future Challenges. Milton Keynes, UK.

Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis. New Media Consortium. Austin, Texas.

Johnson, L., & Cummins, M. (2012). The NMC Horizon Report: 2012 Higher Education Edition (p. 42). Austin, Texas.

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5).
