Assembling the heterogeneous elements for (digital) learning


Random meandering notes on “digital” and the fourth industrial revolution

In the absence of an established workflow for curating thoughts and resources I am using this blog post to save links to some resources. It’s also being used as an initial attempt to write down some thoughts on these resources and beyond. All very rough.

Fourth industrial revolution

This from the World Economic Forum (authored by Klaus Schwab, ahh, who is the author of two books on shaping the fourth industrial revolution) aims to explain “The Fourth Industrial Revolution: what it means, how to respond”. It offers the following description of the “generations” of revolution:

The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.

My immediate reaction is that the 3rd revolution – with its focus on electronics and information technology – missed a trick with digital technology. It didn’t understand and leverage the nature of digital technologies sufficiently. In part, this was due to the limited nature of the available digital technology, but also perhaps due to the lack of a connection between the folk who really knew this and the folk trying to do stuff with digital technology.

The WEF post argues that “velocity, scope and systems impact” are why this fourth generation is distinct from the third. They could be right, but again I wonder if ignorance of the nature of digital technology might be a factor?

The WEF post argues that the pace of change is rapid and that everything is being disrupted. Which brings to mind arguments from Audrey Watters (and I assume others) about how, actually, it’s not all that rapid.

It identifies the possibility/likelihood of inequality and proposes that the largest benefits of this new revolution (as with others?) accrue to the “providers of intellectual and physical capital – the innovators, shareholders and investors”.

It points to disquiet caused by social media and notes that more than 30% of the population accesses social media. However, current social media is largely flawed and ill-designed; it can be done better.

Question: does an understanding of the nature of digital technology help (or is it even required) for that notion of “better”? It can’t be the explanation for all of it, but some? Perhaps the idea is not that you need to truly know only the nature of digital technology, or only the details of the better learning, business, etc. you want to create. You need to know both (with a healthy critical perspective) and be able to combine them fruitfully.

Overall, much of this appears to be standard Harvard MBA/business school fare.

The platform economy – technology-enabled platforms – gets a mention, and it also features in the nascent “nature of digital technology” (NoDT) stuff I worked on a couple of years ago. Platforms are something the critical perspective has examined, so I wonder if this belongs in the NoDT stuff?

Links to learning etc.

I came to this idea via this post from a principal turned consultant/researcher working on leading in schools. It’s a post that references this piece on building the perfect 21st Century worker, as apparently captured in the following infographic.

Which includes the obligatory digital skills, listed in the article as (emphasis added):

  • Basic digital literacy – ability to use computers and Internet for common tasks, like emailing
  • Workplace technology – using technologies required by the job
  • Digital learning – using software or online tools to learn new skills or information
  • Confidence and facility learning and using new technologies
  • Determining trustworthiness of online information

Talk about setting the bar low and providing a horrendous example of digital literacy, but then it does tend to capture the standard nature of most attempts at digital literacy I’ve seen, including:

  • A focus on using technology as is, rather than being able to renovate and manipulate it (see the sketch below).
  • Revealing an ignorance of basic understanding, e.g. “software or online tools”: aren’t the “online tools” also software?
  • Continuing the medium divide, i.e. online information or online tools are somehow different from all the other information and tools I use?

(Not to mention that the article uses an image to display the bulleted list above, not text)
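
To make the first bullet above a bit more concrete, here’s a minimal, hypothetical sketch of the difference between using digital technology as is and being able to manipulate it. Rather than manually emailing each student (the sort of “common task” the infographic’s basic digital literacy stops at), a few lines of Python can draft a personalised message for everyone on a class list. The file name and column names are assumptions for illustration only.

```python
# Hypothetical sketch only: the file name and column names are made up.
import csv

TEMPLATE = "Hi {name}, just a reminder that '{assignment}' is due on {due_date}."

def draft_messages(csv_path):
    """Read a class list (CSV) and return one personalised message per row."""
    with open(csv_path, newline="") as f:
        return [TEMPLATE.format(**row) for row in csv.DictReader(f)]

if __name__ == "__main__":
    # Assumes a class_list.csv with columns: name, assignment, due_date
    for message in draft_messages("class_list.csv"):
        print(message)
```

Nothing sophisticated, but it’s the kind of renovation/manipulation that “ability to use computers and Internet for common tasks” never gets to.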

Software engineering for computational science: past, present, future

Following is a summary of Johanson and Hasselbring (2018) and an exploration of what, if anything, it might suggest for learning design and learning analytics. Johanson and Hasselbring (2018) explore why scientists who have been developing software to do science (computational science) haven’t been using principles and practices from software engineering to develop this software. The idea is that such an understanding will help frame advice for how computational science can be improved through the application of appropriate software engineering practice (an assumption).

This is interesting because of potential similarities between learning analytics (and perhaps even learning design in our now digitally rich learning environments) and computational science. Consequently, lessons about whether and how computational science has/hasn’t been using software engineering principles might provide useful insights for the implementation of learning analytics and the support of learning design. I’m especially interested due to my observation that both practice and research around learning analytics implementation aren’t necessarily exploring all of the possibilities.

In particular, Johanson and Hasselbring (2018) argue that it is necessary to examine the nature of computational science and subsequently select and adapt software engineering techniques that are better suited to the needs of computational scientists. For me, this generates questions such as:

  1. What is the nature of learning analytics?
  2. What is the nature of learning design?
  3. What happens to the combination of both?

    Increasingly it is seen as necessary that learning analytics be tightly aligned with learning design. Is the nature/outcome/practice of this combination different? Does it require different types of support?

  4. For all of the above is there a difference between the nature espoused in the research literature and the nature experienced by the majority of practitioners?
  5. What types and combination of software engineering/development principles and practices are best suited to the nature of learning analytics and learning design?

Summary of the paper

  • Question/problem

    The development of software with which to do science is increasing, but this practice isn’t using software engineering practices. Why? What are the underlying causes? How can it be changed?

  • Method

    Survey of relevant literature examining software development in computational science. About 50 publications were examined; the majority were case studies, with some surveys.

  • Findings

    Identifies 13 key characteristics (divided into 3 groups) of computational science that should be considered (see below) when thinking about which software engineering knowledge might apply and be adapted.

    Examines some examples of how software engineering principles might be/are being adapted.

Implications for learning analytics

Johanson and Hasselbring (2018) argue that the chasm between computational scientists and software engineering researchers arose from the rush on the part of computer scientists, and then software engineers, to avoid the “stigma of all things applied” – the search for general principles that applied in all places. This leads to the following problem:

Because of this ideal of generality, the question of how specifically computational scientists should develop their software in a well-engineered way, would probably have perplexed a software engineer and the answer might have been: “Well, just like any other application software.”

In learning analytics there are people offering more LA-specific advice. For example, Wise & Vytasek (2017) and, just this morning via Twitter, this pre-print of a forthcoming BJET article. Both are focused on providing advice that links learning analytics and learning design.

But I wonder if this is the only way to look at learning analytics? What about learning analytics for reflection and exploration? Does the learning design perspective cover it?

But perhaps a more interesting question might be whether or not it is assumed that the learning analytics/learning design principles identified by these authors should then be implemented using traditional software engineering practices?

Johanson and Hasselbring (2018) identify the following 13 characteristics of computational science, grouped into three categories.

Nature of scientific challenges

  1. Requirements are not known up front

    • software is used to make novel discoveries and further understanding; it is “deeply embedded” in an exploratory process

    • the goal is not “to produce software but to obtain scientific results”; Segal (2005) reports that scientists say they are “programming experimentally”

    • design and requirements are rarely seen as distinct steps

  2. Verification and validation is difficult and strictly scientific

    • verification: demonstrate that the implementation of the models is correct

    • validation: demonstrate that the software captures the real world

    • validation is hard because models are being used “precisely because the subject at hand is ‘too complex, too large, too small, too dangerous, or too expensive to explore in the real world’” (Segal and Morris, 2008)

    • problems arise from four different dimensions/combinations (Carver et al, 2007):

      • the model of reality is insufficient

      • the algorithm used to discretise the mathematical problem can be inadequate

      • the implementation of the algorithm is wrong

      • the combination of models can propagate errors

    • testing methods could help, but are rarely used (see the sketch after this list)

  3. Overly formal software processes restrict research

    • Easterbrook and Johns (2009): big up-front design is a “poor fit” for computational science, which is deeply embedded in the scientific model

    • there is a need for the flexibility to quickly experiment with different solution approaches (Carver et al, 2007)

    • a very iterative process is used, iterating over both the software and the underlying scientific theory

    • explicit connections with agile software development have been established in the literature, but even those lightweight processes are largely rejected

    • a representation of this process is shown in the figure below

Limitations of computers

  4. Development is driven and limited by hardware

    • scientific software is not limited by the scientific theory, but by the available computing resources

    • computational power is an issue

  5. Use of “old” programming languages and technologies

    • some communities are moving toward Python, but typically non-technical disciplines (biology/psychology) and only for small-scale projects

  6. Intermingling of domain logic and implementation details

  7. Conflicting software quality requirements (performance, portability and maintainability)

    • interviews of scientific developers rank the requirements as:

      • functional correctness

      • performance

      • portability

      • maintainability

Cultural environment

  8. Few scientists are trained in software engineering

    • Segal (2007) describes them as “professional end user developers” … they develop software to advance their own professional goals

    • “In contrast to most conventional end user developers, however, computational scientists rarely experience any difficulties learning general-purpose languages”

    • but keeping up with software engineering is just too much for people who are already busy writing grants etc.

    • they didn’t want to delegate development, as it often required a PhD in the discipline to be able to understand and implement the software

  9. Different terminology

    • e.g. computational scientists speak of “code” not “software”

  10. Scientific software in itself has no value but still it is long-lived

    • code is valued because of the domain knowledge captured within it

  11. Creating a shared understanding of a “code” is difficult

    • preference for informal, collegial ways of knowledge transfer, not documentation

    • “scientists find it harder to read and understand documentation artifacts than to contact the author and discuss”

  12. Little code re-use

  13. Disregard of most modern software engineering methods
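
As a concrete illustration of the verification point above (my sketch, not something from the paper), the sort of automated test the authors suggest is rarely used might check a numerical implementation against a case with a known analytical answer. Validation, demonstrating that the model actually captures the real world, is the harder problem and isn’t something a unit test like this can settle.

```python
# A minimal verification sketch (not from Johanson & Hasselbring): check a
# numerical implementation against a case with a known analytical answer.
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def test_trapezoid_against_analytic_solution():
    # The integral of sin(x) over [0, pi] is exactly 2.
    approx = trapezoid(math.sin, 0.0, math.pi, 1000)
    assert abs(approx - 2.0) < 1e-5

if __name__ == "__main__":
    test_trapezoid_against_analytic_solution()
    print("verification test passed")
```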

A model of scientific software development

Johanson and Hasselbring (2018) include the following figure as a representation of how scientific software is developed. They note its connections with agile software development, but also describe how computational scientists find even the lightweight discipline of agile software development to be a poor fit.

Figure: Model of scientific software development (Johanson and Hasselbring, 2018)

Anecdotally, I’d suggest that the above representation would offer a good description of much of the “learning design” undertaken in universities, though with some replacements (e.g. “develop piece of software” replaced with “develop learning resource/experience/event”).

If this is the case, then how well does the software engineering approach to the development and implementation of learning analytics (whether it follows the old SDLC or agile practices) fit with this nature of learning design?

References

Johanson, A., & Hasselbring, W. (2018). Software Engineering for Computational Science: Past, Present, Future. Computing in Science & Engineering. https://doi.org/10.1109/MCSE.2018.108162940

Wise, A., & Vytasek, J. (2017). Learning Analytics Implementation Design. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 151–160). Alberta, Canada: Society for Learning Analytics Research (SoLAR). Retrieved from http://solaresearch.org/hla-17/hla17-chapter1
