Assembling the heterogeneous elements for (digital) learning

Month: October 2017

Introducing the Moodle Activity Viewer (MAV) & digital reno

What follows are the resources associated with a workshop being run at the University of Southern Queensland. As the title suggests, the aim is to get USQ folk started using the Moodle Activity Viewer to explore usage of Moodle activities and resources, and to briefly introduce the idea of digital renovation.

Apart from the presentation slides and references below, other related resources include:

  • Instructions for installing the MAV for USQ staff.

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

  • Additional details on other USQ digital reno tools

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

Slides

References

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: reframing design for learning. Research in Learning Technology, 21, 1–13. https://doi.org/10.3402/rlt.v21i0.19909

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics

David Jones, Hazel Jones, Colin Beer, Celeste Lawson, Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics, To appear in the proceedings of the 2017 Australian Learning Analytics Summer Institute (ALASI 2017)

Abstract

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Introduction

Learning analytics has been receiving attention since the late noughties. The promise of data driven decision making and the nature of the higher education environment – decreasing funding, increasing focus on quality, increasing use of technology enhanced learning (TEL) – is seen as making the institutional adoption of learning analytics an imperative for institutions of higher education (Macfadyen, Dawson, Pardo, & Gasevic, 2014, p. 17). By 2017, there appears to have been sufficient time and resources invested to realise the affordances learning analytics offers to education at the whole-of-institution scale (Colvin, Dawson, Wade, & Gašević, 2017), especially given predictions in 2012 that it was one year away from mainstream adoption within the Australian Higher Education sector (Johnson, Adams, & Cummins, 2012). However, there are only a small number of institutions that have demonstrated impact on learning and teaching outcomes through large-scale learning analytics programs (Ferguson, Clow, et al., 2014) and there are concerns that there remains limited evidence of the effectiveness of learning analytics at scale, or sufficient understanding to guide successful implementation (Colvin et al., 2017; Ferguson, Macfadyen, et al., 2014).

To address this concern there is a growing conceptual literature offering various models and frameworks to guide learning analytics adoption. Colvin et al. (2017) categorise and analyse this literature and argue that “while the models afford insight, they do not capture the breadth of factors that shape LA implementations” (p. 284). As a result, these models are unable to provide those responsible for institutional implementation of learning analytics “the nuanced, situated, fine-grained insight they require to guide them through learning analytics implementation” (Colvin et al., 2017, p. 284). Such a restriction could be addressed through empirical research that examines the “burgeoning, albeit nascent implementations found across higher education institutions” (Colvin et al., 2017, p. 285). Research by Colvin et al. (2015) offers one valuable contribution; however, there are limitations. One such limitation is the focus on the perspectives of one set of participants involved in learning analytics projects: senior leaders charged with responsibility for implementation. While an important source of insight, this focus perhaps echoes the lack of human-centeredness that pervades learning analytics implementation (Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017) and tends “to privilege the administrator rather than the student – or even the instructor” (Kruse & Pongsajapan, 2012, p. 4). This limitation raises questions such as:

What is the experience of students and teachers using institutional learning analytics? How might an understanding of their experience inform the institutional implementation of learning analytics?

It is these questions that this paper seeks to explore, with a particular focus on the experience of teaching staff. To do this, it describes a single teacher’s experience developing and using a do-it-yourself (DIY) approach to learning analytics. The paper starts by describing this approach and then draws from it three implications and three questions for institutional implementation of learning analytics.

Know thy student

During 2015 and 2016 one of the authors developed and used a DIY learning analytics tool (Know thy student) within a third-year Bachelor of Education course. Offered twice a year, the course had an annual enrolment of 400+ students. Two-thirds of these students studied online only, and less than 15% were ever likely to meet the course examiner in person. The design of the course focused explicitly on making significant use of a Moodle course site and sought to encourage: significant active student online engagement; formative assessment; student reflection via individual blogs; and, use of social bookmarking. Know thy student was developed to address limitations in existing institutional systems and enable more meaningful responses to student queries. The tool was inspired by and built on top of the Moodle Activity Viewer (MAV) developed at CQUniversity (Jones & Clark, 2014). While the tool interacted with, and extracted information from, a number of institutional systems, it could only be used via the implementer’s laptop to interact with the specific course site.

When in use, Know thy student modified every page of the course site viewed by the teacher. It added a [details] link wherever a link to a user profile appeared, as illustrated in Figure 1.

Figure 1 – Modified course page

Clicking on one of the [details] links would open a new pop-up window (Figure 2) to provide access to information about the student. The pop-up window provided information in three separate tabs: personal details (Figure 2); activity completion (Figure 3); and, blog posts (Figure 4). Know thy student provided the examiner with ubiquitous and embedded access to course specific information about each student enrolled in the course.

Figure 2 – Personal details

Across four offerings of the course in 2015 and 2016 the teacher used the tool 3,100 separate times to access information on 761 different students, representing 89.5% of the enrolled students. For one student, the tool was used 32 separate times. The median number of uses per student was three.

Initially, most of this use was generated when answering student questions on course discussion forums. However, the embedded and ubiquitous availability of the [details] link enabled other unplanned uses. For example, the course home page provided a list of all course participants who had recently logged into the course site. As designed, Know thy student would add a [details] link to this list. This modification to the learning environment encouraged the development of a practice where the teacher would use that link to proactively learn more about students. In turn, this led to an increase in engagement with students via their blog posts and other means. Since the tool was simple and easily within grasp, it provided a platform that encouraged more meaningful and unexpected connections with hundreds of students.

Implications and questions for learning analytics implementation

Analysis and discussion of the case have led the authors to suggest three implications about and three questions for the institutional implementation of learning analytics. Given the exploratory nature of this research, these are tentative suggestions, and each implication and question in turn generates additional questions for further investigation.

Implication #1: Institutional learning analytics currently falls short of an important goal

Baker (2016) identifies a common goal shared by learning analytics systems, that “of getting key information to a human being who can use it” (p. 607). This case shows that at least one institution’s approach to learning analytics is falling short of this goal, and there are indications that this problem is not limited to a single institution. Almost 10 years ago, Dawson & McWilliam (2008) commented on how poor the LMS data aggregation and visualisation tools of the day were in helping academics understand student learning behaviour. In 2013, focus groups of academics from the University of Melbourne identified a common need to be better able to correlate data from different institutional systems (Corrin, Kennedy, & Mulder, 2013). A recent unpublished experiment at another institution by one of the co-authors of this paper identified that gathering relevant information for ten post-graduate students took over an hour and required the use of five separate information systems owned by three separate institutional departments. This reinforces the observation from Liu (2017) that academics “rarely have the data that they actually want in a place and form where it can actually be used”.

How widespread is this apparent failure? What are the factors contributing to this apparent failure? What can be done to address it?

Figure 3 – Activity completion

Implication #2: Embedded, ubiquitous, contextual learning analytics enable emergent practice

Experience from this case suggests that providing useful contextual data, embedded appropriately and ubiquitously throughout the learning environment, can enable unplanned and effective interventions. In this case, being able to access student and course specific information throughout the learning environment enabled the teacher to adopt the unplanned practice of proactively connecting with students. Arguably, this may fit with characterisations of teachers as bricoleurs focused on making do with and creatively repurposing the tools that are at hand (Hatton, 1989). Providing contextually appropriate tools, however, is difficult given the sheer diversity involved in education where “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029).

Does the provision of embedded, ubiquitous and contextual learning analytics increase and encourage greater adoption and bricolage by teachers with learning analytics? What impact would that have on the learning experience? Given the inherent diversity in education, how can institutional learning analytics provide contextually appropriate learning analytics?

Figure 4 – Sentiment analysis of blog posts

Implication #3: Teacher DIY learning analytics is possible

This case shows that technically literate academics are able to leverage available technologies to implement and use teacher DIY learning analytics. The notion of end-user development is not new, with “[m]ost programs today … written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support” (Ko et al., 2011, p. 21). Such work can be seen as undesirable due to concerns about inefficiency, error, support, scalability, privacy and security. However, it can also address limitations and flaws in provided systems (Koopman & Hoffman, 2003).

How is DIY learning analytics viewed in relation to the institutional implementation of learning analytics? Is it something to be prevented, or enabled and encouraged? Given technology trends, can it be prevented?

Question #1: Does institutional learning analytics have an incomplete focus?

The common response to seeing the Know thy student tool is to ask if and how it can be reused in other courses. Such a response aims to understand if and how this particular learning analytics tool can “make the leap from the focused and particular to the broad and general” (Lonn et al., 2013, p. 235). This echoes what is seen as the core goal for most learning analytics projects: “to move from small-scale research towards broader institutional implementation” (Ferguson, Macfadyen, et al., 2014, p. 120). However, if “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029), then how can a broad and general focus effectively respond to diverse contextual requirements? How can the institutional implementation of learning analytics address concerns that it is focused at an “institutional scale rather than a human scale” (Kruse & Pongsajapan, 2012)? Should and can its focus be expanded to include both the human and institutional scale?

Question #2: Does the institutional implementation of learning analytics have an indefinite postponement problem?

In seeking to move learning analytics beyond a research project to institutional scale, Lonn et al. (2013) partnered with a university’s Information Technology (IT) service. A first step in their project involved the IT service performing a feasibility assessment of the project and placing “it in their timeline of priorities” (p. 236); subsequently the project “was delayed due to existing projects … that were a higher priority for the institution” (Lonn et al., 2013, p. 238). Given the typical prioritisation scheme used by a university IT service, a tool like Know thy student, which focuses on a need from a single course, is unlikely to ever be of sufficient priority to be actioned at the institutional level. It will be indefinitely postponed.

Would learning analytics that are specific to the learning designs within a single course ever be implemented by institutional IT? Would such a project be indefinitely postponed? What impact does this have on the institutional implementation of learning analytics? Should and can this problem be addressed?

Question #3: If and how do we enable teacher DIY learning analytics?

The above has suggested that teacher (and perhaps student) DIY learning analytics may make a useful contribution to institutional learning analytics implementation. However, there are numerous significant questions around if and how it can be achieved, including: whether or not it can be integrated sustainably into institutional implementation; and whether or not teaching staff have sufficient data and technical literacy to contribute effectively.

In terms of institutional implementation, Colvin et al. (2017) provide recommendations necessary for sustainable learning analytics adoption that could offer useful guidance. In addition, there are projects like that described by Liu et al. (2017) that are actively using such recommendations to support a level of teacher DIY learning analytics. The challenge is that enabling and encouraging teacher DIY learning analytics appears to represent a mindset that is incommensurable with the assumptions underpinning the majority of contemporary institutional practices (Jones & Clark, 2014). There is also research suggesting that the convergent and generative characteristics of pervasive digital technology require the development of radically different approaches to corporate IT infrastructures and organisational strategic frameworks (Yoo, Boland, Lyytinen, & Majchrzak, 2012).

The low digital fluency of teaching staff has been identified as a significant challenge impeding the adoption of digital technology within higher education (Johnson, Adams Becker, Estrada, & Freeman, 2014). If low digital fluency is challenging the effective use of digital technologies by teaching staff, then it does raise questions about the likelihood of teacher DIY learning analytics. However, research in end-user development suggests that such DIY practices are already happening and that such practices have positive impacts on the quantity and quality of adoption of digital technologies (Ko et al., 2011; Koopman & Hoffman, 2003). Finally, Scanlon et al. (2013) observe that the complexity of technology-enhanced learning – such as learning analytics – means that accepting “’user-driven’ contributions from both teachers and students” (p. 34) may be necessary “to allow for effective intervention” and in order to understand the complexity of practices that is the “context for any particular TEL innovation” (p. 34).

Conclusion

This paper has briefly described a single case of teacher DIY learning analytics, which raises a number of implications and questions for the institutional implementation of learning analytics. It is suggested that empirical research moving beyond those in charge of the institutional implementation of learning analytics to those living with such systems can deepen the understanding of current experience with such systems and subsequently contribute improvements. From this case it appears that current approaches are failing to meet a potentially important goal of “getting key information to a human being who can use it” (Baker, 2016, p. 607). The paper has asked whether or not this may be due to learning analytics over-emphasising the broad at the expense of the specific or contextual. It may also be due to the nature of how institutional IT projects are prioritised leading to indefinite postponement of contextually specific projects. The case illustrates that technological trends are making teacher DIY learning analytics possible, if only in very limited situations, and has provided an indication that ubiquitous, embedded and contextual learning analytics can enable and encourage positive and unplanned usage. This suggests that enabling and encouraging teacher DIY learning analytics, in the form of more generative institutional learning analytics implementations, may offer an interesting and fruitful direction.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance, 41–41.

Ferguson, R., Macfadyen, L. P., Clow, D., Tynan, B., Alexander, S., & Dawson, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. https://doi.org/10.18608/jla.2014.13.7

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas.

Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis (No. 9780984660155). Austin, Texas.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Computing Surveys, 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70–75.

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology‐enhanced learning. London.

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Exploring options for teacher DIY learning analytics


A few of us recently submitted a paper to ALASI’2017 that examined a “case study” of a teacher (me) engaging in a bit of DIY learning analytics. The case was used to draw a few tentative conclusions and questions around the institutional implementation of learning analytics. The main conclusion is that teacher DIY learning analytics is largely ignored at the institutional level and that there appears to be a need for, and value in, supporting it. The question is how (and then, if supported, what happens)?

This post is the start of an exploration of some technologies that combined may offer some of the affordances necessary to supporting teacher DIY learning analytics. The collection of technologies and the approach owes a significant amount of inspiration to Tony Hirst, especially in this post in which he writes

What I care about are some of the features that Docker has, and how I can use those features to make my own life easier, … supporting personal, DIY, BYOA (“bring your own app”) IT that works at an individual level in the form of end-user applications, or personal digital workbenches

The plan/hope here is that Docker combined with some other technologies can provide a platform to enable a useful combination of do-it-with (DIW) and do-it-yourself (DIY) paths for the institutional implementation of learning analytics. What follows is mostly documentation of an ad hoc exploration of the technologies.

In the end, I’ve been able to get a Jupyter notebook working as a JSON API and have started exploring Docker containers. This lays the groundwork for the next step, which will be to explore how and if some of this can be combined to integrate some of the work Hazel is doing with some of the Indicators work from earlier in the year.

Learning more – Jupyter notebook JSON API

Tony provides a description of using Jupyter Notebooks to provide a JSON API. Potentially this provides a way for DIY teachers to create their own MAV-like server.

Tony’s exploration is informed by a post from IBM that introduces the Jupyter kernel gateway (github repo)

The README.md from the github repo mentions serving HTTP requests from “annotated notebook cells”, suggesting that the method of annotation will be important. The IBM example code shows that each API call is handled by a particular cell starting with an appropriately formatted comment, i.e.

single-line comments containing a HTTP verb … followed by a parameterised URL path

Have a simple example working.
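
To make the annotation idea concrete, below is a minimal sketch of the sort of annotated cell the kernel gateway’s notebook-http mode expects. The /mav/activity path, the course parameter and the hard-coded numbers are all made up for illustration; REQUEST is the JSON string the gateway injects for each incoming call, and what the cell prints becomes the response body.

# GET /mav/activity
# Hypothetical MAV-like endpoint: return some usage counts for a course.
import json

try:
    req = json.loads(REQUEST)   # injected by the kernel gateway for each request
except NameError:
    req = {"args": {}}          # fallback so the cell still runs interactively

course = req.get("args", {}).get("course", ["demo-course"])[0]

# A real version would query Moodle logs; fake some numbers for the sketch.
print(json.dumps({"course": course, "clicks": {"forum": 120, "quiz": 45}}))

With the gateway started in notebook-http mode and pointed at this notebook, a GET to /mav/activity?course=demo-course should return the JSON printed by that cell.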

Deploying – user experience

The IBM post then goes on to use Docker to deploy this API. But before I do that, let’s get some experience at the user end with Tony’s example.

  1. Install VirtualBox
    Question: Is this something a standard user can do?
  2. Install vagrant
  3. command line to install a vagrant plugin

    Question: Too much? But can probably be worked around.

  4. Download the repo as a zip file.

    Had to figure out to go back to the repo “home” to get the download option (long time between drinks doing this).

  5. Run the vagrant file

    Ok, it’s downloading the file from the vagrant server (from the ouseful area on Vagrant).

    It’s a 1.66Gb file. That size could potentially be an issue, suggesting the need for a local copy. Especially given the slow download.

    An hour or two later and it is up and running. There’s a GUI linux box running on my Mac.

Don’t know a great deal about the application that is the focus, but it appears to work. It’s a 3D application, so the screen refresh isn’t all that fast. But as a personal server for DIY teacher analytics, it should work fine, at least in terms of speed.

Running it a second time includes a check to see if it’s up to date and then up it pops.

The box appears to have Perl, Python and Jupyter installed.

Deploying – developing a docker/container/image

This raises the question of the best option for creating and sharing a docker/container/insert appropriate term – I’ll go with images – that has Jupyter notebooks and the kernel_gateway tool running. At this stage, this purpose seems best served by a headless virtual machine with browser-based communication the method for interacting with Jupyter notebooks.

Tony appears to do exactly this (using OpenRefine) using Kitematic in this post. Later in the post the options appear to include

  • Sharing images publicly via the Dockerhub registry
  • Use a private Dockerhub registry (you get one with the free plan)
  • On a local computer
  • Run your own image registry
  • And, I assume use an alternative.

Tony sees using the command line as a drawback of running your own. Perhaps not the biggest problem in my case. But what is the best approach?

Dockerhub and its ilk do appear to provide extra help (e.g. official repositories you can build upon).

One set of alternatives appears largely focused on supporting central IT, not the end user, echoing a concern expressed by Tony.

The intro from another alternative suggests that Docker is becoming more generic. Time to look and read further afield.

Intro to containers

From Medium

  • Containers abstract the OS etc to make it simple to deploy
  • Containers are usually measured in tens of megabytes
  • Big distinction made between containers and virtual machines, perhaps boils down to “containers virtualise the OS; virtual machines the hardware”

    Though interesting, the one tried above required the downloading of a virtual machine first. Update: That appears to be because I’m running Mac OS X. If I were on a Linux box, I probably wouldn’t have needed that.

  • The following seem to resonate most with the needs of teacher DIY learning analytics
    • Using containers can decrease the time needed for development, testing, and deployment of applications and services.
    • Testing and bug tracking also become less complicated since there is no difference between running your application locally, on a test server, or in production.
    • Container-based virtualization is a great option for microservices, DevOps, and continuous deployment.
  • Docker, based on Linux and open source, is the big player.
  • Spends some attention on container orchestration – appears to be focused on enterprise IT.

The following offers a creative intro to Kubernetes

Starts with the case for containers (Docker), but then moves on to orchestration and the need for Kubernetes. Puts containers into a pod, perhaps with more than one if tightly coupled. Goes on to explain the other features provided by Kubernetes.

And an intro to Docker

Rolling my own

Possible technology options

Do the following and I have a web server running in Docker that I can access from my Mac OS browser.

AA17-00936:docker david$ docker run -d -p 80:80 --name webserver nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
afeb2bfd31c0: Pull complete 
7ff5d10493db: Pull complete 
d2562f1ae1d0: Pull complete 
Digest: sha256:af32e714a9cc3157157374e68c818b05ebe9e0737aac06b55a09da374209a8f9
Status: Downloaded newer image for nginx:latest
f1f6925acc31f80faf726358f8de5712458ff3649d2c0626bf3bb37f11d1b070
AA17-00936:docker david$

Dig into tutorials and have a play

Docker shares a git repo for tutorials and labs, which are quite good and useful.

Getting set up with some advice above.

Running your first container includes some simple commands, e.g. to show details of installed images, which shows that they can be quite small.

Question: To have folk install Docker, or do the VM route as above?

AA17-00936:docker david$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              2d696327ab2e        11 days ago         122MB
nginx               latest              da5939581ac8        2 weeks ago         108MB
alpine              latest              76da55c8019d        2 weeks ago         3.97MB
hello-world         latest              05a3bd381fc2        2 weeks ago         1.84kB

Web apps with docker, which also starts looking at the process of rolling your own.

This is where discussion of different types of images commences

  • Base images (e.g. an OS) and child images, which add functionality to a base image
  • Official images – sanctioned by Docker
  • User images

The process can be summarised as:

  • Create the app (the example uses a Python web framework – Flask); a minimal sketch follows below
  • Add in a Dockerfile – a text file of commands for the Docker daemon to use when creating an image
  • Build the image

    Does require an account on the Docker cloud

    And there it goes getting all the pre-reqs etc. Quite quick.

And successful running.
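
For later reference, the app created in the first step is roughly the following shape – a minimal Flask sketch rather than the tutorial’s actual code:

# app.py – minimal Flask app of the kind the tutorial containerises.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A TDIY version might render MAV-style usage data here instead.
    return "Hello from inside a container"

if __name__ == "__main__":
    # Listen on all interfaces so a published container port can reach it.
    app.run(host="0.0.0.0", port=5000)

The accompanying Dockerfile then only needs a Python base image, a pip install of Flask, and a command that runs app.py.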

Docker Swarm covers running multiple copies, including on the cloud. Given that the use case I’m interested in is people running their own… not a priority.

It does provide a look at Docker Compose files and a more complex application – multiple containers and two networks. Given my focus on using Jupyter Notebooks and perhaps the kernel gateway, this may be simplified a bit.

Seems we’re at the stage of actually trying to do something real.

Create a Docker image – TDIY

Jupyter Notebook, kernel gateway and a simple collection of notebooks – perhaps with greasemonkey script

Misc. related stuff

A bit on microservices (the microservice architectural style) pointing out the focus on

principles of loose coupling and high cohesion of services

and in turn a number of characteristics

  • Applications are made up of small independent services

    Is TDIY LA about allowing teachers to create applications by combining these services? (A sketch of what that might look like appears below.)

  • Services are independently modifiable and (re)deployable

    But by whom?

  • Decentralised data management: each service can have its own database

    What about each user?

Goes on to list a range of advantages, but the disadvantages include

  • inefficiency – remote calls, network latency, potential duplication etc.

    But going local might help address some of this.

  • Developing a use case could need the cooperation of multiple teams

    This is the biggest barrier to implementation within an institution. But it raises the spectre of shadow systems, kludges etc.

  • complications in debugging, communication
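
As a thought experiment on the question above about teachers combining services, a teacher-level “application” might be little more than a short script that stitches a couple of such services together. The URLs and JSON shapes here are entirely hypothetical:

# Hypothetical composition of two small, independently run services into a
# TDIY "application": one exposing activity counts, one exposing sentiment.
import requests

ACTIVITY_API = "http://localhost:8888/mav/activity"     # e.g. a kernel gateway notebook
SENTIMENT_API = "http://localhost:8889/blog/sentiment"  # another small service

def student_summary(course, student_id):
    clicks = requests.get(ACTIVITY_API,
                          params={"course": course, "student": student_id}).json()
    mood = requests.get(SENTIMENT_API, params={"student": student_id}).json()
    return {"student": student_id, "activity": clicks, "sentiment": mood}

print(student_summary("demo-course", "s1234567"))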

Microservices and containers covers some of the alternatives.

Seems Docker is the place — it’s bought Kitematic and apparently not shown it much love – a risk for basing the DIY approach on it.

Another part of the story is that you can build your own images and either share them publicly via the Dockerhub registry, keep them locally on your own computer, post them to a private Dockerhub repository (you get a single private repository as part of the Dockerhub free plan, or can pay for more…), or run your own image registry.

Dockerhub is probably the option I want to use here because of the focus on being open, of being cross institutional etc.
