What is there to know about clickers?

Part of the experiment in presentations I’m planning for later this month involves the use of alternate types of clickers, or audience response systems. The aim of this part of the experiment is two-fold:

  1. Identify a technology that breaks the limitations of the current clickers provided by publishers.
    These limitations include the need for students to purchase the devices and reliance on a technology that requires you to be in the room to participate.
  2. Identify sound strategies for using them.

This post is about the journey through the literature around clickers and what I’ve found.

Sources

I’m planning to use, or am already using, the following resources:

  • EDUCAUSE resources on clickers/audience response systems.
    I’ve come across these in recent years and started here because of familiarity.
  • Google Scholar.
    My next strategy was to search Google Scholar (http://scholar.google.com.au/scholar?q=clickers&hl=en&btnG=Search) for papers on clickers and related topics.
  • Local experience and expertise.
    There’s been at least one staff member at my host institution who has used clickers. I’m hoping to chat with them about their experiences.

Misc. immediate thoughts

Prevalence in the sciences

Clickers seem to be most prevalent within the sciences. The top results in Google Scholar for “clickers” included papers from the following journals: Developmental Cell, Life Sciences Education, Journal of College Science Teaching, Astronomy Education Review, and Robotics and Autonomous Systems.

I seem to remember a talk by Phil Long linking clickers to the work in the sciences of establishing rigorous pre/post tests for important concepts. I wonder how that will affect use in other areas, both in terms of the absence of those pre/post tests and the apparent observation that most clicker usage is in the sciences. Does TPACK play a role?

There’s a long history

Clickers were first introduced in the mid-1960s (Kay and LeSage, 2009).

Lit review

Kay and LeSage (2009) provide a recent lit review of “clickers”.

Clickers are typically used in large undergraduate classrooms in maths and science. Students like them, but clickers alone don’t improve learning; they need to be paired with appropriate strategies.

Student concerns include:

  • extra effort to discuss answers
  • wanting response to be anonymous
  • discomfort when responding incorrectly
  • distracted by use of ARS (audience response systems)
  • general resistance to new methods of learning

Generic strategy stuff:

  • Explain why the ARS is being used.
  • Have practice questions.
  • Question design.
    Question setting takes time, and every question should have a pedagogical purpose. There is various advice on what types of question to use: 2 to 5 multiple-choice questions per 50 minutes, with each question taking 5-10 minutes to display, discuss and resolve. This raises the fear of reduced content coverage.
  • ARS used for attendance, participation and engagement.
  • Assessment strategies: formative, contingent teaching and summative.

Another one

Judson (2002) suggests four important findings from a lit review:

  1. Students will favor the use of electronic response systems no matter the nature of the underlying pedagogy.
  2. Academic achievement does not correlate to behaviorist use of electronic response systems, as highlighted by investigations of the 1960s and 1970s.
  3. Despite “high-tech” improvements, the use of electronic response systems within a behaviorist pedagogy has not produced gains in achievement.
  4. Interactive engagement has been shown to correlate to student conceptual gains in physics. Interactive engagement can be well facilitated in large lecture halls through the use of electronic response systems.

Another tack

Beatty and Gerace (2009) take another tack:

In other words, don’t ask what the learning gain from CRS use is; ask what pedagogical approaches a CRS can aid or enable or magnify, and what the learning impacts of those various approaches are.

They identify three separate efforts to develop a coherent pedagogy for clickers:

  1. Mazur’s Peer Instruction.
    Regularly insert multiple-choice conceptual questions about the material; if students answer incorrectly, get them to discuss the question and answer again. There is some empirical support for improvement.
  2. Assessing-to-Learn (A2L) or Question-Driven Instruction, which is somewhat similar.
    It has a specific iterative pattern of question asking and answering that forms the basis for the learning activity, with only mini-lectures given on the side.
  3. Another approach based on related questions and specific patterns.

They propose Technology-Enhanced Formative Assessment (TEFA), which evolved from and supersedes A2L.

Conclusions

I need to give some more thought to just how this literature might inform my use of clickers in the presentation. Constraints on the presentation may limit this.

References

Beatty, I. and W. Gerace (2009). “Technology-enhanced formative assessment: A research-based pedagogy for teaching science with classroom response technology.” Journal of Science Education and Technology 18(2): 146-162.

Judson, E. (2002). “Learning from past and present: Electronic response systems in college lecture halls.” Journal of Computers in Mathematics and Science Teaching 21(2): 167-181.

Kay, R. and A. LeSage (2009). “A strategic assessment of audience response systems used in higher education.” Australasian Journal of Educational Technology 25(2): 235-249.
