Any one for a grant application/research project?

The Excellence in Research for Australia (ERA) Initiative, combined with various local institutional and personal factors, has increased the interest of some colleagues and me in biting the bullet and entering the next stage of academic development, i.e. the preparation, submission and, hopefully, successful receipt of applications for external funding. This post is the start of some reflections and thinking about how and what to do.

I am starting from the assumption that it is now an institutional requirement that we undertake this task and be successful at it. There is a range of questions about the validity of aspects of this task and of the thinking displayed below. In part, this is purely pragmatic and, hopefully, the rough edges of that pragmatism will be knocked off by various other strategies.

This post is in part sparked by the announcement of a grant writing workshop being held at our institution next Tuesday. The announcement included some questions that we’re expected to have considered prior to the workshop. I’m using those questions as the structure for the rest of this post.

What is the research question/idea?

The basic idea/project is based around the Indicators Project. We’ve developed a tag line for the project which summarises the basic aim:

Enabling comparisons of LMS usage across institutions, platforms and time

i.e. we want to look at the usage – via system logs – of learning management systems (LMS) within universities. In particular, we want to look at, understand and compare the usage of LMS between different institutions, different LMS and do so longitudinally.
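As a very rough illustration of what "comparing usage via system logs" might involve, the following is a minimal sketch. All institution, LMS and field names are invented for illustration; real LMS logs (Blackboard, Moodle, etc.) have quite different schemas and would first need mapping into some common form.

```python
from collections import defaultdict

# Hypothetical, already-aggregated activity records: one row per student
# per term, with a count of their LMS "hits". Purely illustrative data.
logs = [
    {"institution": "Uni A", "lms": "Blackboard", "term": "2009-1", "student": "s1", "hits": 120},
    {"institution": "Uni A", "lms": "Blackboard", "term": "2009-1", "student": "s2", "hits": 80},
    {"institution": "Uni B", "lms": "Moodle", "term": "2009-1", "student": "s3", "hits": 200},
    {"institution": "Uni B", "lms": "Moodle", "term": "2009-1", "student": "s4", "hits": 40},
]

def average_hits(records):
    """Average hits per student, grouped by (institution, lms, term).

    Grouping on all three keys is what would enable the comparisons
    across institutions, platforms and time described above."""
    totals = defaultdict(lambda: [0, 0])  # key -> [total hits, number of students]
    for r in records:
        key = (r["institution"], r["lms"], r["term"])
        totals[key][0] += r["hits"]
        totals[key][1] += 1
    return {k: total / n for k, (total, n) in totals.items()}

print(average_hits(logs))
# e.g. ('Uni A', 'Blackboard', '2009-1') -> 100.0
```

The real work of the project is in making such numbers comparable at all (different log schemas, different course structures, different term dates), not in the arithmetic.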

The question is how to translate that into something that is important and attractive to the folk evaluating research proposals, and that we can actually do.

The answer to that, to some extent, is going to lie within the aims of the particular research grants and a range of other factors. For this project there are probably two very different types of grants:

  1. ALTC (teaching and learning) grants.
    There are also some internal L&T grants, but this will focus on the ALTC grants.
  2. Research grants.
    In our context, the ARC national competitive grants program is the nirvana: the type of grant you need to get to say you’ve arrived as a researcher. There are also equivalent internal grants at most universities within Australia. These are the stepping stone to the externally competitive grants.

This project could connect with both sides. The ALTC grants could be seen as helping with the “development” side of the project, in terms of developing technology/knowledge to support institutions in looking at their LMS logs and those of others. The research grants are more connected with generating theoretical knowledge. For this project, identifying the ALTC grant may be a bit easier.

ALTC grants

This project is probably best suited to an ALTC competitive grant. However, on the negative side, the ALTC did fund an analytics-related project this year, which built on a 2007 project. Any related project would have to work hard to demonstrate a difference from those projects while still having some sense of connection or accumulation. The details of these grants are available from the ALTC site, so we can find out what they are aiming to do and see what we can do differently. It will be interesting to see if the idea could get up.

Can this idea be turned into a priority project? “Academic standards, assessment practices and reporting” and “curriculum renewal” are two of the priorities.

Priority grant applications are due in April; competitive grant applications in June.

ARC grants

As per the ARC site, these are scheme-based (basically Discovery and Linkage) and spread across six inter-disciplinary groups:

  • Biological Sciences and Biotechnology
  • Engineering and Environmental Science
  • Humanities and Creative Arts
  • Mathematics, Information and Communication Sciences
  • Physics, Chemistry and Earth Sciences
  • Social, Behavioural and Economic Sciences

I’m somewhat doubtful we can aim for these; it might be a step too far. The Indicators project, at first glance, probably doesn’t fit neatly within those disciplines in a way that would be seen as important. I think we’re going to have to identify the discipline and then get to know what is important to that discipline. This page has lists of previously successful applications, grouped in a number of ways including by field.

Discovery objectives are:

  • support excellent fundamental research by individuals and teams
  • enhance the scale and focus of research in the National Research Priorities
  • assist researchers to undertake their research in conditions most conducive to achieving best results
  • expand Australia’s knowledge base and research capability
  • foster the international competitiveness of Australian research
  • encourage research training in high-quality research environments
  • enhance international collaboration in research

Linkage grants might be the way to go:

The Linkage Projects scheme supports collaborative research and development projects between higher education organisations and other organisations, including within industry, to enable the application of advanced knowledge to problems. Typically, research projects funded under the scheme involve risk.

The idea might be to link with some of the LMS vendors.

Discovery applications look like closing in March 2010; Linkage in May 2010.

This seems to be the page for seeing examples and getting some ideas.

Why is it important? What is significant and innovative in the study?

The first question is somewhat easy, and the case has probably already been made by the existing funded ALTC grants identified above. The points are:

  • LMS are almost, if not entirely, ubiquitous within higher education. Everyone has one.
  • Few if any institutions are using the data about the use of these systems to drive decisions at any level.
  • Drawing on this information could improve the quality of learning and teaching at a number of different levels.
  • More broadly, L&T is important in the knowledge economy etc.

Work will need to be done to connect this with priorities set by the funding agencies and/or the government.

The second question is a little more difficult, given that there is existing work in the field (the ALTC grants) and the question of how this gets linked to disciplines within ARC grants. More work is needed here.

Do I need a research team on this project and who should be a team member? Why?

Some reasons for team members:

  • Additional areas of expertise – e.g. statistics, educational research methods.
  • Prestige and track record. i.e. this is important, I believe, for ARC grants. At the moment, I don’t think the existing indicators project members have the appropriate level.
  • Other organisations. e.g. folk at other institutions to enable use of that institution as a site or perhaps someone who works for an LMS vendor or similar for the linkage grant angle.

Will need to look at the allowed size of teams.

What will be the key outcomes/contributions to the area/discipline?

This is related to the question of how it’s different from other work. Most broadly, the contribution is increased information to aid decision making. Perhaps the project needs to include an aim to identify, develop or research how that decision making can be improved, i.e. it’s not enough just to provide the information; help is needed on designing interventions and on how the data is used to improve practice.

Perhaps guidelines/theory/knowledge that guides e-learning/learning and teaching becomes an outcome?

Misc other things

There was a Discovery project funded from 2009, titled “Competing on Business Analytics”, awarded to an IS academic from the University of Melbourne. It might have some useful insights. More info here and here. The latter provides some potentially useful ideas.

Reflections on asw2a

The following is a collection of reflections on the Web 2.0 Authoring Tools in Higher Education Learning and Teaching: New Directions for Assessment and Academic Integrity National Roundtable that I attended yesterday.

In summary, while I question some of the assumptions underpinning the roundtable, the event was worthwhile for some of the people I met and for some measure of reassurance. The reassurance was two-fold:

  1. Institutional.
    Even with various problems and issues, CQUniversity isn’t really that far behind in getting its head around Web 2.0 (damning with faint praise, as I didn’t see any other institutions who had made even half-serious headway).
  2. Individual.
    The work that I and other “lone rangers” have been involved with at CQUniversity has been just as good as, and in many cases significantly ahead of, what other folk are doing.

It was particularly interesting to hear from this crowd of mostly educational folk that they are facing similar constraints, limitations and problematic relationships with IT.

Limitations of the roundtable

The organisers had gathered 30-odd people from across the higher education sector from various backgrounds (mostly experience with using Web 2.0 and academic integrity) and were trying to collect that experience and distill out principles to guide some experiments they’ll be performing next year at the University of Melbourne, Monash and RMIT. This was to be done in one day, with little or no pre-work apart from reading a discussion paper.

Given the constraints the organisers had to work within and what they were trying to achieve (getting some level of agreement out of a disparate group of academics), they did a pretty good job. However, I was left with some questions/observations:

  • Given there was only time for two group breakout sessions, my interaction with participants was limited. More time to become aware of the experiences and lessons learned from all the participants might have been helpful; as it was, hearing about the experiences of others was a major takeaway.
  • Given the focus was Web 2.0 in assessment, the use of Web 2.0 to scaffold the workshop seemed somewhat limited. There was a wiki, but it was set up fairly late and not used by many of the participants (getting participation being one of the problems with greater use of Web 2.0). There wasn’t a suggested Twitter hash tag, and there weren’t really any specific activities designed before, during or after the roundtable to use the wiki. The wiki was mostly used by session scribes.

What is Web 2.0? Should we use something else?

The use of the Web 2.0 term seemed to be somewhat problematic. At the least, I and a few other attendees don’t particularly like the term, for various reasons. But the main problem I had was that, for the purposes of the roundtable, it was defined to encompass such a broad array of technologies that it was impossible to generate general principles (even if you get over the questions I have about principles below). Are there specific assessment principles that apply equally to Second Life, public blogs, Twitter and a wiki within Blackboard?

Can we identify web 2.0 assessment specific principles?

Throughout the roundtable, I and many of the other participants (including the project staff) kept saying, “but how is that different from normal assessment?”. That is, many of the principles being generated are principles that you (should) follow for all assessment, not just Web 2.0 in assessment. I can’t really recollect any principles that weren’t either:

  1. general assessment principles, or
  2. principles associated with the affordances of Web 2.0 tools.

Which led me to wonder whether it is possible to generate principles for Web 2.0 use in assessment, or whether it is a case of taking known assessment principles and combining them in contextually appropriate ways with good knowledge of the affordances of the specific Web 2.0 tools.

On reflection, drawing on the view of TPACK, it could be argued that there is specific knowledge here. That is, from a TPACK perspective you might argue that the combination of assessment principles and the technology affordances is itself a complex enough combination to be classed as specific knowledge. From this knowledge, you might well be able to identify something called general guidelines/advice (I’m still reluctant to use principles).

So, perhaps that’s a way forward. Rather than focus on the abstract and general Web 2.0 term, focus on a specific tool (e.g. blogs), specify what the affordances of that tool are, see how you can best combine those affordances with known assessment principles, and derive guidelines from there.

An example from the blogosphere?

As I’m writing this a couple of folk have retweeted Mike Bogle’s blog post from today on blogs, reflection and learner-centered design. What I see in this post is a description of many of the affordances of blogs and how learning/assessment design might need to change to leverage those affordances effectively.

And in this paragraph

What is required though, is a fundamental rethinking of the idea of learning spaces to recognise distributed, learner-centric designs, and then having done that, a look at the available mechanisms – such as RSS and tagging – that will enable courses, programs or instructors to filter out only the content that is relevant to them and re-purpose it in a format that will streamline administrative, assessment, or grading requirements of the course.

Mike describes something similar to my first experiment with using open blogs for individual student reflection.

The rest of his post also reinforces for me one of the drawbacks and one of the difficulties encountered in this experiment. The drawback was that we didn’t design the assignment to encourage connections between students. We focused solely on the individual student blog as a way to make the learning visible to the staff member. I think encouraging (not necessarily requiring) student blog posts to be visible to other students would have opened up unexpected benefits. It may also have been somewhat difficult.

The difficulty in this experiment was that we didn’t change the mindset of the academic staff to meet the affordance of the technology the assignment design was trying to emphasise. The intent was to make student learning/development visible to staff, so they could engage and guide students. But the part-time staff weren’t used to and weren’t paid in ways that encouraged them to engage.

This idea needs more work

Difficulty with principles

In general I have a problem with principles, especially principles that advise (not) to do something but don’t provide any information about why this is a good/bad idea. Principles without the underlying theory or knowledge severely limit the ability of the principles to cross contexts. Each assessment item occurs in a very different context, and a principle that worked in one won’t necessarily translate easily to another. You need the theory/knowledge underpinning a principle so you can decide whether it applies to your context and how it might need to be tweaked.

Then there’s the whole problem of e-learning being new. If, as Bates (2005) suggests, e-learning within higher education is still in its infancy even after 10 years of LMS usage, what does this suggest for the use of Web 2.0? Infancy was a popular adjective for Web 2.0 in assessment at the roundtable.

So, if we’re still learning how to use it effectively, how can you generate principles?

Nutritionists and the chasm

It was my observation that all of the folk there yesterday talking about Web 2.0 were innovators or very early adopters: folk on the left-hand side of Moore’s chasm, shown in the following image.

The majority of academics at universities are on the right-hand side of the chasm. They are significantly different from the innovators: what excites them and what encourages them to change is significantly different. Moore (2002) and Geoghegan (1994) have some specific things to say about the nature of this difference and the problems it poses for technology mediated learning.

Any of the principles from the roundtable are likely to be better suited to the innovators, not the majority. Hence, they are likely not to work.

The analogy I used in this blog post about super-fit nutritionists generating principles for obese people to lose weight still resonates here.

Yet another fad

Twelve years ago I was told that the Director of the Distance Education division at my then institution had said in a meeting something along the lines of “The World-Wide Web will have little or no impact on distance education”. Last year, when the idea of Web 2.0 tools possibly offering an alternative to the LMS as the institutional response to e-learning was raised, another senior person said “That’s just an opinion”. I was an early adopter of both the Web and Web 2.0. I saw the Division of Distance Education take control of web-based learning about 10 years ago, and I fully expect the institution to leap onto Web 2.0 in the next couple of years.

At this stage, I see the use of Web 2.0 as yet another fad in technology mediated learning within universities. The focus will be on project managing the implementation of the technology, with little or no recognition that the entire policy, practice and structure of higher education is actively constraining innovative learning and teaching. For this reason, I don’t expect any significant change or improvement in the pedagogy used in higher education through the introduction of Web 2.0. There will be too much herding of cats, and not enough focus on creating an environment that enables weight loss.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conference of the International Business Schools Computing Association, Baltimore, MD, IBM.

Moore, G. A. (2002). Crossing the Chasm. New York, Harper Collins.

Web 2.0 in assessment – an opening statement?

Tomorrow I am off to attend a “National Roundtable” on Web 2.0 Authoring Tools in Higher Education Learning and Teaching: New Directions for Assessment and Academic Integrity. Quite a mouthful. I’ve spent some of the traveling time today reading the discussion paper and trying to formulate what I might be able to contribute. We’re meant to give an opening statement as the lead off. What will I say?

Social media is promising, but it ain’t everything

First, I’ll use social media instead of Web 2.0. I feel that social media in the true open sense – not that of a wiki/blog locked within an LMS – has some very promising affordances that could enable a lot of interesting learning and teaching – and subsequently assessment – practices.

Following Postman’s ideas, with any technological change there is a trade-off:

This means that for every advantage a new technology offers, there is always a corresponding disadvantage. The disadvantage may exceed in importance the advantage, or the advantage may well be worth the cost.

However, to a large extent I think discussing this trade-off, focusing on how the existing higher education system can harness social media through good practice, ignores the elephant in the room: most of what passes for learning and teaching within universities is not all that good, and most of the practice around supporting learning and teaching through policies, strategies, technologies and processes at best doesn’t support innovation and transformation and at worst actively constrains it. For example,

current institutional policies, including teaching and learning quality measures and lack of resources, are compromising the way subjects are delivered. In some cases academics are discouraged from improving their teaching practice (Tutty, Sheard et al, 2008)

To some extent, the aim of the roundtable is a bit like a convention of nutritionists getting together to come up with principles of good nutrition for weight loss. While there is great benefit in the outcomes of such work, the chances of those outcomes helping the significant percentage of the population that is currently overweight are very, very limited. Given the current nature of the higher education system, and particularly its work around technology mediated learning, the chances of a majority of teaching staff within higher education using the outcomes of this roundtable are similarly limited.

That said, social media tools seem to me to be obviously the next evolution of technology, offering some new and very interesting affordances for learning and teaching that raise some very interesting questions for policy and practice in higher education. Anything that seeks to share knowledge about what has worked and, importantly, what hasn’t is useful and to be applauded, and I look forward to learning from the experience and knowledge of those gathered here today.

To finish on a contrary note, I don’t see much value in the sharing of good practice unless the sharing moves beyond the simple listing of what was done. Why it was done and why it worked (or didn’t) is more important and is necessary in order for good practice to be moved from one context to another.

References

Tutty, J., J. Sheard, et al. (2008). “Teaching in the current higher education environment: perceptions of IT academics.” Computer Science Education 18(3): 171-185.

Business intelligence and PEBKAC

Context

I’m currently sitting in the QANTAS club at Canberra airport, waiting to return home after a week at ANU working on the PhD (being done through ANU). I decided to read a copy of CIO magazine while having brunch. In doing so I came across this article (Rodgers, 2009), titled “Mind your own Business Intelligence”.

This caught my eye because it mentions business intelligence. Business intelligence is very close to (for some, it encompasses) the kind of work we’re starting in the Indicators project.

Business intelligence has dropped from the top 10

What interested me was this paragraph from the article

Perhaps even more surprising, however, was the fact that business intelligence dropped out of the top 10 items on CIOs’ agendas next year. Getting the right information to the right people at the right time for the right cost is what it takes to succeed in today’s business environment. Perhaps business intelligence sliding out of the top 10 is an indication of just how difficult BI is to achieve. BI is a moving target; it’s something that senior IT execs must constantly monitor and review to ensure that their organisation is getting the right information to key employees.

I find this interesting on two fronts:

  1. CIOs are losing interest in business intelligence.
  2. The slight touch of PEBKAC being implied as one of the problems.

PEBKAC

PEBKAC is an acronym devised by IT professionals as a code word for user error, i.e. the stupid user has made another mistake. PEBKAC expands out to Problem Exists Between Keyboard And Chair, i.e. the user. There is a tendency for IT folk to blame the user instead of the technology.

The Wikipedia page on PEBKAC provides an important and interesting alternate perspective

Interface designers dismiss the blame on users for such trivial errors, arguing that a system that induces users to make mistakes is a badly designed one. By not taking human factors into consideration, its specification is incomplete and can make false or untested assumptions about the experience, knowledge and natural limits of their expected audience. Since the design lacks a major source of requirements, the resulting system will not be tailored to its purpose. The misunderstanding of the system that leads to the error is fault of the designer, not the user.

i.e. the problem is the design of the system, not the people using it.

From this perspective, perhaps the problem with business intelligence isn’t that IT execs haven’t been constantly monitoring and reviewing the use of business intelligence to ensure that “their organisation is getting the right information to the key employees”.

Perhaps the technology and the processes IT are using around business intelligence are broken. Perhaps they have not taken “human factors into consideration”, and their specification “is incomplete and can make false or untested assumptions about the experience, knowledge and natural limits of their expected audience”.

While I agree that business intelligence is really difficult, I believe most of its problems stem from the fact that the tools and processes used to implement BI within organisations are broken.

The history of technology mediated learning

In an earlier post I argued that there is a highly visible hype cycle around e-learning/technology mediated learning. I think the same hype cycle exists in broader technology. This hype cycle is more closely aligned with Birnbaum’s than with Gartner’s.

The four (grown from three) phases in the cycle I use are:

  1. Technological spark – some new technology sparks interest, or enables new capabilities, solves an existing problem.
  2. Growing revolution – a collection of folk find the spark important and a “club” grows around the technology building it up as the saviour of all.
  3. Minimal impact – oops, it didn’t really make all that much difference, not many folk used it. Oh well.
  4. Resolution of dissonance – the smart folk who pushed the revolution have to explain away why they were wrong. They can’t blame themselves. So they blame the users.

I think business intelligence may be getting into stage 3. Just like LMSes. But with LMSes, everyone is now going open source. The next fad.

References

Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail. San Francisco, Jossey-Bass.

Rodgers, M. (2009). Mind your own Business Intelligence. CIO. Summer 2009/2010: 4.

Web 2.0 tools in assessment in higher education

Next Monday I will be at the University of Melbourne participating in a “National Roundtable” titled “Web 2.0 Authoring Tools in Higher Education Learning and Teaching: New Directions for Assessment and Academic Integrity”. This is being run as part of an ALTC project of the same name.

The purpose of the roundtable is to “review experience and make joint recommendations for good practice guidelines”. The aim of this post is two fold:

  1. Force me to actually think a bit about what I know/think about this topic.
  2. Encourage others to share, disagree and improve what I think through their own contributions and experience.

I’m going to use social media, rather than Web 2.0.

I’m also traveling today, so I don’t think I’ll finish this entirely today.

A matter of cultures or mindsets

The way I’m currently thinking about this is as a question of cultures or mindsets. There are (at least) two different cultures/mindsets involved here:

  1. The social media culture.
  2. Assessment/learning and teaching within higher education.

For me, good practice around using social media in assessment in higher education is about making sure that these cultures match. I also see this as potentially the biggest hurdle. I’m not sure that the culture of assessment/learning and teaching in higher education can easily match the culture of social media.

This blog post is titled “The impact of corporate culture on social media (An IBM Case Study)” and includes the slides from a presentation at a conference. It also includes the following:

That culture is, in my view, the most overlooked, underestimated factor determining whether social media succeeds or fails in a company. And when corporate culture and social media are pitted against each other, social media will always fail. Always.

Disclaimer: I recognise that there are problems with using culture in this way. e.g. some claim there can be no culture within organisations, others disagree. I’m using it, I guess, in terms of the “way we do things around here”.

Mono, multi and the majority culture

I do recognise that learning and teaching/assessment in higher education is not a mono-culture; there are many and varied cultures that differ in many ways. However, I think there is a fairly large group of folk involved in learning and teaching within higher education who have much in common. For this I turn to Moore’s chasm and Geoghegan (1994): the idea, shown in the figure below, of a chasm between the innovative folk and the pragmatic folk.

The chasm idea is that the folk to the left of the chasm are very different from those to the right. Geoghegan’s (1994) idea is that most of what happens in higher education around instructional technology is designed for the folk to the left of the chasm. This is why most instructional technology sees little adoption, and why what is adopted is of poor quality.

Geoghegan (1994) describes the difference between these two groups via the following table.

Innovators | Pragmatists
Like radical change | Like gradual change
Visionary | Pragmatic
Project oriented | Process oriented
Risk takers | Risk averse
Willing to experiment | Need proven uses
Self sufficient | Need support
Relate horizontally (inter-disciplinary) | Relate vertically (within discipline)

Rather than focus on the culture of the innovators, I’m going to focus on the “culture” of the pragmatists. I do this because they represent by far the majority of people within higher education, and because the innovators, to a large extent, will take care of themselves. Lastly and perhaps most importantly, I increasingly see that management and the Information Technology folk within many universities are, for various reasons, more like the pragmatists than the innovators. As such, they can and do constrain what happens.

Culture/Principles of social media

I’m sure there’s a lot of this stuff out there. But I’m going to use these principles and in particular the remixing of them here. The 10 principles are (quoted from here):

  1. Decentralization is freedom: Freedom enables us to pursue our thoughts and interests in a social space. Thus decentralization is of primary value.
  2. Information wants to be free: The cost of obtaining information is rapidly declining, but still capable of providing and creating value. Freedom is necessary for free information.
  3. Findability is power: Without findability, the information’s ability to provide and create value is greatly diminished.
  4. Karma is real: The more you give, the more you get. You just don’t know what at the point at which you’re giving.
  5. Rules beget rules: At some point, organization happens so that common understanding of interactions are possible.
  6. Economies have currencies: Trade is possible with Karmic infrastructure and rules of engagement.
  7. Communication is blood: Communication is the transport mechanism for information flow.
  8. Immediacy in all things: Acting on new, validated information when appropriate moves things forward more quickly than before.
  9. Context is fluid: Things change more often, as does your frame of reference. Think about the information you have at various points and look at developments along a continuum.
  10. Associations are inherently good.
The mismatch with the culture of the pragmatists

I’m running out of time, so cutting this short. My current position is that the culture of the pragmatic, corporate university and the perceptions of the majority of staff are not a good match with the principles of social media.

For example, decentralisation, free information, karma, communication, immediacy and others don’t fit.

Flight being called.

Drawbacks of good practice and the need for theory

Dave Snowden is not a big fan of best practice and I like his reasoning. The folk sponsoring this roundtable use the “best practice lite” term that seems to be quite popular in higher education at the moment: “good practice”. It seems to remove the strong one-and-only nature of “best”, but it retains the main problem.

The main problem is the attempt to duplicate a practice from one context to another. If these contexts were 100% the same then this might be possible, depending on how much you can learn about the practice. But with most universities being complex systems, this is not possible. Snowden takes the approach of understanding the underpinning theory and using that theory as the basis to design an intervention that makes sense within the new context, rather than repeating what worked in a different context.

This is what underpins my preference for understanding the culture of social media and universities, for understanding the theory.

Twitter back channels, conferences, sessions and engaging the audience

A couple of days ago we ran an experiment around presentations that included the use of a Twitter back channel (hashtag: #eair). The following is a reflection on what that allowed me to do and the implications for conferences.

The question I end up at is “Should conferences have twitter hash tags for individual sessions?”

What I could do

After I finished the presentation I was able to review the tweet stream and see what people had said. This gave me a different and much greater understanding of what some of the audience were thinking and talking about. It was also frustrating because it revealed that I hadn’t effectively engaged some of the audience; they didn’t get “it”. I did end up replying to a number of tweets to expand on ideas or give alternate perspectives. The tweet stream enabled me to extend the conversation.

This ability has given me a better understanding of what did and didn’t work in the presentation. I plan to try to use this for all my presentations.

The problem

It was easy to do this because I used a hash tag that was unique to my presentation – #eair. As I’m the only one using that tag for my local presentations, that should be ok. But conferences are different. I’m just back from EDUCAUSE’09, which used the tag #educause09 for all discussion. There wasn’t a tag for individual sessions, so I couldn’t easily find what people had said about a particular session, including the one I gave.

This absence/difficulty of tracking tweets for a particular session reduces the number of possible conversations.

Ignorance alert

I haven’t really gotten into the Twitter thing as much as others and was pre-occupied during much of EDUCAUSE’09. Is there a simple, well-known solution that I’m missing?

Solutions oriented

I keep being told I should be solutions oriented – apparently I’m cynical/pessimistic. So here’s my initial idea.

Each session at a conference gets a unique number. That session number gets added to the conference hash tag.

For example, ASCILITE’09 is using (I think) #ascilite09. If I were tweeting in session 55, I’d use a tag something like #ascilite09s55 or #ascilite09#55.

The drawback, based on very little testing, is that this would split the conference tweets off into separate session streams. Perhaps the conference would need an aggregator? Perhaps people would just run two different searches? Perhaps the conference creates a Twitter list?

Don’t know enough to say. But the ability for each session tweet stream to be identifiable would be something I’d use.
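One nice property of the scheme above is that every session tag starts with the conference tag, so a single search for the conference tag would still catch the session tweets, and an aggregator could group them back together. As a minimal sketch of that idea (the tag #ascilite09 and session number 55 are just the examples from the post, and the tweets are made up):

```python
import re

CONF_TAG = "#ascilite09"  # example conference tag from the post


def session_tag(conf_tag: str, session_number: int) -> str:
    """Build a per-session tag by appending 's<number>' to the conference tag."""
    return f"{conf_tag}s{session_number}"


def group_by_session(tweets):
    """Group tweets by session tag.

    Tweets carrying only the conference tag are grouped under the plain tag.
    Because each session tag extends the conference tag, one regex with an
    optional 's<digits>' suffix handles both cases.
    """
    groups = {}
    pattern = re.compile(re.escape(CONF_TAG) + r"(s\d+)?", re.IGNORECASE)
    for text in tweets:
        match = pattern.search(text)
        if match:
            groups.setdefault(match.group(0).lower(), []).append(text)
    return groups


# Made-up tweets for illustration
tweets = [
    "Great keynote! #ascilite09",
    "Interesting point about the LMS #ascilite09s55",
    "More on session 55 #ascilite09s55",
]
print(group_by_session(tweets))
```

Running this groups the last two tweets under #ascilite09s55 while the first stays under the plain conference tag, which is roughly the aggregation the conference (or an individual presenter) would want.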

Future of universities – an age old problem

I’m in the midst of preparing some additional slides for a presentation/experiment tomorrow around alternatives to the LMS and the lecture. In part, this presentation connects with the future of universities – and perhaps learning in general – a topic that seems to have gotten increasing airplay in the last year or so, especially in the form of pundits predicting problems with current practice. This post is in part about showing that this is not a new thing, but also about saving some nice quotes for future use.

As part of the work on the slides, I was doing a quick Google on “origins of the lecture”. One of the bits I came across was a book titled In search of the virtual class: education in an information society, in which I found the following quotes leading off chapter 4

Learning processes are lagging appallingly behind and are leaving both individuals and societies unprepared to meet the challenge posed by global issues. This failure of learning means that human preparedness remains underdeveloped on a global scale. Learning is in this sense far more than just another global problem: its failure represents, in a fundamental way, the issue of issues. (Botkin et al, 1979, p9)

I like the idea of the “issue of issues” – placing this problem of education as a fundamental problem for so much else. I also like that this comment was made 30 years ago, something that illustrates one or more of

  • The long-term importance of the idea.
  • The on-going difficulty of doing anything meaningful about this problem.
  • The on-going market that exists for people to profess fundamental flaws within the education system.

A similar quote in the same location

There is only one problem and that is education, all other problems are dependent on this one (President Domingo Faustino Sarmiento – founder of Argentina’s national education system)

This one adds the potential observation that if all you have is a hammer (i.e. you are in the education business), then everything you see is a nail (education is the solution).

The book

The book these quotes come from seems, from my current limited reading, to be one of the predictive books from the mid-1990s arguing for how technology can/will radically improve/change learning and teaching. The following is from page 73

Why is education out of step with society’s needs? Does the problem lie in the way education is administered, the methods of instruction and the content of the curricula? These are the issues that advanced industrial societies focus on as they attempt to find a solution. Our concern is with the extent to which the problem lies with the classroom as a communication system for learning. Our argument is that the classroom is a technology that emulates the way people live and work in an industrial society. It does not relate to the way people will live and work in an information society. Some countries are sufficiently into a transition to an information society for the discrepancy to be obvious.

Part of the argument is that the classroom approach is wasteful of the resources of space and time.