CILIP West Midlands

Counter Culture: Reshaping libraries

Professor Derek Law

University of Strathclyde

[This is a summary of a lecture given at the University of Worcester in May 2008]

It is some seven years since Marc Prensky first proposed the division between digital natives and digital immigrants. Since then a plethora of reports and studies has talked about Generation Y, Millennials, the Google generation and so on. However, where Prensky differs is in proposing that this is not simply a generational shift but a quite fundamental discontinuity of the sort that happened when the motor car was invented, leaving the horse-related industries to adapt as players in a niche market, to change to deal with the new technology, or simply but slowly to disappear. The evidence does then appear to be mounting, in reports from CIBER, JISC, OCLC and others, that such a fundamental change is indeed taking place. For this new group, which power-browses, texts and relies on images, reading and writing become optional lifestyle choices, not a basic necessity. This group is aliterate, relying on Google or Yahoo to acquire just enough information for the task in hand, then cutting and pasting rather than reading and absorbing.

Prensky’s view of content is perhaps even more chilling:

“It seems to me that after the digital ‘singularity’ there are now two kinds of content: ‘Legacy’ content (to borrow the computer term for old systems) and ‘Future’ content. ‘Legacy’ content includes reading, writing, arithmetic, logical thinking, understanding the writings and ideas of the past, etc. – all of our ‘traditional’ curriculum. It is of course still important, but it is from a different era. Some of it (such as logical thinking) will continue to be important, but some (perhaps like Euclidean geometry) will become less so, as did Latin and Greek. ‘Future’ content is to a large extent, not surprisingly, digital and technological. But while it includes software, hardware, robotics, nanotechnology, genomics, etc., it also includes the ethics, politics, sociology, languages and other things that go with them.”

At the same time content is moving from the authoritative to the consensual. The wisdom of crowds is a much-used phrase, and tools as varied as Wikipedia and OpenWetWare reflect this different approach. Oddly, libraries have shied away from dealing with the booming growth of born-digital content within their organisations, preferring to haggle with publishers over saving minor sums. No grand unifying theory of e-content collection building has emerged. We have played with the tools and methods of digitisation, building cabinets of curiosities rather than coherent collections.

Of course, the end of libraries has been predicted before, and yet we are still investing hundreds of millions in new or refurbished buildings. This statement of faith may counterbalance the doomsayers – or these buildings may simply prove to be the final museums of the book. It is much easier to define – or, better, redefine – the skills needed by the profession in facing this new world.

Firstly, we do need to tackle the issue of born-digital material. Every organisation has seen a mushrooming growth of everything from raw research data to e-mail, from blogs to wikis. Yet we have no real idea of who is meant to select material for preservation, who will set the standards, who will manage these digital assets and protect their intellectual property, and who will manage the access and curation of our organisational outputs. The classic skill of the organisation of knowledge is tailor-made for taking control of this.

One area which has been exploited, but perhaps only fitfully, is trust metrics. The library – and librarians – hold a position of trust as unbiased selectors of material, neutral in our advice on what is useful and relevant. In the e-world trust tends to rely on brand recognition, which in turn revolves around Google and Yahoo. The brand associated with the library is still that of books. However, the opportunity undoubtedly exists to make more of our role in managing trust metrics. Call it quality assurance, kitemarking or whatever, but we have an underrated skill which should be exploited more fully.

Finally, and perhaps obviously, there is training. Some of this will rest on information literacy and some on how to use particular products. Law’s Second Law states that “User Friendly Systems Aren’t”. As soon as some piece of software or a database arrives without a manual, offering only on-line help, we can rest assured that training is needed. Commercial producers sell on difference, not similarity, so everything from search engines to databases is different. It may be that we are moving to more intuitive interfaces, but to gain maximum benefit users need to be shown how to tweak resources to best effect. The trick will be to work out the best time to provide this help and support to the user.

Perhaps the greatest danger we face is complacency, coupled with problem avoidance. It is easy to argue that libraries have existed for 4,000 years and have survived every major change in technology and revolution in society, so we just need to keep on coping and things will turn out all right. Or we can continue to focus on building new buildings and managing digitised material and its licensing from commercial producers. Or we can take the bold step of starting to focus on the born-digital material whose growth rate dwarfs anything previously seen in information terms, develop a view of how we will select it, curate it and make it accessible, and couple that with advising users on which information sources are of worth and quality and helping them to use those resources to maximum advantage.