From Library Babel Fish, June 12, 2014

One thing that a faculty member said a few weeks ago when we were discussing threshold concepts may have actually been a threshold concept for me. These concepts are the ones that are troublesome, irreversible, integrative, and transformative. It takes you a while to wrap your head around them, and they may disturb the way you think about the world, but they can also change the way you think in some profound way. This new insight of mine may not seem all that profound, but it put a burr under my intellectual saddle and is really making me think. She said that Google seems to flatten knowledge.

Wow. Yeah. Wait, what?

This clicks with my concern that, in trying to emulate the convenience and simplicity of Google and Amazon, libraries are (once again) putting too high a value on volume of information and too little on curation. We have told vendors that we want as much full text as possible in the databases we subscribe to, which has made it harder, not easier, for undergraduates to use a database like Academic Search Premier and find articles they can understand that have been published in journals whose titles their teachers will recognize. Librarians tend to assume that access to more information is always a good thing, and it’s an understandable position to take. Access is important. It is also understandable that librarians have lost faith in their own ability to curate wisely. Most academic libraries in the past bought large numbers of books that nobody ever used – 40 percent isn’t unusual. When money was flush, use was irrelevant; the size of a collection mattered. But now, that kind of more-the-merrier collection building in print seems like both a filter failure and a gigantic waste of resources. If the same percentage of the digital resources we subscribe to aren’t used, at least they aren’t taking up space. Subscribing to bundles also relieves us of the time that curation takes and the risk that we will become censors, choosing not to include in our collections things that are on the fringe, not mainstream. If someone else does the curation, that’s their problem.

But it also snaps into place with my bemusement about the way we make our collections discoverable. When we ported the contents of card catalogs into databases, we kept the same data structures. We could search by authors, titles, and subjects, and we included bits of description and local location information. The same thing happened in the shift from indexes and abstracts to database retrieval. The only index that did things differently was the Science Citation Index. That tool, originally printed in tiny typeface on tissue-thin paper, tried to make knowledge discoverable through the citation network – tying together work that was related because authors cited or were cited by other authors. The Web of Science reproduced that as a database, but it only became a simple and highly visible function when Google Scholar was built out of publishers’ records and linked citations. What’s funny about this is that we have always known how fundamental the web of citations is to researchers for curating and showing relationships among texts. Why did we never think of building our systems around those links? We somehow assumed that the literature indexed itself (to use Stephen Stoan’s memorable phrase) in an obvious and self-contained way and that the library’s catalog was a finding aid for local collections. We didn’t change that assumption when we went digital.

The current vogue for discovery layers – licensed software maintained with a great deal of local labor by librarians that allows library users to search both the catalog and licensed databases all at once – is at least in part an attempt to flatten the library’s collection of knowledge just as Google does. Rather than search multiple places, you can search once and get a load of links to books and articles and other materials accessible through the local library, though sometimes only through the interlibrary loan service. Each library has to decide which software package will work best for it and how to set it up without really knowing how the system works, because (like Google’s algorithm) that’s a trade secret. But unlike Google, we have to pay a lot and put a lot of staff hours into customizing it for a local collection.

This is a far cry from what Vannevar Bush imagined as the future for information management. In 1945 he published “As We May Think,” in which he described the memex, an imaginary machine that could store scientific literature that scientists would mark up with “trails of association.” Those trails could be shared and followed by others. Though this is an amazingly prophetic vision of the way the Web would enable us to create trails through links, the Web doesn’t work that way anymore. The Internet that was once a publicly-supported network of government and university nodes became a publicly-supported platform for commerce, and in that transformation the relationship between web users and the platforms they used changed, flattened, became both a shopping mall and a surveillance system. We curate and share on the Web like mad, but through platforms that encourage us to be self-branding entrepreneurs who are at once consumer, producer, and product.

Academic libraries, likewise, have left it up to publishers and aggregators to curate and control access to the record of scholarship, with knowledge produced in something of a scholarly sweatshop, ideas as piecework. And academic librarians everywhere have worked hard to ensure that their libraries come as close as possible to being an information Wal-Mart. Access has been transformed into consumer choice, without enough thought about the overall health of the knowledge ecosystem. That’s what it’s like when the world of knowledge is flat.

But knowledge isn’t flat. It’s a set of ongoing conversations among human beings who share and build on and deconstruct and quibble over what to make of the world. We consider it ethical to trace those conversations in our scholarship and are pleased when our contributions are acknowledged and built on. It’s a shame that when libraries devise discovery systems they’re adapting to Google but not to the memex. Discovery layers are great if you’re approaching research as a consumer – I need five peer-reviewed articles and two books on the subject of neoliberalism. Done in five minutes! But flattening the world that way makes it look as if nothing is connected, as if sources are things produced through some mysterious process, selected and put in your shopping basket for checkout.

Is there some other way that libraries could enable discovery that is less flat, that helps make the communities of inquiry and the connections between ideas easier to follow? Is there a way to help people who want to join those conversations see the patterns and discern which ideas were groundbreaking and significant and which are simply filling in the details? Or are curation and connection too labor-intensive and inefficient for the globalized marketplace of ideas?


Babel Fish Bouillabaisse Copyright © 2015 by Barbara Fister is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.