Cataloguing coding

I am not a trained programmer, coding is not part of my job description, and I have little direct access to cataloguing and metadata databases at work beyond normal catalogue editing and talking to the systems team, but I thought it might be worth making the point about how useful programming can be in all sorts of little ways. Of course, the most useful way is in gaining an awareness of how computers work: appreciating why some things might be trickier than others for the systems team to implement, seeing why MARC21 is a bastard to do anything with even if editing it in a cataloguing module is not really that bad, and getting a sense of how the new world of FRBR/RDA/RDF is going to be glued together. However, some more practical examples that I managed to cobble together include:

  • Customizing Classification Web with Greasemonkey. This is a couple of short scripts using Javascript, which is what the default Codecademy lessons use. Javascript is designed for browsers and is a good language to start with, as you can do something powerful very quickly with a short script or even a couple of lines (think of all the 90s image rollovers). It’s also easy to have a go if you don’t have your own server, or even if you’re confined to your own PC.
  • Aleph-formatted country and language codes. I wrote a small PHP script to read the XML files for the MARC21 language and country codes and convert them into an up-to-date list of preferred codes in a format that Aleph can read: basically a text file which needs line breaks and spaces in the right places. It is easy to tweak or run again in the event of any minor changes, though I don’t have this publicly available anywhere. PHP is not the most elegant language but is relatively easy to dip into if you ever want to go beyond Javascript and do fancier things, although it can be harder to get access to a server running PHP.
  • MARC21 .mrc file viewer. I occasionally need to look quickly at raw .mrc files to assess their quality and to figure out what batch changes we want to make before importing them into our catalogue. This is an attempt to create something that I could copy and paste snippets of .mrc files into for a quick look. It is written in PHP and is still under construction. There are better tools for doing much the same thing, to be honest, but coding this myself has had the advantage of forcing me to see how a MARC21 file is put together and to realise how fiddly it can be (there is a sketch of the basics below). Try this with an .mrc which has some large 520 or 505 fields in it (there are some zipped ones here, to pick at random) and watch the indicators mysteriously degrade thereafter. I will get to the bottom of this…
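
For anyone curious about what “fiddly” means here: a raw MARC21 (ISO 2709) record is a 24-character leader, then a directory of 12-character entries giving each field’s tag, length, and starting position, then the field data itself, with everything measured in bytes. A minimal sketch of reading that structure in PHP might look like the code below. This is not the code behind my viewer, the sample.mrc filename is only for illustration, and it glosses over character encoding, which is one plausible culprit for the degrading indicators: if lengths end up counted in characters rather than bytes, long UTF-8 520 or 505 fields will throw every subsequent offset out.

<?php
// Minimal sketch of pulling the fields out of raw MARC21 (ISO 2709) records.
// Not the code behind the viewer above; note that all offsets are byte counts.

function marc_fields($record) {
    $leader    = substr($record, 0, 24);
    $base      = (int) substr($leader, 12, 5);         // base address of data
    $directory = substr($record, 24, $base - 24 - 1);  // directory ends with a field terminator
    $fields    = array();
    foreach (str_split($directory, 12) as $entry) {
        $tag    = substr($entry, 0, 3);
        $length = (int) substr($entry, 3, 4);
        $start  = (int) substr($entry, 7, 5);
        // field data is measured in bytes from the base address
        $fields[] = array($tag, substr($record, $base + $start, $length));
    }
    return $fields;
}

$raw = file_get_contents('sample.mrc');                // illustrative filename
foreach (explode("\x1d", $raw) as $record) {           // records end with a record terminator
    if (strlen($record) < 24) continue;
    foreach (marc_fields($record) as $field) {
        list($tag, $data) = $field;
        echo $tag . ' ' . str_replace("\x1f", ' $', rtrim($data, "\x1e")) . "\n";
    }
}
?>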

The following examples are less useful for my own practical purposes but have been invaluable for learning about metadata and cataloguing, in particular RDF/linked data. I was very interested in linked data when I first heard about it, and being able to actually try something out with it (even if the results are not mind-blowing), rather than just read about it, has been very useful. Both are written in PHP and further details are available from the links.

Nothing to do with cataloguing, but what I am most proud of is this, written in Javascript: Cowthello. Let me know if you beat it.

Update: Shana McDanold also wrote an excellent post on why a cataloguer should learn to code with lots of practical examples.

SPARQL recipes for bibliographic data

One of the difficulties in searching RDF data is knowing what the data looks like. For instance, finding a book by its title means knowing something about how a dataset has recorded the relationship between a book and its title. There is no real standard for publishing MARC/AACR2-style bibliographic data as RDF: libraries publishing RDF seem to be approaching this largely individually, although they are using many of the same vocabularies (dc, bibo, etc.). This was one reason why I wanted to create Lodopac: to present some kind of interface so that searchers didn’t need to know these different models but could start to explore them. Below are the SPARQL recipes for the different search criteria I used for the BNB and the Cambridge University Library datasets, so they can be compared, re-used, or corrected. All examples use prefixes, which are defined anew in each example. The examples are of course fragments: they show only minimal SELECT and WHERE clauses rather than complete, ready-to-run queries.

By the way, for an excellent SPARQL tutorial with ample opportunity to play as you go along, do have a look at the Cambridge University Library’s SPARQL tutorial. It also gives clues to the way their data is structured. For the BNB, the British Library’s data model (PDF) is the thing to consult: it is not nearly as scary as it looks at first, and is incredibly helpful.

Author keyword search

This would be relatively straightforward (the unavoidable regular expression being the main complication) were it not for the fact that the traditional author/editor/etc. of bibliographic records can be found in dc:creator as well as dc:contributor, which necessitates a UNION. The BNB uses foaf:name:

PREFIX dc: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?book

WHERE {

{?book dc:creator ?author} UNION {?book dc:contributor ?author} .
?author foaf:name ?name .
FILTER regex(?name, "author", "i") .

}

Cambridge uses much the same recipe except that it uses rdfs:label instead of foaf:name:

PREFIX dc: <http://purl.org/dc/terms/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?book

WHERE {

{?book dc:creator ?author} UNION {?book dc:contributor ?author} .
?author rdfs:label ?name .
FILTER regex(?name, "author", "i") .

}

Title keyword searches

This is more straightforward and is in fact the same for both the BNB and Cambridge University Library:

PREFIX dc: <http://purl.org/dc/terms/>

SELECT ?book

WHERE {

?book dc:title ?title .
FILTER regex(?title, "title", "i") .

}

Date of publication (year)

I imagined this one being simple, and for Cambridge University Library it is. The BNB, however, took some unravelling, as they have modelled publication as an event related to a book, with the various elements of publication then related to that event. So, for the BNB we have this:

PREFIX bibliographic: <http://data.bl.uk/schema/bibliographic#>
PREFIX event: <http://purl.org/NET/c4dm/event.owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?book

WHERE {

?book bibliographic:publication ?pub .
?pub event:time ?year .
?year rdfs:label "date" .

}

By contrast, Cambridge University Library has it in one line:

PREFIX dc: <http://purl.org/dc/terms/>

SELECT ?book

WHERE {

?book dc:created "date" .

}

ISBN

As an identifier, the ISBN is relatively straightforward in both models, although care must be taken with the BNB, as 10- and 13-digit ISBNs are treated as separate properties; the following assumes that the search should cover both:

PREFIX bibo: <http://purl.org/ontology/bibo/>

SELECT ?book

WHERE {

{?book bibo:isbn10 "isbn"} UNION {?book bibo:isbn13 "isbn"} .

}

For Cambridge University Library, also using the bibo ontology, this is:

PREFIX bibo: <http://purl.org/ontology/bibo/>

SELECT ?book

WHERE {

?book bibo:isbn "isbn" .

}

Conclusion

I didn’t set out to provide ground-breaking conclusions. However, it is remarkable how different the data models formulated by similar organisations for the same type of data can be. The real question is whether this is a good thing, a bad thing, or doesn’t really matter. Will it need to be standardised? My feeling is probably not. I think the days of monolithic library standards are probably now gone. I wonder, for instance, if there will ever be a single MARC22 (or whatever you like to call it), and I doubt RDA will ever completely replace AACR2 in the way we imagine. What will emerge, I suspect, will be a soup of various standards and data models, some of which will be more prevalent than others. One thing I picked up from various linked data talks is that information has frequently been published and then re-used in ways that the issuers never imagined; if that is the case, the precise modelling and format is probably not as important as the fact that the data is of good quality and intelligently put together. The BNB and Cambridge University Library models are clearly quite different but are quite capable of being mapped and used despite this.

If there are any other bibliographic SPARQL endpoints out there, I would like to include them in a future version of the Lodopac search. Do let me know if you come across any.

More mundanely, do say if there are errors in my SPARQL recipes or if there are ways they could be done more efficiently.

Customizing Classification Web with Greasemonkey

Classification Web is ace, but there are a couple of things about the interface that annoy me and, in one colleague’s case, seriously put him off using it, in particular:

  • The opening of a new tab/window when you click on the MARC view for a subject or name.
  • The confusing menu. We don’t use LCC or DDC, and the browse options don’t really add much, so we only really need two options: Search LC Subject Headings and Search LC Name Headings.

I managed to work out a simple way of modifying how Classification Web works on Firefox using the Greasemonkey add-on and a couple of simple scripts, all of which is quick and easy to install:

  1. Install Greasemonkey: https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/
  2. Make sure the monkey in the bottom-right corner is happy and colourful. Click on it if not.
  3. If you want to prevent the MARC view opening a new window, install the classweb_no_new_window script by going to http://www.aurochs.org/zlib/js/userjs/classweb_no_new_window.user.js and clicking on the Install button (a rough sketch of what such a script looks like follows these steps).
  4. If you want to reduce the main menu, install the classweb_prune_menu script by going to http://www.aurochs.org/zlib/js/userjs/classweb_prune_menu.user.js and clicking on the Install button.
  5. Reload/refresh Classweb if it’s still open and it should work.
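
For the curious, a Greasemonkey user script is just a short piece of Javascript with a comment block at the top telling Greasemonkey which pages to run it on. The following is a rough sketch of the approach a no-new-window script can take, not the actual script from aurochs.org, and the @include pattern is a guess at the Classification Web address, so adjust it to whatever you see in your browser:

// ==UserScript==
// @name        classweb_no_new_window (sketch)
// @namespace   http://www.example.org/userscripts
// @include     http://classificationweb.net/*
// ==/UserScript==

// Stop links opening new tabs or windows by removing their target attributes
var links = document.getElementsByTagName('a');
for (var i = 0; i < links.length; i++) {
    links[i].removeAttribute('target');
}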

If you want to turn Greasemonkey off altogether, click on the monkey so he’s sad and grey. If you want to stop individual scripts, right click on the monkey, click on Manage User Scripts, select a script from the list, and un-tick the Enabled box in the lower left corner.

These instructions were tested on Firefox 3.5.3 although I imagine they would be fine on any recent version of Firefox. I would be interested to hear anything confirming or undermining that assertion.

If you’re happy to play around, these scripts can be further altered. In particular, you can choose which menu items appear in the pruned menu script:

  1. Right click on the monkey
  2. Click on Manage User Scripts
  3. Select classweb_prune_menu from the list
  4. Click on Edit (you will probably have to select a text editor at this point)
  5. Edit the list of pages under the line var menu_items_to_keep = Array (. Enter each page you want to appear on the menu on a separate line in quotes, with a comma at the end of every line except the last. The menu item must appear exactly as it does on the Classification Web menu, including capitals. E.g., the default setup looks like this (a sketch of how the script uses this list appears after these steps):
    var menu_items_to_keep = Array ( // end each line with a comma except the last line
      "Search LC Subject Headings",
      "Search LC Name Headings"
    );
  6. Save the file, and reload Classification Web.
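
As a rough idea of how the script uses that array (again, a sketch of the approach rather than the script itself, with the way it finds menu entries guessed): it walks through the links on the menu page and hides any whose text is not in menu_items_to_keep.

// Sketch only: hide menu links whose text is not in the list of items to keep
var menu_items_to_keep = Array (
    "Search LC Subject Headings",
    "Search LC Name Headings"
);
var links = document.getElementsByTagName('a');
for (var i = 0; i < links.length; i++) {
    var label = links[i].textContent.replace(/^\s+|\s+$/g, '');   // trim whitespace
    if (menu_items_to_keep.indexOf(label) == -1) {
        links[i].parentNode.style.display = 'none';               // hide the whole menu entry
    }
}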

If anyone else finds this useful or can think of more customizations let me know.

RDA as a closed standard

Resource Description and Access (RDA), the new bibliographic standard to replace AACR2, was released in 2010 on the web as a closed standard sitting behind a paywall. This really worries me. I strongly believe it should be an open standard.

What do I mean by closed?
By closed I basically mean that you have to pay or subscribe to access it. In many ways, this is not dissimilar to AACR2. For decades, libraries (and individuals) paid for various editions of AACR2, which has always been primarily a print product, as well as for various updates when it changed to looseleaf format. Recently it has also been available on the web via Cataloger’s Desktop. RDA is primarily a web product via the RDA Toolkit, although a concession was eventually made to release it in print as well.

An open standard would be one that, according to Wikipedia, “is publicly available” even if it “has various rights to use associated with it”. This would be one which any cataloguer, librarian, or, crucially, non-librarian could see and benefit from. A practical definition for the purposes of this post would be a standard I could go and look at right now without a subscription, apply without hindrance, assess with ease, and, ideally, build on without restriction. RDA is not open, although, to be fair, one part of RDA, the element set and vocabularies, has been released openly.

Other open standards
Open (publicly available) standards are quite common. Some well-known examples:

The following are open although I’d have a lawyer to hand if you wanted to do anything with them:

Some closed ones for comparison:

These are of course more subjective lists than they look, but you get the idea. The closed list was actually bigger until, upon examination, I found that the JPG, GIF, and even MS Office standards are publicly readable, even if I’m not sure what more you could legally do with them. I’d be happy to add more to the closed list to balance things out a little.

Why is RDA not open?
Money. This is a delicate matter that I don’t want to delve into too much, although it is obviously central to the openness of the standard. It’s also hard to talk about without appearing to make wild assertions, and I hope I haven’t been unfair. I’ve heard Alan Danskin of the JSC explaining that they’d thought about releasing RDA openly but that they had to cover costs. I’m not exactly sure what the costs of production were, although presumably they included expenses, staff costs, and the production of the product itself. The last is, I think, unfortunate, as I would like to have seen a far simpler publication of RDA without all the bells and whistles, login barriers, and the need to learn a new interface as well as a new standard. Compare the HTML4 standard, which is a set of simple HTML documents with normal links: I don’t need to learn how to use that. Or, come to that, the MARC21 site. I wonder how much of the fee goes towards setting up and maintaining the RDA Toolkit platform.

With my tin foil hat on, I also wonder how far the fee is needed to restore revenue to the co-publishers after AACR2’s years of unrevised abeyance.

Why does it matter?
It matters because RDA (and with it all the high-quality traditional cataloguing techniques) will not be widely used without being open. I think you can divide the potential RDA userbase as follows:

  1. Libraries with enough money to switch to RDA
  2. Libraries without enough money to switch to RDA
  3. Non-libraries dealing with metadata

Those in Group 1 will buy RDA, but some libraries (Group 2) will not see the benefit for the costs of conversion and training, let alone the costs of subscription. For ‘traditional’ cataloguing to thrive, therefore, we need to involve Group 3. However, those in Group 3 will not even be able to have a look at RDA to see if it meets their needs. I think RDA will be lucky to retain the same user base as AACR2, let alone break into new areas and influence the way other metadata work is carried out. Those in the metadata community who, I suspect, have already been put off by AACR2 are unlikely even to try looking at RDA if it involves forking out for a subscription.

I recently sat in a room with about 15 or so people mostly involved in metadata for institutional repositories and the like. During some discussions they flagged up two problems they were having: establishing a consistent form of name, and a standard set of data elements. I asked myself, would I recommend RDA to them to help solve these problems? Even if I thought it met their needs, could they even have a look to see if it did? No. They will either come up with their own solution or look elsewhere for it, which is already what they have been doing. I can’t see us taking more people with us, just a proportion of the people already using AACR2.

Openness also matters because having a closed standard doesn’t reflect terribly well on librarianship in general. I have a friend in IT who Laughed Out Loud* when I said the new library metadata standard was behind a paywall. In the new world of openness, where even Microsoft loosely adheres to web standards, traditionally closed governments are leading the charge to release more data, and the world has been transformed by the open standards of the web, are we to follow The Times behind a paywall? Personally I feel libraries, librarians, and library data should be at the forefront of openness, not grudgingly following behind or not following at all.

What could be done?
This is the nub of the matter. I’m no marketing expert, and maybe I’m naive and there is nothing that can be done. However, working on the assumption that all that needs to be done is to break even and pay the costs of production for RDA, I would suggest the following ideas for a start:

  • Make a flashy web product anyway and charge lots more for it. Many more well-off libraries would pay for a product like the Toolkit if it’s good enough.
  • There is a need for a more accessible version or versions of RDA, e.g. just for books or in a more convenient format like, say, the Chan books on LCSH or the green editions of AACR2. The co-publishers could fulfill this need which I imagine would be easily done by re-using the data they already have.
  • Explanatory books. There are a number of these on the market or on the way already. The co-publishers could publish an official companion.
  • Consultancy and training. There is going to be a big demand for this soon in any case.
  • Involve more organisations in the drafting and publishing of RDA to share the costs, e.g. publishers, LMS vendors, commercial metadata suppliers, other metadata initiatives. I think it would be a positive and pragmatic move to have these parties on board anyway. They would be more likely to use the high quality standard produced and we would be more likely to be using metadata that meets all our purposes.

See Also
I notice a post covering some of these issues by carolslib from a few days ago. From the Catalogs of Babes also has a similar post, RDA: why it won’t work, from a few weeks ago which much more succinctly makes some of the same points:

Many librarians are balking at the cost of implementing RDA, I think rightfully so, although not for the same reasons. I’m not bitching about it because it’s unaffordable for smaller libraries, or because it’s a subscription rather than a one-time printed book cost (although I think those are valid points). I’m bitching because putting a dollar amount on something, now matter how low it is, will stop people from using something, especially if there’s a free alternative. In this case, I see the free alternative as ‘ignoring rules altogether and/or making you your own standards.’ Requiring a price makes adhering to standards–a key value-added service of libraries and librarians–inaccessible. Which is pretty ironic, considering that libraries are supposed to be all about access. We’re all proactive about offering access to our patrons, but we can’t extend that same philosophy to ourselves, to help us do a better job??

[Updated 23 November 2012] Terry Reese asks Can we have open library standards, please? Free RDA/AACR2.

* He literally LOL’d, although no ROFLing took place, admittedly.

In Our Time booklist

I have written a script which takes an unstructured reading list on the BBC’s In Our Time website, searches the British National Bibliography (BNB) using bibliographica for the books on the list, and returns structured metadata for the records it found.

This script was written in response to an idea raised by psychemedia for the Open Bibliographic Data Challenge: the BBC “In Our Time” Reading List:

The BBC “In Our Time” radio programme publishes suggested recommending reading in the programme data in an unstructured and citation style way: author, title (publisher, year), with what looks to be conventional character string separators between references (at least on the pages I looked at).

The idea is to extract and link suggested readings for the In Our time programmes to open, structured bibliographic data. This would make the In Our Time archive even more useful as an informal (open-ish) educational resource, especially as academic libraries start to release data relating to books used on courses. (So for example, this approach might help provide a link from a course to a relevant In Our Time broadcast via a common book.)

I was drawn to this one as I like the idea of turning unstructured data into structured data: I have, for example, had some previous fun converting HTML pages into RSS feeds (e.g. CILIP Lisjobnet, Big Brother). I think something similar for any reading list (e.g. a Word document produced by a lecturer) would be an interesting idea.

The program is written in PHP and is designed to be fired from a Javascript bookmarklet on a page of the In Our Time site, or by appending the In Our Time URL to the end of the URL for this page: http://www.aurochs.org/inourtime_booklist/inourtime_booklist_v1.php?. For example, to use it on the page for The Mexican Revolution (which I used a lot in testing), add the URL http://www.bbc.co.uk/programmes/b00xhz8d to produce http://www.aurochs.org/inourtime_booklist/inourtime_booklist_v1.php?http://www.bbc.co.uk/programmes/b00xhz8d.
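
The bookmarklet itself is not reproduced here, but something along these lines (a sketch rather than necessarily the exact code) does the job of sending whichever In Our Time page you are looking at to the script:

javascript:location.href='http://www.aurochs.org/inourtime_booklist/inourtime_booklist_v1.php?'+location.href;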

The script works through the following steps:

  1. Set up ARC2 to enable SPARQL searching and RDF processing
  2. Extract Further Reading section
  3. Separate out Raw Data for each book
  4. Determine pattern used in citation then extract Basic Data, e.g. author, title, article, publication, using regular expressions
  5. Further refine elements to make searching easier, i.e. one surname for author, only title proper for titles
  6. Construct a SPARQL Query using author surname and title regular expressions pre-filtered for speed by a significant word using bif:contains
  7. Filter Hits by date of publication
  8. Obtain and display metadata from BNB
More details of these steps are below:

1. Set up ARC2 to enable SPARQL searching and RDF processing

ARC2 is a simple-to-use system for using RDF and SPARQL in PHP. I had previously played with it here when experimenting with creating my own RDF. The Sandy site uses SPARQL to populate the See Also sections.
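
By way of illustration, getting ARC2 talking to a remote SPARQL endpoint only takes a few lines. This is a minimal sketch rather than the script’s own setup code, and the include path and endpoint URL are illustrative:

<?php
// Minimal sketch: point ARC2 at a remote SPARQL endpoint and get rows back
include_once('arc2/ARC2.php');                           // illustrative path to the ARC2 library

$config = array('remote_store_endpoint' => 'http://bnb.data.bl.uk/sparql');
$store  = ARC2::getRemoteStore($config);

$query = 'SELECT ?book WHERE { ?book <http://purl.org/dc/terms/title> ?title } LIMIT 5';
$rows  = $store->query($query, 'rows');                  // 'rows' returns a simple array of results

if ($errors = $store->getErrors()) {
    print_r($errors);
} else {
    foreach ($rows as $row) {
        echo $row['book'] . "\n";
    }
}
?>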

2. Extract Further Reading section

A simple regular expression identifies the div in the HTML code that contains the reading list, which enables the next stage of the script to look for individual books.

3. Separate out Raw Data for each book

Another regular expression pulls out the paragraphs containing books and puts them in an array. You can see this by viewing the Raw Data.
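
A sketch of the sort of thing steps 2 and 3 do is below. The div class and paragraph structure are guesses at the In Our Time markup rather than the real thing, and a production version would need to be rather more defensive:

<?php
// Sketch: grab the Further Reading block, then split it into one raw string per book
$html = file_get_contents('http://www.bbc.co.uk/programmes/b00xhz8d');

// Step 2: isolate the reading-list div (the class name here is an assumption)
if (preg_match('/<div[^>]*class="[^"]*reading[^"]*"[^>]*>(.*?)<\/div>/si', $html, $match)) {
    $reading_list = $match[1];

    // Step 3: pull out each paragraph as the Raw Data for one book
    preg_match_all('/<p[^>]*>(.*?)<\/p>/si', $reading_list, $paragraphs);
    $raw_books = array_map('strip_tags', $paragraphs[1]);

    print_r($raw_books);
}
?>
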
4. Determine pattern used in citation then extract Basic Data, e.g. author, title, article, publication using regular expressions

As the In Our Time site does not use a single standard form of citation, the script has to try to determine, using regular expressions, which of several possible patterns a citation follows, and then extract the correct bits of data. This only works as well as the patterns can be identified, which effectively means looking at as many In Our Time pages as possible. This is one area that would certainly reward more work. It also shows how difficult it would be to extrapolate this into a script that could read any citation. The script currently handles a handful of citation patterns, each identified with a number (1, 2, 3, 4, 15, 5). If you look at the Book Data for a particular book you will see the citation style number given. The regular expressions capture author, title, and publication.

The author information in citations on In Our Time is unpredictable: sometimes surnames come first, sometimes last. The citation patterns take care of this where possible and try to extract one significant name.
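
To give a flavour of what the pattern matching looks like, here is a sketch with a couple of simplified regular expressions. These are illustrations, not the patterns the script actually uses, and the example citations in the comments are made up:

<?php
// Sketch: try citation patterns in turn against one raw book string and
// pull out author, title and publication details
function parse_citation($raw) {
    $patterns = array(
        // e.g. "A. Author, The Book Title (Publisher, 2005)"
        1 => '/^(?P<author>[^,]+),\s*(?P<title>[^(]+)\((?P<publication>[^)]+)\)/',
        // e.g. "The Book Title by A. Author (Publisher, 2005)"
        2 => '/^(?P<title>.+)\s+by\s+(?P<author>[^(]+)\((?P<publication>[^)]+)\)/i',
    );
    foreach ($patterns as $style => $pattern) {
        if (preg_match($pattern, $raw, $m)) {
            return array(
                'style'       => $style,
                'author'      => trim($m['author']),
                'title'       => trim($m['title']),
                'publication' => trim($m['publication']),
            );
        }
    }
    return false;   // no pattern matched: a new pattern needs adding
}
?>
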
5. Further refine elements to make searching easier, i.e. one surname for author, only title proper for titles.

The script removes things like “(ed.)” from the author, which would obviously throw off a catalogue search. Subtitles (everything after and including semi-colons) are also removed from titles to lessen any chance of variation and lost matches.
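
Something like the following is the sort of tidying involved. It is a sketch only: taking the last word as the significant name, for instance, is an assumption, and the real script is a little cleverer than this:

<?php
// Sketch: tidy up author and title before they go anywhere near a SPARQL query
function refine_author($author) {
    $author = preg_replace('/\s*\((ed|eds|trans)\.?\)/i', '', $author);   // drop "(ed.)" and friends
    $words  = preg_split('/\s+/', trim($author));
    return end($words);                     // keep one significant name (here, simply the last word)
}

function refine_title($title) {
    // drop subtitles: the post mentions semi-colons; colons are stripped too here as an assumption
    return trim(preg_replace('/\s*[;:].*$/', '', $title));
}

echo refine_author('A. N. Author (ed.)');               // "Author"
echo "\n" . refine_title('The Book Title; a subtitle'); // "The Book Title"
?>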

6. Construct a SPARQL Query using author surname and title regular expressions pre-filtered for speed by a significant word using bif:contains

Constructing the SPARQL query was the trickiest part. Ignoring the various standard prefixes pilfered from the standard example, the most important part is the title search. There are three unsatisfactory options:
  • Match the extracted title directly to a dc:title. This doesn’t work, as the cited title is unlikely to be exactly the same in all matters of wording, spacing, punctuation, etc.
  • Use bif:contains for keyword searching, as used in the BNB SPARQL example. This is certainly quick, but has a number of drawbacks: it can only be used once, for a single keyword (either of the two significant words in The history of Mexico, for example, will produce a huge number of hits). It is also not standard SPARQL. I was happy to overlook this if it worked, but ARC2 didn’t like it at all until I worked out that it has to be used in angle brackets, e.g. ?title <bif:contains> "Mexico".
  • Use regular expressions (e.g. FILTER regex (?title, "The History of Mexico", "i")) for keyword searching. This is extremely powerful and you can easily construct searches, but it is so slow that queries routinely time out, rendering it effectively useless on its own.

The In Our Time script uses a combination of the last two techniques to get a result. First, it finds a significant word in the title: ideally the first word of at least four letters after the first word (so as to avoid “The”, “A”, “That”, and so on) or, failing that, the first word. The SPARQL query then uses bif:contains to search for that word, and a regular expression filter on the whole title is applied on top. I don’t know if this is how SPARQL endpoints generally work, but the BNB appears to apply the regular expression only to the records already filtered by bif:contains. In any case, it works.

In addition, the script also uses a regular expression to search by the author’s surname. It doesn’t search by date, as the date of publication on the BNB (dc:issued) is not in a standard format (e.g. “1994-01-01 00:00:00”, “2005 printing”, “c2006”) and is not keyword searchable. You can see all the author-title hits, with links to BNB records, by viewing Hits.
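
Putting that together, the generated query looks roughly like the sketch below, following the placeholder convention of the SPARQL recipes above: keyword, title and surname stand for the values extracted for a particular book, and this is the shape of the query rather than a copy of the one the script builds:

PREFIX dc: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?book ?title ?name

WHERE {

?book dc:title ?title .
?title <bif:contains> "keyword" .
FILTER regex(?title, "title", "i") .
{?book dc:creator ?author} UNION {?book dc:contributor ?author} .
?author foaf:name ?name .
FILTER regex(?name, "surname", "i") .

}
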
7. Filter Hits by date of publication

You can, however, retrieve the date from the BNB and process it afterwards, which is what the script does. It finds the four-digit year and compares it to the four-digit year it found on the In Our Time site. You can see all the author-title-date hits, with links to BNB records, by viewing Date Hits. Perhaps rather arbitrarily, the first book in the resulting array is selected as the result.
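
A sketch of that comparison (not the script’s exact code):

<?php
// Sketch: pull a four-digit year out of whatever dc:issued contains and
// compare it with the year found on the In Our Time page
function year_matches($bnb_issued, $iot_year) {
    if (preg_match('/(?<!\d)(1[5-9]\d{2}|20\d{2})(?!\d)/', $bnb_issued, $m)) {
        return $m[1] == $iot_year;
    }
    return false;
}

var_dump(year_matches('1994-01-01 00:00:00', '1994'));   // true
var_dump(year_matches('c2006', '2005'));                 // false
?>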

8. Obtain and display metadata from BNB

Once the search has taken place, the matching title, author (only one), and date are obtained from the BNB. The title and author are displayed, as are the stripped-down year of publication and a link to the full BNB record. For records that returned no hits on the BNB, the Basic Data is simply regurgitated.
The script also retrieves the full combined RDF for all the hits, which is displayed at the bottom of the page and is viewable in several formats.

Further work

I think a lot more work could be done on this given time, both to improve it and to extend it. In no particular order:
  • Make it more pretty. It is currently designed to look merely acceptable while I concentrated on functionality. I have also tried to show much of my working, which a finished version would obviously hide.
  • Refinement of the detection of citation style. This is probably the most critical improvement, and ultimately decides if this approach would be useful outside of In Our Time on other reading lists. There are more patterns that need to be added, especially for older pages.
  • Further preparation of data for searching. Currently, for example, a book on the Mexico reading list doesn’t return any hits because of the exclamation mark in “Zapata!”. This could be stripped, and there are lots of similar refinements no doubt.
  • More interesting/useful output. The script’s outputs are currently quite raw or basic as I concentrated on the mechanisms for pulling information from free text for automatic catalogue searching. It might be useful to output proper standalone RDF files, references in standard reference formats (e.g. Harvard) in HTML or text files, files in standard reference management formats, perhaps even MARC, and so on. Some of these would perhaps be fairly straightforward.
  • Links to catalogues or online bookshops so you could borrow or buy the books from the reading list based on ISBNs taken from the BNB record.
  • Searching more catalogues. If a search fails on the BNB, the script could search other open catalogues, e.g. the Cambridge catalogue.
  • Greasemonkey script or plugin so that a button appears next to the Further Reading section when you view an In Our Time page. This could even appear next to individual books. Ideally (pie in the sky) such a plugin would have a stab at finding books on any web page.
  • Other ways of firing the script not requiring manual addition to the URL or use of a bookmarklet, e.g. a searchbox of some kind (either accepting a URL as input or keywords for the titles of broadcasts).

Please do leave comments or questions.

ISKO-UK linked data day

On 14 September I went to the ISKO-UK one-day conference on Linked Data: the Future of Knowledge Organisation on the Web. For me, this followed on from a previous Talis session on Linked Data and Libraries at the British Library in June, which I found really very interesting and informative.

The ISKO conference was a lot broader in scope (several speakers discussing the BBC’s use of linked data noted that there were 22 attendees from the BBC) and included talks about local and national government, business, and libraries, as well as the Ordnance Survey. The following is a brief and personal overview, pausing in more detail over the aspects that interested me most. It assumes a passing acquaintance with linked data and basic RDF.

Professor Nigel Shadbolt from the University of Southampton, a colleague of Tim Berners-Lee both at Southampton and in advising the British Government on developing the data.gov.uk site, opened with a talk about Government Linked Data: A Tipping Point for the Semantic Web. There were two interesting points from this (there were many, but you know what I mean). The first was the speed and scale of the effects of openly releasing government data. Professor Shadbolt used the analogy of John Snow’s mapping of the 1854 cholera epidemic, which identified a pump as the source and led to the realisation that water carried cholera. He mentioned the release of government bike accident data that was little used by the government but was taken up by coders within days to produce maps of accident hotspots in London and guides to avoiding them.
The second point was the notion of the “tipping point” for the semantic web and linked data referred to in the talk’s title. Several speakers and audience members referred to the similar idea of the “killer implementation”, a killer app for the semantic web that would push it into the mainstream. The sheer quantity of data and the uses it is quickly put to, often beyond the imagination of those who created and initially stored it, was quite compelling. Richard Wallis made a similar point when comparing the relative position of the semantic web with that of the old-fashioned web in the 1990s. He noted that it is now becoming popular to the extent that it is nearly impossible to realistically list semantic web sites, and he predicts that it will explode in the next year or so.

Common to both Nigel Shadbolt’s and Richard Wallis’s talks was a feeling almost of evangelism: Richard Wallis explicitly refers to himself as a technology evangelist; Nigel Shadbolt referred to open government data as “a gift”. Despite being relatively long in the tooth, RDF, linked data, and all that have not yet taken off, and both seemed keen to push them: when people see the benefits, they won’t fail to take off. There were interesting dissenting voices to this. Martin Hepp, who had spent over eight years coming up with the commercial GoodRelations ontology, was strongly of the opinion that it is not enough merely to convince people of the social or governmental benefits; rather, the linked data community should demonstrate that it can directly help commerce and make money. The fact that GoodRelations apparently accounts for 16% of all RDF triples in existence and is being used by corporations such as BestBuy and O’Reilly (IT publishers) seems to point to a different potential tipping point. Interestingly, Andreas Blumauer in a later talk said that SKOS (an RDF schema to be discussed in the next paragraph) could introduce Web 2.0 mechanisms to the “web of data”. Perhaps, then, SKOS is the killer app for linked data (rather than government data or commercial data as suggested elsewhere), although Andreas Blumauer also agreed with Martin Hepp in saying that “If enterprises are not involved, there is no future for linked data”.

In my own ignorant judgement, I would suggest government data is probably the more likely tipping point for linked data, closely followed by Martin Hepp’s commercial data. It is government data that is making people aware of linked data, and especially open data, in the first place, and this is more likely to recruit and enthuse. I think the commercial data will be the one that provides the jobs: it may change the web more profoundly, but in ways fewer people will even notice. I suppose it all depends on how you define tipping points or killer apps, which I don’t intend to think about for much longer.

The second talk, and the start of a common theme, was about SKOS and linked data, by Antoine Isaac. This was probably the most relevant talk for librarians and was for me a simple introduction to SKOS, which seems to be an increasingly common acronym. SKOS stands for Simple Knowledge Organisation System and is designed for representing (simply) things like thesauruses* and classification schemes, based around concepts. These concepts have defined properties such as preferred name (“skos:prefLabel”), non-preferred term (“skos:altLabel”), narrower term (“skos:narrower”), broader term (“skos:broader”), and related term (“skos:related”). The example I’ve been aware of for some time is the representation of Library of Congress Subject Headings (LCSH) in SKOS, where all the SKOS ideas I’ve just mentioned will be recognisable to a lot of librarians. In the LCSH red books, for example, preferred terms are in bold, non-preferred terms are not in bold and are preceded by UF, and the relationships between concepts are represented by the abbreviations NT, BT, and RT. In SKOS, concepts and labels are more clearly distinct. An example of SKOS using abbreviated linked data might be (stolen and adapted from the W3C SKOS primer):

ex:animals rdf:type skos:Concept;
skos:prefLabel "animals";
skos:altLabel "creatures";
skos:narrower ex:mammals.

This means that ex:animals is a SKOS concept; that the preferred term for ex:animals is “animals”; a non-preferred term is “creatures”; and, that a narrower concept is ex:mammals. In a mock LCSH setting this might look something like this:

Animals
UF Creatures
NT Mammals

In the LCSH example, however, the distinction between concepts and terms is lost. One aspect of SKOS that Antoine Isaac spent some time on is the idea of equivalent concepts, especially across languages. In RDF you can bind terms to languages using an @ sign, something like this:

ex:animals rdf:type skos:Concept;
skos:prefLabel "animals"@en;
skos:prefLabel "animaux"@fr.

However, you can also link concepts more directly using skos:exactMatch, skos:closeMatch, skos:broadMatch, skos:narrowMatch, and skos:relatedMatch to link thesauruses and schemes together. These are admittedly a bit nebulous. He mentioned work that had been done on linking LCSH to the French Rameau and from there to the German subject thesaurus SWD. For example:

Go to http://lcsubjects.org/subjects/sh85005249, which is the LCSH linked data page for “Animals”. (You can view the raw SKOS RDF using the links at the top right, although sadly not in the n3 or Turtle format which I have used above.) At the bottom of the page there are links to “similar concepts” in other vocabularies, in this case Rameau.
Go to the first one, http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb119328694, and you see the Rameau linked data page for “Animaux”.

In the LCSH RDF you can pick out the following RDF/XML triples:

<rdf:Description rdf:about="http://lcsubjects.org/subjects/sh85005249#concept">
<rdf:type rdf:resource="http://www.w3.org/2004/02/skos/core#Concept"/>
<skos:prefLabel>Animals</skos:prefLabel>
<skos:altLabel xml:lang="en">Beasts</skos:altLabel>
<skos:narrower rdf:resource="http://lcsubjects.org/subjects/sh95005559#concept"/>
<skos:closeMatch rdf:resource="http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb119328694"/>

which is basically saying the same as (clipping the URIs for the sake of clarity):

lcsh:sh85005249#concept rdf:type skos:Concept;
skos:prefLabel "Animals"@en;
skos:altLabel "Beasts"@en;
skos:narrower lcsh:sh95005559#concept;
skos:closeMatch rameau:cb119328694.

Not too far from the first example I gave, with the addition of a mapping to a totally different scheme. Or, in mock red book format again but with the unrepresentable information missing:

Animals
UF Beasts
NT Food animals

Oh that some mapping like this were available to link LCSH and MeSH…!

Several other talks touched on SKOS, such is its impact on knowledge management. Andreas Blumauer talked about it in demonstrating a service provided by punkt. netServices, called PoolParty.** I don’t want to go into depth about it, but it seemed to offer a very quick and easy way to manage a thesaurus of terms without having to deal directly with SKOS or RDF. During the talk, Andreas Blumauer briefly showed us an ontology based around breweries, then asked for suggestions for local breweries. In response, he added information for Fullers and published it right away. To see linked data actually being created and published (if not hand-crafted) was certainly unusual and refreshing. Most of what I’ve read and seen has talked about converting large amounts of data from other sources, such as MARC records, EAD records, Excel files, Access databases, or Wikipedia. I’ve had a go at hand-coding RDF myself, which I intend to write about if/when I ever get this post finished.

I don’t want to go into too much detail about it***, but another SKOS-related talk was the final one from Bernard Vatant, who drew on his experience of a multi-national situation in Europe to promote the need for systems such as SKOS to deal more rigorously with terms, as opposed to concepts. Although SKOS would appear to be about terms, in many ways it is not clear on matters of context. For instance, using skos:altLabel “Beasts” for the concept of Animals, as in the examples given above, gives no real idea of what the context of the term is. Here is a theoretical made-up example of some potential altLabels for the concept of Animals which I think makes some of the right points:

Animal (a singular)
Beasts (synonym)
Animaux (French term)
Animalia (scientific taxonomic term)

These could all be UF terms or altLabels, but using UF or altLabel gives no idea about the relationship between the terms, or why one term is a non-preferred term. He gave another instance of where this might be important in a multinational and multilingual context, where the rather blunt instrument of adding @en or @fr is not enough: when a term differs between the Belgian, French, and Canadian varieties of French. This has obvious parallels in English, where we often bemoan the use of American terms in LCSH. Whether embedded in LCSH or as a separate list, it might be possible to better tailor the catalogue for local conditions if non-preferred terms were given some context. Perhaps “Cellular telephones” could be chosen by a computer to be displayed in a US context, but “mobile telephones” in a UK context, if the context of those terms were known and specified in the thesaurus.
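
For what it’s worth, RDF language tags can already carry a region as well as a language, which gets part of the way there for the telephones example, although it still says nothing about why each label is preferred where it is. A made-up sketch in the same abbreviated style as above (not something Bernard Vatant proposed):

ex:mobilePhones rdf:type skos:Concept;
skos:prefLabel "Mobile telephones"@en-GB;
skos:prefLabel "Cellular telephones"@en-US;
skos:altLabel "Mobiles"@en-GB.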

Moving away from SKOS, Andy Powell talked about the Dublin Core Metadata Initiative (DCMI). I’ll admit I’ve always been slightly confused as to what the precise purpose of Dublin Core (DC) is and how one should go about using it. Andy Powell’s talk explained a lot of this confusion by detailing how much DC had changed and reshaped itself over the years. To be honest, in many ways I found it surprising that it is still active and relevant given the summary I heard. The most interesting part of his talk for me was his description of the mistakes or limitations of the DCMI caused by its library heritage. Another confession: my notes here are awful, but the most important point that stuck out for me was the library use of a record-centric approach, e.g.:

  • each book needs a self-contained record
  • this record has all the details of the author, title, etc.
  • this record is used to ship the record from A to B (e.g. from bibliographic utility to library catalogue),
  • this record also tracks the provenance of the data within the record, such as within the 040 field: it all moves together as one unit.

Contrast this with the semantic web approach, where data is carried in triples. A ‘record’, such as an RDF file, might only contain a sameAs triple which relates data about a thing to a data store elsewhere; many triples from multiple sources could be merged together and the information about a thing enriched or added to. This kind of merging is not particularly easy with, or encouraged by, MARC records (although the RLUK database does something similar, quite tortuously, when it deduplicates records). There’s a useful summary of all this at all things cataloged, which opens thus:

Despite recent efforts by libraries to release metadata as linked data, library records are still perceived as monolithic entities by many librarians. In order to open library data up to the web and to other communities, though, records should be seen as collections of chunks of data that can be separated / parsed out and modeled. Granted, the way we catalog at the moment makes us hold on to the idea of a “record” because this is how current systems represent metadata to us both on the back- and front-end. However with a bit of abstraction we can see that a library record is essentially nothing but a set of pieces of data.

One problem with the linked data approach, though, is the issue of provenance, which was referred to above as one of the roles the MARC record undertakes (ask OCLC, e.g. http://community.oclc.org/metalogue/archives/2008/11/notes-on-oclcs-updated-record.html). If you take a triple out of its original context or host, how can you tell who created the triple? Is it important? Richard Wallis always makes the point that triples are merely statements: like other web content, they are not necessarily true at all. Some uneasiness about the trustworthiness or quality of data turned up at various points during the day. I think it is an interesting issue, not that I know what the answer is, especially when current cataloguing practices largely rely on double-checking work that has already been done by other institutions because that work cannot really be trusted. There are other issues and possible solutions that are a little outside my comfort zone at the moment, including excellent buzzwords like named graphs and bounded graphs.

Andy Powell also mentioned, among other things:

  • the “broad semantics” or “fuzzy buckets” of DC which derive in large part from the library catalogue card, where, for instance, “title” or “creator” can mean all sorts of imprecise things;
  • flat world modelling, where two records are needed to describe, say, a painting and a digital image of the painting. This sounds to me like the kind of thing RDA/FRBR is attempting (awkwardly in my view) to deal with;
  • the use of strings instead of things, such as describing an author as “Shakespeare, William” rather than <http://www.example/authors/data/williamshakespeare>. This mirrors one of the bizarre features of library catalogues, where authority matching is generally done by matching exact strings of characters rather than identifiers. See Karen Coyle for an overview of the headaches involved.

There were three other talks which I don’t propose to go into in much detail. I’ve touched on Richard Wallis’s excellent (and enthusiastic) introduction to the whole idea of linked data and RDF, a version of which I found dangerously intriguing at a previous event given by Talis. He talked about, among other things, the BBC’s use of linked data to power its wildlife pages (including drawing content from Wikipedia) and its World Cup 2010 site; in fact, how linked data is making the BBC look at the whole way it thinks about data.

His other big message for me was to compare the take-up of the web in its early days with the current take-up of linked data in order to suggest that we are on the cusp of something big: see above for my discussion of the tipping point.

* I don’t like self-conscious classical plurals where I can help it, not that there’s anything wrong with them as such.
** I can’t help but find this name a little odd, if not actually quite camp. I expect there’s some pun or reference I’m not getting. Anyway. Incidentally, finding information about PoolParty from the punkt. website is not easy, which I find hard to understand given that it is a company wanting to promote its products; and, more specifically, it is a knowledge management (and therefore also information retrieval) company.
*** Partly because I don’t think I could do it justice, partly also because it was the most intellectual talk and took place at the end of the day.

New submission for the Urban Dictionary?

I am considering submitting the following entry to the Urban Dictionary:

1. Had their authority record updated by the Library of Congress.

Euphemism. Died.

Hey, where’s Michael?
Dude! Didn’t you hear? He’s had his authority record updated by the Library of Congress.

As the linked article explains: “Remember the REVISED LCRI 22.17 contains a new option for cataloguers to add death dates to personal name headings with open dates.”

On depressing contents notes

Perhaps the most depressing contents note I’ve come across for a while:

” … disc 5. Loss of a parent in adult life, loss of a partner or spouse and depression & helplessness (57 min.) — disc 6. Anger, aggression & violent deaths and disasters (69 min.) …”

If you must ask, this is from Colin Murray Parkes’s Bereavement, loss & change, 7 DVDs (484 minutes) of grief and depression, or at least of how to cope with it, although I confess I haven’t actually watched it, so it might in fact be littered with cartoons, quips, good humour, and general gaiety.

More Cataloger’s Desktop comments

The Library of Congress’s Cataloging Distribution Service is doing a survey on the development of its Cataloger’s Desktop, which they are planning to overhaul. They seem keen to rework it for the web rather than replicating the CD product it is based on. I hope they think profoundly about this to make sure it is properly a web-based resource or, as I would prefer, a loose collection of separately accessible resources. Below are the comments I put in answer to one of the earlier questions on general satisfaction:

The content is second to none, but the presentation of the content is appalling:

  1. It is extremely unwieldy: there is no reason to shoehorn everything into one package and one great list. E.g. AACR2 would be better presented as a separate product as it is complex enough as it is. Rather than having shaky preferences, I would like to see separate sites for which I can produce my own list of links, as I do anyway for other sites.
  2. Despite being presented on the web, the site tries its hardest to discard the advantages of the web by imposing its own interface. This is bad practice as it means another interface to learn and is not intuitive (e.g. I cannot use the Back button to go back, or link to a section of a resource). Standard HTML pages are more than up to the job. I don’t think a system like this is very successful if you have to provide training in how to use it: it would be like inventing a different kind of book where you have to train readers in how to turn the pages.
  3. There is no need to have a system which has to find its way round popup-blockers: this just shouldn’t be an issue.

These factors prevent me from using Cataloger’s Desktop nearly as often as I should. I mostly want it for quick look-up of AACR2 and other standards. Instead I often find myself referring to an out-of-date paper copy for simple rules and abbreviations. I was hoping to have weaned myself off it by now.

My previous comments on a similar survey in 2005 are here.

No evidence on bibliographic issues

Lorcan Dempsey makes a much overdue point:

In all the discussion about bibliographic data and catalogs, and about their advantages or disadvantages when compared to other approaches, it is striking how little appeal there is to actual evidence.

I’ve noticed this on email discussion lists, where appeals are made to personal experience (of the librarian/cataloguer) and to how a user should use a catalogue, but rarely is this backed up by research into how library users actually want to use catalogues to find material and how they could use them most intuitively and effectively. I think this has profound implications for cataloguing rules and OPAC design.

I expect the framers of RDA are using a wealth of such research data diligently compiled by the researchers at our library schools to compile the rules. With this much academic research behind us, Amazoogle doesn’t stand a chance!