RLUK/European Library linked data sample

RLUK and the European Library (of which RLUK is now a member) have just released 17 million records as linked open data. They have released three sets (via Mike Mertens), for which links to the RDF Turtle versions are below:

I’ve tried to have a quick look at the last of these just to get an idea, and I’ve isolated what I think is all the data for one book, chosen at random. The whole block of Turtle prefixes from the start of the file is included:


@prefix rdaa: <http://rdaregistry.info/Elements/a/> .
@prefix rdac: <http://rdaregistry.info/Elements/c/> .
@prefix rdae: <http://rdaregistry.info/Elements/e/> .
@prefix rdam: <http://rdaregistry.info/Elements/m/> .
@prefix rdaw: <http://rdaregistry.info/Elements/w/> .
@prefix rdau: <http://rdaregistry.info/Elements/u/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix edm: <http://www.europeana.eu/schemas/edm/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix frbrer: <http://iflastandards.info/ns/fr/frbr/frbrer/> .
@prefix ore: <http://www.openarchives.org/ore/terms/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix wgs84pos: <http://www.w3.org/2003/01/geo/wgs84_pos#> .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> a dcterms:BibliographicResource ;
      rdam:P30004 "local identifier: http://data.copac.ac.uk/iid/65204626" ;
      rdau:P60049 <http://rdvocab.info/termList/RDAContentType/1020> ;
      rdam:P30003 "single unit"^^<http://rdvocab.info/termList/ModeIssue> ;
      rdau:P60520 "Unkown"@en ;
      rdam:P30004 "isbn: 0198750315" ;
      rdam:P30156 "The philosophy of history" ;
      rdau:P60339 "edited by Patrick Gardiner." ;
      rdam:P30157 "Oxford readings in philosophy" ;
      rdau:P60398 _:node18kdvnimbx4386 .

_:node18kdvnimbx4386 a rdac:C10004 ;
      rdaa:P50111 "Patrick L. Gardiner" ;
      rdaa:P50121 "1922" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> rdau:P60073 "1974" ;
      rdau:P60099 <http://id.loc.gov/vocabulary/iso639-2/eng> ;
      rdau:P60163 _:node18kdvnimbx4387 .

_:node18kdvnimbx4387 rdau:P60366 "Oxford University Press" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> rdau:P60444 _:node18kdvnimbx4388 .

_:node18kdvnimbx4388 a rdac:C10005 ;
      rdaa:P50032 "London" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> rdau:P60163 <http://id.loc.gov/vocabulary/countries/uk> ;
      dcterms:subject _:node18kdvnimbx4389 .

_:node18kdvnimbx4389 a frbrer:C1007 ;
      rdfs:label "History, Philosophy." ;
      dcterms:hasPart _:node18kdvnimbx4390 .

_:node18kdvnimbx4390 a frbrer:C1007 ;
      rdfs:label "History" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> dcterms:extent "224 p. ;" , "21 cm." ;
      rdau:P60470 "Includes index." ;
      dcterms:description "Bibliography: p. [218]-222." .

Some initial observations:

A short snippet from another book showing a blank node asserted as being the same as a VIAF entity, having a relationship with a work using RDA, and the detailed RDA data elements for the name:


_:node18kdvnimbx245 owl:sameAs <http://viaf.org/viaf/17463572/> .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000087185802> rdau:P60398 _:node18kdvnimbx245 .

_:node18kdvnimbx245 a rdac:C10004 ;
      rdaa:P50111 "Niccolo Pagliarini" ;
      rdaa:P50121 "1717" ;
      rdaa:P50120 "1795" .

One record in lots of data formats

For a Dev8d session I did with Owen Stephens in February I presented data for a single book and followed how it had changed as standards changed, trying above all to explain to non-cataloguers why catalogue records look and work the way they do. At least one person found it useful. I am now drafting an internal session at work on the future of cataloguing and am planning to take a similar approach to briefly explain how we got to AACR2 and MARC21, and where we are heading. I took the example I used at Dev8d and hand-crafted some RDA examples, obtained a raw .mrc MARC21 file, and used the RDF from Worldcat to come up with a linked data example.

I have tried to avoid notes on the examples themselves. However, do note the following: the examples generally use only the same simple set of data elements, basically the bits you might find on a basic catalogue card: no subjects, few notes, etc.; the book is quite old so there is no ISBN anyway. The original index card is from our digitised card catalogue. The linked data example was compiled by copying the RDFa from the Worldcat page for the book; this was then put into this RDFa viewer (suggested by Manu Sporny) to extract the raw RDF/Turtle; I manually hacked this further to replace full URIs with prefixes as much as possible in an attempt to make it more readable (I suspect this is where some errors may have crept in). The example itself is of course a conversion from an AACR2/MARC21 record. C.M. Berners-Lee is Tim’s dad.

Feel free to use this and to point out mistakes. I would particularly welcome anyone spotting anything amiss in the RDA and linked data, where I am sure I have mangled the punctuation in both.

Harvard Citation

Berners-Lee, C.M. (ed.) 1965, Models For Decision: a Conference under the Auspices of the United Kingdom Automation Council Organised by the British Computer Society and the Operational Research Society, English Universities Press, London.

Pre-AACR2 on Index Card

BERNERS-LEE, C.M., [ed.].

Models for decision; a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society.

London, 1965.

x, 149p. illus. 22cm.

AACR2 on Index Card

Models for decision : a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society / edited by C.M. Berners-Lee. -- London : English Universities Press, 1965.

x, 149 p. : ill. ; 23 cm.

Includes bibliographical references.

-       Berners-Lee, C. M.

AACR2 in MARC21 (raw .mrc)

00788nam a2200181 a 4500001002700000005001700027008004100044024001500085245021000100260004900310300003200359504004100391650003300432700002300465710003900488710003000527710004900557_UCL01000000000000000477125_20061112120300.0_850710s1965    enka     b    000 0 eng  _8 _ax280050495_00_aModels for decision :_ba conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society /_cedited by C.M. Berners-Lee._  _aLondon :_bEnglish Universities Press,_c1965._  _ax, 149 p. :_bill. ;_c23 cm._  _aIncludes bibliographical references._ 0_aDecision making_vCongresses._1 _aBerners-Lee, C. M._2 _aUnited Kingdom Automation Council._2 _aBritish Computer Society._2 _aOperational Research Society (Great Britain)__

AACR2 in MARC21

245 00 $a Models for decision :
$b a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society /
$c edited by C.M. Berners-Lee.
260 __ $a London :
$b English Universities Press,
$c 1965.
300 __ $a x, 149 p. :
$b ill. ;
$c 23 cm.
504 __ $a Includes bibliographical references.
700 1_ $a Berners-Lee, C. M.

RDA

Title proper: Models for decision
Other title information: a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society
Statement of responsibility relating to title proper: edited by C.M. Berners-Lee
Place of publication: London
Publisher’s name: The English Universities Press Limited
Date of publication: 1965
Copyright date: ©1965
Media type: unmediated
Carrier type: volume
Extent: x, 149 pages
Dimensions: 23 cm
Content type: text
Illustrative content: Illustrations
Supplementary content: Includes bibliographical references.
Contributor: Berners-Lee, C. M.
Relationship designator: editor of compilation

RDA in MARC21

245 00 $a Models for decision :
$b a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society /
$c edited by C.M. Berners-Lee.
264 _1 $a London :
$b The English Universities Press Limited,
$c 1965.
264 _4 $c ©1965
300 __ $a x, 149 pages :
$b illustrations ;
$c 23 cm.
336 __ $a text
$2 rdacontent
337 __ $a unmediated
$2 rdamedia
338 __ $a volume
$2 rdacarrier
504 __ $a Includes bibliographical references.
700 1_ $a Berners-Lee, C. M.,
$e editor of compilation.

Linked data


@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix schema: <http://schema.org/> .
@prefix worldcat: <http://www.worldcat.org/oclc/> .
@prefix library: <http://purl.org/library/> .
@prefix viaf: <http://viaf.org/viaf/> .
@prefix lc_authorities: <http://id.loc.gov/authorities/names/> .
@prefix madsrdf: <http://www.loc.gov/mads/rdf/v1#> .

worldcat:221944758
  rdf:type schema:Book;
  library:oclcnum "221944758";
  schema:name "Models for decision : a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society";
  library:placeOfPublication _:1;
  schema:publisher _:4;
  schema:datePublished "[1965]";
  schema:numberOfPages "149";
  schema:contributor viaf:149407214;
  schema:contributor viaf:130073090;
  schema:contributor viaf:137135158;
  schema:contributor viaf:36887201 .
_:1
  rdf:type schema:Place;
  schema:name "London :" .
_:4
  rdf:type schema:Organization;
  schema:name "English Universities Press" .
viaf:149407214
  rdf:type schema:Organization;
  madsrdf:isIdentifiedByAuthority lc_authorities:n79056431;
  schema:name "British Computer Society." .
viaf:130073090
  rdf:type schema:Organization;
  madsrdf:isIdentifiedByAuthority lc_authorities:n85076053;
  schema:name "Operational Research Society." .
viaf:137135158
  rdf:type schema:Organization;
  madsrdf:isIdentifiedByAuthority lc_authorities:n79063901;
  schema:name "Institution of Electrical Engineers." .
viaf:36887201
  rdf:type schema:Person;
  schema:name "Berners-Lee, C. M." .

Lodopac example searches

Yay, my entry for the Discovery & DevCSI Developers Competition, Lodopac, was awarded a commendation for its use of the Cambridge University Library (CUL) dataset. During the judging I was asked for searches which were known to work well, the timeout issues I discussed under Limitations being not insignificant, especially with author or title searches. I submitted a version of the following brief general notes, which I hope are helpful to anyone else who wants to play:

The British National Bibliography (BNB) server is generally more responsive than the Cambridge University Library one; title seems to work better than author. The following are hopefully useful examples:

I would really like to try and think of ways of improving free text regular expression search times for things like author and title in Sparql* although I doubt there is one that doesn’t rely on the configuration, processing power, or indexing of the server being searched.

* thinking aloud, some ideas might include: downloading a larger imprecise set for further local searching (e.g. for an author/title search, downloading the title matches and searching the authors locally: although this would also be slow, it would get round the timeout at least); forcing a look-up in a controlled vocab first in order to get an exact string match (especially for authors, although even if this is possible, it forces a user to do more work, which isn’t the point); local indexing of the triple store (this is probably the best way, but I’m not sure how to go about it, whether I really have the server capabilities to do it, or whether I can commit to the updating required).
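To make the first of those ideas a little more concrete, here is a sketch (my own illustration, not something Lodopac currently does) of the kind of imprecise-set query I have in mind, using the BNB patterns set out in the Sparql recipes post below: run the slow regular expression once, on the title only, and bring back the author names alongside.

PREFIX dc: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?book ?title ?name
WHERE {
  ?book dc:title ?title .
  {?book dc:creator ?author} UNION {?book dc:contributor ?author} .
  ?author foaf:name ?name .
  FILTER regex(?title, "decision", "i") .
}

The author match (on “Berners-Lee”, say) would then be applied to the returned ?name values locally, rather than as a second regular expression on the endpoint.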

Sparql recipes for bibliographic data

One of the difficulties in searching RDF data is knowing what the data looks like. For instance, finding a book by its title means knowing something about how a dataset has recorded the relationship between a book and its title. There is no real standard for publishing MARC/AACR2-style bibliographic data as RDF: it seems libraries publishing RDF are approaching this largely individually, although they are using many of the same vocabularies, dc, bibo, etc. This was one reason why I wanted to create Lodopac: to present some kind of interface so that searchers didn’t need to know these different models but could start to explore them. Below are the Sparql recipes for the different search criteria I used for the BNB and the Cambridge University Library datasets, so they can be compared, re-used, or corrected. All examples use prefixes, which are defined anew in each example. The examples are of course fragments and don’t have all the necessary SELECT and WHERE clauses.

By the way, for an excellent Sparql tutorial with ample opportunity to play as you go along, do have a look at the Cambridge University Library’s SPARQL tutorial. It also gives clues to the way their data is structured. Of use for the BNB is their data model (PDF), which is not nearly as scary as it looks at first, and incredibly helpful.

Author keyword search

This would be relatively straightforward (the unavoidable regular expression being the main complication) but for the fact that the traditional author/editor/etc. of bibliographic records can be found in dc:creator as well as dc:contributor, which necessitates a UNION. The BNB uses foaf:name:

PREFIX dc: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

{?book dc:creator ?author} UNION {?book dc:contributor ?author} .
?author foaf:name ?name .
FILTER regex(?name, "author", "i") .
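For anyone who wants to run this outside Lodopac, here is roughly how that fragment might sit inside a complete query against the BNB endpoint (a sketch only: the SELECT clause, the dc:title triple, the example surname, and the LIMIT are my own additions, not part of Lodopac itself):

PREFIX dc: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT DISTINCT ?book ?title ?name
WHERE {
  {?book dc:creator ?author} UNION {?book dc:contributor ?author} .
  ?author foaf:name ?name .
  ?book dc:title ?title .
  FILTER regex(?name, "dickens", "i") .
}
LIMIT 10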

Cambridge uses much the same recipe except that it uses rdfs:label instead of foaf:name:

PREFIX dc: <http://purl.org/dc/terms/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

{?book dc:creator ?author} UNION {?book dc:contributor ?author} .
?author rdfs:label ?name .
FILTER regex(?name, "author", "i") .

Title keyword searches

This is more straightforward and is in fact the same for both the BNB and Cambridge University Library:

PREFIX dc: <http://purl.org/dc/terms/>

?book dc:title ?title .
FILTER regex(?title, "title", "i") .

Date of publication (year)

I imagined this one being simple, and for Cambridge University Library it is. However, the BNB took some unravelling, as they have modelled publication as an event related to a book. The various elements of publication are then related to the event. So, for the BNB we have this:

PREFIX bibliographic: <http://data.bl.uk/schema/bibliographic#>
PREFIX event: <http://purl.org/NET/c4dm/event.owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

?book bibliographic:publication ?pub .
?pub event:time ?year .
?year rdfs:label "date" .

By contrast, Cambridge University Library has it in one line:

PREFIX dc: <http://purl.org/dc/terms/>

?book dc:created "date" .

ISBN

As an identifier, ISBN is relatively straightforward in both models, although care must be taken with the BNB as 10 and 13 digit ISBNs are treated as separate properties and the following assumes that the search will cover both:

PREFIX bibo: <http://purl.org/ontology/bibo/>

{?book bibo:isbn10 "isbn"} UNION {?book bibo:isbn13 "isbn"} .

For Cambridge University Library, also using the bibo ontology, this is:

PREFIX bibo: <http://purl.org/ontology/bibo/>

?book bibo:isbn "isbn" .

Conclusion

I didn’t set out to provide ground-breaking conclusions. However, it is remarkable how different data models can be formulated for modelling the same type of data by similar organisations. The real question is whether this is a good thing, a bad thing, or something that doesn’t really matter. Will it need to be standardised? My understanding of how this works is that it probably won’t. I think the days of monolithic library standards are probably now gone. I wonder, for instance, if there ever will be a single MARC22 (or whatever you like to call it) and doubt RDA will ever completely replace AACR2 in the way we imagine. What will emerge, I suspect, will be a soup of various standards and data models, some of which will be more prevalent. One thing I picked up from various linked data talks is that information has frequently been published then re-used in ways that the issuers never imagined; if that is the case, the precise modelling and format is probably not as important as the fact that it is of good quality and intelligently put together. The BNB and Cambridge University Library models are clearly quite different but quite capable of being mapped and used despite this.

If there are any other bibliographic data Sparql endpoints, I would like to include them in a future version of the Lodopac search. Do let me know if you come across them.

More mundanely, do say if there are errors in my Sparql recipes or if there are ways they could be done more efficiently.

Lodopac : simple Linked Open Data OPAC

Lodopac is my entry to the UK Discovery Developer Competition. Aside from obvious mocking of the name, comments on Lodopac are very welcome. If anyone installs it locally, I’d also be very interested to know.

About Lodopac

Lodopac is a simple linked open data opac using Sparql to search remote bibliographic RDF data. By default it is set up to search the BNB and Cambridge University Library datasets, but is designed to allow easy setup of additional datasets with Sparql endpoints (see Installation, source code, and configuration below). It was written in response to the UK Discovery Developer Competition.

The purpose of Lodopac is to provide a simple standard OPAC-style interface to perform searches of various bibliographic RDF datasets without having to know how to formulate a Sparql query and without having to know the structure of the database. I hope it is especially useful for people wanting to get a grip on how bibliographic RDF is put together, what it looks like, and what a Sparql query looks like. For example, an author search is possible without knowing about dc:creator and dc:contributor, or how these need to be linked together in a Sparql search. Similarly, a searcher wouldn’t need to worry about how to construct date searches in different datasets. For the BNB and CUL, these are very different (three lines in the BNB, one for CUL), but in Lodopac there is only one search box to search both. Lodopac displays the Sparql query it constructed to perform the search, as well as the combined RDF for all results found in XML, JSON, N3, and TTL.

How to search Lodopac

Select one or more of the available datasets using the checkboxes.

Author and Title searches are free text phrase searches. In other words, a string you search for will match with any exact match, including spacing and punctuation, and in the middle of words. E.g. searching for “shake” will match “Shakespeare”, “milkshakes”, and “More hits that you can shake a stick at”. Searching is case insensitive. The following punctuation is removed from searches: \"'<>$^%.

You are strongly advised to keep author and title searches simple: e.g. one word of a title or a surname only.

ISBN searches 10 or 13 digit ISBNs. Any dashes or other non-digit characters are stripped from the search.

Date search will accept a year.

N.B. Keep searches as simple as possible, especially with author and title searches, to avoid them timing out. ISBN and date searches are generally quicker.

Limitations

A bad workman blames his tools and I’m no exception. The greatest limitation is the time taken by Sparql endpoints to perform a Sparql query, especially one that involves a regular expression, such as the Author and Title searches. What is needed is some more robust indexing or some cheat like Virtuoso’s unorthodox bif:contains, which the old version of the linked data BNB used. I touched on this in a blog post about the In Our Time Booklist script I wrote (see section 6).

The load and current capacity on the Sparql endpoints at the time the query is made is another important factor. A search which times out one minute can work fine the next.

The search options are obviously limited but do, I hope, represent the most common methods of searching normal library catalogues aside from, of course, a general keyword search. The manipulation of results is also rather sparse but allows click-through to the full data associated with a book, the structure and contents of which can be more fully explored. The aggregation of RDF data in various formats is, I hope, useful illustratively as well as having potential for further manipulation.

Installation, source code, and configuration

The source code for Lodopac is available as a zip file, which contains all the necessary PHP, Javascript, and CSS files. In addition, you will need to install ARC2, which makes the Sparql queries and manipulates the resultant RDF. Edit the first line of lodopac.php so that it points at your local installation of ARC2.

The programme is basically one long script (there is only one page) but is split for convenience of editing. The key file is lodopac.php, which includes the other files as it goes along. The main core of the script, which builds the queries and does the searching, is all in lodopac.php.

I have attempted to make the script as easily configurable as possible so that additional Sparql endpoints can be added. There are probably some components hard-coded into the script that I have overlooked, but all the setup for the endpoints is in the file setup_endpoints.php. The first part of this file is a list of prefixes needed for any possible queries from any of the endpoints and, although not ideal, all these prefixes are sent with every Sparql query. Following that and the declaration of an array of the endpoints, each endpoint has a dedicated block with the information added to a hash. To add another endpoint, duplicate a block and configure the search recipes as appropriate. The keys marked “brief_” are used to fetch information for the brief results display. I have conspicuously chickened out of providing an author and the attendant main entry and multiple author headaches involved.

Sketch for Eurovision linked data

I’ve been looking around to see if Eurovision data exists as linked data, or openly in any format. I can find relatively little, mostly just data about one of the recent contests, or about UK performances. None of it is RDF or linked. I suspect that some linked but not open data of this sort may be held by the BBC or the EBU, but can’t see any trace of it.

In some ways I am not disappointed as it means I can have a go at playing with the idea myself, both for fun and as something to learn from. The idea would be to do something similar to what I have done with this Sandy site, preferably also with a Sparql endpoint if I can figure out how to set one up. It would have the twin benefits of being a larger pool of data than I entered in the Sandy site but also reasonably finite: there are only so many contests and so many entrants each year. It would also be of interest to a somewhat wider group of people.

There are two main differences that make creating the new site more tricky: the data needs to be modelled a little more rigorously; and, I am looking at creating my own elements and properties, although I will use existing ones where possible. Below is a sketch of a simple Eurovision Song Contest linked data (escld:) model with the data types I would need to establish. My main aim is to record songs, artists, countries, positions, and total points. Links out can enrich this too. Recording the individual voting scores (who gave douze to whom) is not my immediate concern although I wouldn’t want to rule it out. I’ve therefore had a stab at how it could be done. I think it would need something clever like blank nodes. The purpose of the sketch is to get an idea of structure rather than anything like an exhaustive list of possible relationships to be included.

Any comments or suggestions on this set up would be most welcome!

Entities:
<contest>
<country>
<song>
<artist>
<composer>

Relationships:
<inCity>
<performedBy>
<composedBy>
<representing>
<position>
<score>
<receivedPoints>
<pointsFrom>
<noOfPoints>

Example triples (lazily punctuated and laid out) using namespace escld

<escld:Eurovision Song Contest 2010>
  <dc:date> "2010"
  <escld:inCity> <dbpedia:Oslo>
<escld:Germany>
<escld:Satellite>
  <escld:performedBy> <Lena Meyer-Landrut>
  <escld:composedBy> <Julie Frost>
  <escld:composedBy> <John Gordon>
  <dc:date> "2010"
  <escld:representing> <Germany>
  <dc:title> "Satellite"
  <escld:position> "1"
  <escld:score> "246"
  <escld:receivedPoints> _:rP1
    _:rP1 <escld:pointsFrom> <UnitedKingdom>
    _:rP1 <escld:noOfPoints> "12"
<escld:Lena Meyer-Landrut>
  <foaf:name> "Lena Meyer-Landrut"

<escld:Julie Frost>
  <foaf:name> "Julie Frost"

<escld:John Gordon>
  <foaf:name> "John Gordon"

In Our Time booklist

I have written a script which takes an unstructured reading list on the BBC’s In Our Time website, searches the British National Bibliography (BNB) using bibliographica for the books on the list, and returns structured metadata for the records it found.

This script was written in response to an idea raised by psychemedia for the Open Bibliographic Data Challenge: the BBC “In Our Time” Reading List:

The BBC “In Our Time” radio programme publishes suggested recommending reading in the programme data in an unstructured and citation style way: author, title (publisher, year), with what looks to be conventional character string separators between references (at least on the pages I looked at).

The idea is to extract and link suggested readings for the In Our time programmes to open, structured bibliographic data. This would make the In Our Time archive even more useful as an informal (open-ish) educational resource, especially as academic libraries start to release data relating to books used on courses. (So for example, this approach might help provide a link from a course to a relevant In Our Time broadcast via a common book.)

I was drawn to this idea as I like the idea of turning unstructured data into structured data: I have for example had some previous fun converting HTML pages into RSS feeds (e.g. CILIP Lisjobnet, Big Brother). I think something similar for any reading list (e.g. a Word document produced by a lecturer) would be an interesting idea.

The programme is written in PHP and is designed to be fired from a Javascript bookmarklet from a page on the In Our Time site, or by appending the In Our Time URL to the end of the URL for this page: http://www.aurochs.org/inourtime_booklist/inourtime_booklist_v1.php?. For example, to use it on the page for The Mexican Revolution (which I used a lot in testing), add the URL http://www.bbc.co.uk/programmes/b00xhz8d to produce http://www.aurochs.org/inourtime_booklist/inourtime_booklist_v1.php?http://www.bbc.co.uk/programmes/b00xhz8d.

The script follows the following steps:

  1. Set up ARC2 to enable SPARQL searching and RDF processing
  2. Extract Further Reading section
  3. Separate out Raw Data for each book
  4. Determine pattern used in citation then extract Basic Data, e.g. author, title, article, publication, using regular expressions
  5. Further refine elements to make searching easier, i.e. one surname for author, only title proper for titles
  6. Construct a SPARQL Query using author surname and title regular expressions pre-filtered for speed by a significant word using bif:contains
  7. Filter Hits by date of publication
  8. Obtain and display metadata from BNB
More details of these steps are below:

1. Set up ARC2 to enable SPARQL searching and RDF processing

ARC2 is a simple-to-use system for using RDF and SPARQL in PHP. I had previously played with it here when experimenting with creating my own RDF. The Sandy site uses SPARQL to populate the See Also sections.

2. Extract Further Reading section

A simple regular expression identifies the div in the HTML code that contains the reading list, which enables the next stage of the script to look for individual books.

3. Separate out Raw Data for each book

Another regular expression pulls out the paragraphs containing books and puts them in an array. You can see this by viewing the Raw Data.
4. Determine pattern used in citation then extract Basic Data, e.g. author, title, article, publication using regular expressions

As the In Our Time site does not use a single standard form of citation, the script has to try and determine, using regular expressions, which of several possible patterns a citation is using and extract the correct bits of data. This only works as well as it is possible to identify all the patterns, which effectively means looking at as many In Our Time pages as possible. This is one area that would certainly reward more work. It also points out how difficult it would be to extrapolate this into a script that could read any citations. The In Our Time booklist currently uses a handful of citation patterns, each identified with a number (1, 2, 3, 4, 15, 5). If you look at the Book Data for a particular book you will see the citation style number given. The regular expressions capture author, title, and publication.

The author information in citations on In Our Time is unpredictable. Sometimes surnames are first, sometimes they are last. The citation patterns take care of this if possible and try to extract one significant name.

5. Further refine elements to make searching easier, i.e. one surname for author, only title proper for titles.

The script removes things like “(ed.)” from the author, which would obviously throw off a catalogue search. Subtitles (everything after and including semi-colons) are also removed from titles to lessen any chance of variation and lost matches.

6. Construct a SPARQL Query using author surname and title regular expressions pre-filtered for speed by a significant word using bif:contains

Constructing the SPARQL query was the most tricky part. Ignoring the various standard prefixes pilfered from the standard example, the most important part is the title search. There are three unsatisfactory options:
  • Match the extracted title directly to a dc:title. This doesn’t work as the cited title is unlikely to be exactly the same in all matters of words included, spacing, punctuation, etc.
  • Use bif:contains for keyword searching as used in the BNB SPARQL example. This is certainly quick, but has a number of drawbacks: it can only be used once, for a single keyword (either of the two significant words in The history of Mexico, for example, will produce a huge number of hits). It is also not standard SPARQL. I was happy to overlook this if it worked, but ARC2 didn’t like it at all until I worked out it has to be used in angle brackets, e.g. ?title <bif:contains> "Mexico".
  • Use regular expressions (e.g. FILTER regex (?title, "The History of Mexico", "i")) for keyword searching. This is extremely powerful: you can easily construct searches, but it is so slow as to routinely time out, rendering it effectively useless.
The In Our Time script uses a combination of the last two techniques to get a result. First, it finds a significant word in the title, ideally the first four-letter word after the first word (i.e. to avoid “The”, “A”, “That”, etc.) or, failing that, the first word. The SPARQL query then uses bif:contains to search for that word. The query then does a regular expression filter on the whole title. I don’t know if this is how SPARQL endpoints would generally work, but the BNB appears to only look for regular expression matches on the records already filtered by the bif:contains. In any case, it works.

In addition, the script also uses a regular expression to search by the author’s surname. It doesn’t search by date, as the date of publication on BNB (dc:issued) is not in a standard format (e.g. "1994-01-01 00:00:00", "2005 printing", "c2006"). It is also not keyword searchable. You can see all the author-title hits with links to BNB records by viewing Hits.
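Putting the two title techniques together, the heart of the query looks something like this (a reconstruction from the description above rather than the script’s exact output; the variable names are mine):

PREFIX dc: <http://purl.org/dc/terms/>

SELECT ?book ?title
WHERE {
  ?book dc:title ?title .
  # quick pre-filter on one significant word (Virtuoso-specific, hence the angle brackets)
  ?title <bif:contains> "Mexico" .
  # then the full, slow regular expression against the whole cited title
  FILTER regex(?title, "The history of Mexico", "i") .
}

The author surname is handled in the same query with another FILTER regex on the name of the creator or contributor.
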
7. Filter Hits by date of publication

You can, however, retrieve the date from the BNB and process it afterwards, which is what the script does. It finds the four digit year and compares this to the four digit year it found on the In Our Time site. You can see all the author-title-date hits with links to BNB records by viewing Date Hits. Perhaps rather arbitrarily, the first book in the resulting array is selected as the result.

8. Obtain and display metadata from BNB

When the search has taken place, the matching title, author (only one), and date are obtained from the BNB. The title and author are displayed, as are the stripped-down year of publication and a link to the full BNB record. For records that returned no hits on the BNB, the Basic Data is simply regurgitated.

The script also downloads the full combined RDF for all the hits, which is displayed at the bottom of the page, viewable in several formats.

Further work

I think a lot more work could be done on this given time, both to improve it and to extend it. In no particular order:
  • Make it more pretty. It is currently designed to look merely acceptable while I concentrated on functionality. I have also tried to show much of my working, which a finished version would obviously hide.
  • Refinement of the detection of citation style. This is probably the most critical improvement, and ultimately decides if this approach would be useful outside of In Our Time on other reading lists. There are more patterns that need to be added, especially for older pages.
  • Further preparation of data for searching. Currently, for example, a book on the Mexico reading list doesn’t return any hits because of the exclamation mark in “Zapata!”. This could be stripped, and there are lots of similar refinements no doubt.
  • More interesting/useful output. The script’s outputs are currently quite raw or basic as I concentrated on the mechanisms for pulling information from free text for automatic catalogue searching. It might be useful to output proper standalone RDF files, references in standard reference formats (e.g. Harvard) in HTML or text files, files in standard reference management formats, perhaps even MARC, and so on. Some of these would perhaps be fairly straightforward.
  • Links to catalogues or online bookshops so you could borrow or buy the books from the reading list based on ISBNs taken from the BNB record.
  • Searching more catalogues. If a search fails on the BNB, the script could search other open catalogues, e.g. the Cambridge catalogue.
  • Greasemonkey script or plugin so that a button appears next to the Further Reading section when you view an In Our Time page. This could even appear next to individual books. Ideally (pie in the sky) such a plugin would have a stab at finding books on any web page.
  • Other ways of firing the script not requiring manual addition to the URL or use of a bookmarklet, e.g. a searchbox of some kind (either accepting a URL as input or keywords for the titles of broadcasts).

Please do leave comments or questions.

ISKO-UK linked data day

On 14 September I went to the ISKO-UK one day conference on Linked Data: the Future of Knowledge Organisation on the Web.  For me, this followed on from a previous Talis session on Linked Data and Libraries I attended at the British Library in June, which I found really very interesting and informative.

The ISKO conference was a lot broader in scope (it was noted by several speakers discussing the BBC’s use of linked data that there were 22 attendees from the BBC) and included talks about local and national government, business, and libraries, as well as the Ordnance Survey. The following is a brief and personal overview, pausing in more detail over aspects that interested me more. It assumes a passing acquaintance with linked data and basic RDF.

Professor Nigel Shadbolt from the University of Southampton, a colleague of Tim Berners-Lee at Southampton as well as in advising the British Government on developing the data.gov.uk site, opened with a talk about Government Linked Data: A Tipping Point for the Semantic Web. There were two interesting points from this (there were many, but you know what I mean). First was the speedy and incredible effects of openly releasing government data. Professor Shadbolt used the analogy of John Snow’s mapping of the 1854 cholera epidemic, which identified a pump as the source and led to the realisation that water carried cholera. He mentioned the release of government bike accident data that was little used by the government but which was taken up and used by coders within days to produce maps of accident hotspots in London and guides to avoiding them.

The second point was the notion of the “tipping point” for the semantic web and linked data referred to in the talk’s title. Several speakers and audience members referred to the similar idea of the “killer implementation”, a killer app for the semantic web that would push it into the mainstream. The sheer quantity of data, and the use it is quickly put to, often beyond the imagination of those who created and initially stored it, was quite compelling. Richard Wallis made a similar point when discussing the relative position of the semantic web compared to the old-fashioned web in the 1990s. He noted that it is now becoming popular to the extent that it is nearly impossible to realistically list semantic web sites, and predicts that it will explode in the next year or so. Common to both Nigel Shadbolt’s and Richard Wallis’s talks was a feeling almost of evangelism: Richard Wallis explicitly refers to himself as a technology evangelist; Nigel Shadbolt referred to open government data as “a gift”. Despite being relatively long in the tooth, RDF, linked data, and all that have not yet taken off, and both seemed keen to push it: when people see the benefits, it won’t fail to take off.

There were interesting dissenting voices to this. Martin Hepp, who had spent over eight years coming up with the commercial GoodRelations ontology, was strongly of the opinion that it is not enough to merely convince people of the social or governmental benefits, but rather the linked data community should demonstrate that it can directly help commerce and make money. The fact that GoodRelations apparently accounts for 16% of all RDF triples in existence and is being used by corporations such as BestBuy and O’Reilly (IT publishers) seems to point to a different potential tipping point. Interestingly, Andreas Blumauer in a later talk said that SKOS (an RDF schema to be discussed in the next paragraph) could introduce Web 2.0 mechanisms to the ‘web of data’. Perhaps, then, SKOS is the killer app for linked data (rather than government data or commercial data as suggested elsewhere), although Andreas Blumauer also agreed with Martin Hepp in saying that “If enterprises are not involved, there is no future for linked data”.

In my own ignorant judgement, I would suggest government data is probably a more likely tipping point for linked data, closely followed by Martin Hepp’s commercial data. It is government data that is making people aware of linked data, and especially open data, in the first place. This is more likely to recruit and enthuse. Based on the foregoing, I think the commercial data will be the one that provides the jobs: it may change the web more profoundly but in ways fewer people will even notice. I suppose it all depends on how you define tipping points or killer apps, which I don’t intend to think about for much longer.

The second talk, and the start of a common theme, was about SKOS and linked data, by Antoine Isaac. This was probably the most relevant talk for librarians and was for me a simple introduction to SKOS, which seems to be an increasingly common acronym. SKOS stands for Simple Knowledge Organisation System and is designed for representing (simply) things like thesauruses* and classification schemes, based around concepts. These concepts have defined properties such as preferred name (“skos:prefLabel”), non-preferred term (“skos:altLabel”), narrower term (“skos:narrower”), broader term (“skos:broader”), and related term (“skos:related”). The example I’ve been aware of for some time is the representation of Library of Congress Subject Headings (LCSH) in SKOS, where all the SKOS ideas I’ve just mentioned will be recognisable to a lot of librarians. In the LCSH red books, for example, preferred terms are in bold, non-preferred terms are not in bold and are preceded by UF, and the relationships between concepts are represented by the abbreviations NT, BT, and RT. In SKOS, concepts and labels are more clearly distinct. An example of SKOS using abbreviated linked data might be (stolen and adapted from the W3C SKOS primer):

ex:animals rdf:type skos:Concept;
skos:prefLabel "animals";
skos:altLabel "creatures";
skos:narrower ex:mammals.

This means that ex:animals is a SKOS concept; that the preferred term for ex:animals is “animals”; a non-preferred term is “creatures”; and, that a narrower concept is ex:mammals. In a mock LCSH setting this might look something like this:

Animals
UF Creatures
NT Mammals

In the LCSH example, however, the distinction between concepts and terms is lost. One aspect of SKOS that Antoine Isaac spent some time on is the idea of equivalent concepts, especially across languages. In RDF you can bind terms to languages using an @ sign, something like this:

ex:animals rdf:type skos:Concept;
skos:prefLabel "animals"@en;
skos:prefLabel "animaux"@fr.

However, you can also link concepts more directly using skos:exactMatch, skos:closeMatch, skos:broadMatch, skos:narrowMatch, and skos:relatedMatch to link thesauruses and schemes together. These are admittedly a bit nebulous. He mentioned work that had been done on linking LCSH to the French Rameau and from there to the German subject thesaurus SWD. For example:

Go to http://lcsubjects.org/subjects/sh85005249, which is the LCSH linked data page for “Animals”. (You can view the raw SKOS RDF using the links at the top right, although sadly not in the N3 or Turtle format which I have used above.) At the bottom of the page there are links to “similar concepts” in other vocabularies, in this case Rameau.
Go to the first one, http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb119328694, and you see the Rameau linked data page for “Animaux”.

In the LCSH RDF you can pick out the following RDF/XML triples:

<rdf:Description rdf:about="http://lcsubjects.org/subjects/sh85005249#concept">
<rdf:type rdf:resource="http://www.w3.org/2004/02/skos/core#Concept"/>
<skos:prefLabel>Animals</skos:prefLabel>
<skos:altLabel xml:lang="en">Beasts</skos:altLabel>
<skos:narrower rdf:resource="http://lcsubjects.org/subjects/sh95005559#concept"/>
<skos:closeMatch rdf:resource="http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb119328694"/>

which is basically saying the same as (clipping the URIs for the sake of clarity):

lcsh:sh85005249#concept rdf:type skos:Concept;
skos:prefLabel "Animals"@en;
skos:altLabel "Beasts"@en;
skos:narrower lcsh:sh95005559#concept;
skos:closeMatch rameau:cb119328694.

Not too far from the first example I gave, with the addition of a mapping to a totally different scheme. Or in mock red book format again, but with unrepresentable information missing:

Animals
UF Beasts
NT Food animals

Oh that some mapping like this were available to link LCSH and MeSH…!

Several other talks touched on SKOS, such is its impact on knowledge management. Andreas Blumauer talked about it in demonstrating a service provided by punkt. netServices, called PoolParty.** I don’t want to go into depth about it, but it seemed to offer a very quick and easy way to manage a thesaurus of terms without having to deal directly with SKOS or RDF. During the talk, Andreas Blumauer briefly showed us an ontology based around breweries, then asked for suggestions for local breweries. Consequently, he added information for Fullers and published it right away. To see linked data actually being created and published (if not hand-crafted) was certainly unusual and refreshing. Most of what I’ve read and seen has talked about converting large amounts of data from other sources, such as MARC records, EAD records, Excel files, Access databases, or Wikipedia. I’ve had a go at hand-coding RDF myself, which I intend to write about if/when I ever get this post finished.

I don’t want to go into too much detail about it***, but another SKOS-related talk was the final one, from Bernard Vatant, who drew on his experience in a multinational setting in Europe to promote the need for systems such as SKOS to deal more rigorously with terms, as opposed to concepts. Although SKOS would appear to be about terms, in many ways it is not clear on matters of context. For instance, using skos:altLabel “Beasts” for the concept of Animals, as in the examples given above, gives no real idea of what the context of the term is. Here is a theoretical made-up example of some potential altLabels for the concept of Animals which I think makes some of the right points:

Animal (a singular)
Beasts (synonym)
Animaux (French term)
Animalia (scientific taxonomic term)

These could all be UF or altLabels but using UF or altLabel gives no idea about the relationship between the terms, and why one term is a non-preferred term. He gave another instance of where this might be important in a multinational and multilingual context, where the rather blunt instrument of adding @en or @fr is not enough, when a term is different in Belgian, French, or Canadian varieties of French. This has obvious parallels in English, where we often bemoan the use of American terms in LCSH. Whether embedded in LCSH or as a separate list, it might be possible to better tailor the catalogue for local conditions if non-preferred terms were given some context. Perhaps “Cellular telephones” could be chosen by a computer to be displayed in a US context, but “mobile telephones” could be chosen in a UK context if the context of those terms were known and specified in the thesaurus.
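One small piece of machinery that might help with the geographic part of this, at least: RDF language tags are not restricted to bare @en or @fr, as BCP 47 region subtags can be added. A made-up sketch, reusing the ex: prefix from above (the concept and the Canadian French label are my own invention), might be:

ex:mobilePhones rdf:type skos:Concept;
skos:prefLabel "Mobile telephones"@en-GB;
skos:prefLabel "Cellular telephones"@en-US;
skos:prefLabel "Téléphones mobiles"@fr-FR;
skos:prefLabel "Téléphones cellulaires"@fr-CA.

This still says nothing about why a label is non-preferred, or that “Animalia” is a scientific taxonomic term, which I take to be Bernard Vatant’s larger point.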

Moving away from SKOS, Andy Powell talked about the Dublin Core Metadata Initiative (DCMI). I’ll admit I’ve always been slightly confused as to what the precise purpose of Dublin Core (DC) is and how one should go about using it. Andy Powell’s talk explained a lot of this confusion by detailing how much DC had changed and reshaped itself over the years. To be honest, in many ways I found it surprising that it is still active and relevant given the summary I heard. The most interesting part of his talk for me was his description of the mistakes or limitations of the DCMI caused by its library heritage. Another confession (my notes here are awful), but the most important point that stuck out for me was the library use of a record-centric approach, e.g.:

  • each book needs a self-contained record
  • this record has all the details of the author, title, etc.
  • this record is used to ship the record from A to B (e.g. from bibliographic utility to library catalogue),
  • this record also tracks the provenance of the data within the record, such as within the 040 field: it all moves together as one unit.

Contrast this with the semantic web approach, where data is carried in triples. A ‘record’, such as an RDF file, might contain only a sameAs triple which relates data about a thing to a data store elsewhere; many triples from multiple sources could be merged together and information about a thing could be enriched or added to. This kind of merging is not particularly easy with, or encouraged by, MARC records (although the RLUK database does something similar, and quite tortuously, when it deduplicates records). There’s a useful summary of all this at all things cataloged, which opens thus:

Despite recent efforts by libraries to release metadata as linked data, library records are still perceived as monolithic entities by many librarians. In order to open library data up to the web and to other communities, though, records should be seen as collections of chunks of data that can be separated / parsed out and modeled. Granted, the way we catalog at the moment makes us hold on to the idea of a “record” because this is how current systems represent metadata to us both on the back- and front-end. However with a bit of abstraction we can see that a library record is essentially nothing but a set of pieces of data.

One problem with the linked data approach, though, is the issue of provenance, which was referred to above as one of the roles the MARC record undertakes (ask OCLC, e.g. http://community.oclc.org/metalogue/archives/2008/11/notes-on-oclcs-updated-record.html). If you take a triple out of its original context or host, how can you tell who created the triple? Is it important? Richard Wallis always makes the point that triples are merely statements: like other web content, they are not necessarily true at all. Some uneasiness about the trustworthiness or quality of data turned up at various points during the day. I think it is an interesting issue, not that I know what the answer is, especially when current cataloguing practices largely rely on double checking work that has already been done by other institutions because that work cannot really be trusted. There are other issues and possible solutions that are a little outside my comfort zone at the moment, including excellent buzzwords like named graphs and bounded graphs.
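For what it’s worth, my rough understanding of what a named graph involves: a set of triples is given its own URI, and statements can then be made about that graph, such as who asserted it. A sketch in TriG syntax (the graph URI and provenance values are invented; the subject triples are borrowed from the LCSH example above):

@prefix dc: <http://purl.org/dc/terms/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# the graph itself: a bundle of triples with its own URI
<http://example.org/graphs/animals-concept> {
  <http://lcsubjects.org/subjects/sh85005249#concept>
    skos:prefLabel "Animals"@en ;
    skos:altLabel "Beasts"@en .
}

# statements about the graph, carrying the provenance
<http://example.org/graphs/animals-concept>
  dc:creator <http://example.org/organisations/library-x> ;
  dc:created "2010-09-14" .

Whether this answers the provenance question I am not qualified to say, but it at least shows how triples and statements about where they came from can travel together.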

Andy Powell also mentioned, among other things:

  • the “broad semantics” or “fuzzy buckets” of DC which derive in large part from the library catalogue card, where, for instance, “title” or “creator” can mean all sorts of imprecise things;
  • flat world modelling where two records are needed to describe, say, a painting and a digital image of the painting. This sounds to me like the kind of thing RDA/FRBR is attempting (awkwardly in my view) to deal with.
  • the use of strings instead of things, such as describing an author as “Shakespeare, William” rather than <http://www.example/authors/data/williamshakespeare>. This mirrors one of the bizarre features of library catalogues where authorities matching is generally done by matching exact strings of characters rather than identifiers. See Karen Coyle for an overview of the headaches involved.

There were three other talks which I don’t propose to go into in much detail. I’ve touched on Richard Wallis’s excellent (and enthusiastic) introduction to the whole idea of linked data and RDF, a version of which I found dangerously intriguing at a previous event given by Talis. He talked about, among other things, the BBC’s use of linked data to power its wildlife pages (including drawing content from Wikipedia) and its World Cup 2010 site; in fact, how linked data is making the BBC look at the whole way it thinks about data.

His other big message for me was to compare the take-up of the web with the current take-up of linked data in order to suggest that we are on the cusp of something big: see above for my discussion of the tipping point.

* I don’t like self-conscious classical plurals where I can help it, not that there’s anything wrong with them as such.
** I can’t help but find this name a little odd, if not actually quite camp. I expect there’s some pun or reference I’m not getting. Anyway. Incidentally, finding information about PoolParty from the punkt. website is not easy, which I find hard to understand given that it is a company wanting to promote its products; and, more specifically, it is a knowledge management (and therefore also information retrieval) company.
*** Partly because I don’t think I could do it justice, partly also because it was the most intellectual talk and took place at the end of the day.