Importing URLs into a large MARC file with MarcEdit

This is brief documentation of how I used MarcEdit to import correct URLs from an Excel spreadsheet into a large file of MARC records. The name of the ebook supplier has been changed to protect the innocent. The values below worked for me on the Excel spreadsheet I used.

Problem. Ebook supplier (EBS) supplies MARC records of generally good quality for a package of 600 ebooks. However, the URLs are inconsistent: there are between one and four in each record; several ebook suppliers are represented, not just EBS; and many of the DOIs for EBS (the only URLs that are consistent) do not work. We do have an Excel spreadsheet listing OCLC numbers and valid URLs for all titles.

General plan. Delete all the 856 fields in the MARC file and replace them with those from the spreadsheet. To do this, convert the relevant bits of the spreadsheet to a simple MARC file and merge the two using MarcEdit.

Delete the URLs from the original file
Load/convert the original file as an .mrk file. Use the Tools>Add/Delete Field option to delete all the 856 fields in the original file.

Convert the spreadsheet to MARC.

  • In MarcEdit (version 6), select Export Tab Delimited Text.
  • Choose the spreadsheet as the Source File.
  • Choose a filename for the MARC text (.mrk) file to be created.
  • Specify the name of the sheet for an Excel file (in my case, EBS).
  • Choose the delimiter that separates the data (in my case I left this as Tab, which worked).
  • Choose options (I left the LDR/008 and character encoding alone as I don’t think they mattered).
  • Next. The data snapshot shows the columns numbered Field 0 to whatever. I needed columns A (OCLC number) and P (URL), which meant Fields 0 and 15. Selecting the fields and specifying where they map to is done in the Settings section by creating Arguments. For this, I needed two arguments, one for each field:
  • First Argument (OCLC control number to go into the 001 field): Select = “Field 0”; Map to = “001”; Indicators = “\\”; Term. punctuation = “”; Constant Data & Repeatable Subfield = “”
  • Add Argument when done.
  • Second Argument (URL to go into the 856 field): Select = “Field 15”; Map to = “856$u”; Indicators = “40”; Term. punctuation = “”; Constant Data & Repeatable Subfield = “”
  • Add Argument when done.
  • Finish. This disconcertingly takes you back to the previous screen but if you open up the .mrk file in the MarcEditor it should be all done. Each record will look something like this:

=LDR 00000nam 2200000Ia 45e0
=001 123456789
=008 140812s9999\\\\xx\\\\\\\\\\\\000\0\und\d
=856 40$uhttp://ebooks.ebs.com/book/12.345/AB678

Edit the new .mrk
As the OCLC numbers in the original MARC records were in the form “ocn123456789” (rather than simply “123456789”), I needed to do a find for “=001 “ and replace it with “=001 ocn” on the new file, then save it.
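
For illustration (the number here is made up), each 001 in the new file goes from

=001 123456789

to

=001 ocn123456789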

Merge

  • From the Tools menu of MarcEdit, select Merge Records
  • Choose the .mrk of the original MARC records as the Source File (I don’t know if the .mrc would work too)
  • Choose the newly created .mrk file as the Merge File
  • Choose a filename for the newly merged file to be created
  • Leave Record identifier as 001. If you were matching on the ISBN, presumably the 020 would work, but I haven’t tried it. Other options are 010, 020, 022, 035, and MARC21 (?)
  • Next.
  • Select the Merge Selected Field option
  • Next
  • Specify the 856 and move it to the Merge Fields box
  • “Merge Completed”

Done
Ta da!

RLUK/European Library linked data sample

RLUK and the European Library (of which RLUK is now a member) have just released 17 million records as linked open data. They have released three sets (via Mike Mertens), for which links to the RDF Turtle versions are below:

I’ve had a quick look at the last of these, just to get an idea, and I’ve isolated what I think is all the data for one book, chosen at random. The whole block of Turtle prefixes from the start of the file is included:


@prefix rdaa: <http://rdaregistry.info/Elements/a/> .
@prefix rdac: <http://rdaregistry.info/Elements/c/> .
@prefix rdae: <http://rdaregistry.info/Elements/e/> .
@prefix rdam: <http://rdaregistry.info/Elements/m/> .
@prefix rdaw: <http://rdaregistry.info/Elements/w/> .
@prefix rdau: <http://rdaregistry.info/Elements/u/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix edm: <http://www.europeana.eu/schemas/edm/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix frbrer: <http://iflastandards.info/ns/fr/frbr/frbrer/> .
@prefix ore: <http://www.openarchives.org/ore/terms/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix wgs84pos: <http://www.w3.org/2003/01/geo/wgs84_pos#> .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> a dcterms:BibliographicResource ;
      rdam:P30004 "local identifier: http://data.copac.ac.uk/iid/65204626" ;
      rdau:P60049 <http://rdvocab.info/termList/RDAContentType/1020> ;
      rdam:P30003 "single unit"^^<http://rdvocab.info/termList/ModeIssue> ;
      rdau:P60520 "Unkown"@en ;
      rdam:P30004 "isbn: 0198750315" ;
      rdam:P30156 "The philosophy of history" ;
      rdau:P60339 "edited by Patrick Gardiner." ;
      rdam:P30157 "Oxford readings in philosophy" ;
      rdau:P60398 _:node18kdvnimbx4386 .

_:node18kdvnimbx4386 a rdac:C10004 ;
      rdaa:P50111 "Patrick L. Gardiner" ;
      rdaa:P50121 "1922" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> rdau:P60073 "1974" ;
      rdau:P60099 <http://id.loc.gov/vocabulary/iso639-2/eng> ;
      rdau:P60163 _:node18kdvnimbx4387 .

_:node18kdvnimbx4387 rdau:P60366 "Oxford University Press" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> rdau:P60444 _:node18kdvnimbx4388 .

_:node18kdvnimbx4388 a rdac:C10005 ;
      rdaa:P50032 "London" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> rdau:P60163 <http://id.loc.gov/vocabulary/countries/uk> ;
      dcterms:subject _:node18kdvnimbx4389 .

_:node18kdvnimbx4389 a frbrer:C1007 ;
      rdfs:label "History, Philosophy." ;
      dcterms:hasPart _:node18kdvnimbx4390 .

_:node18kdvnimbx4390 a frbrer:C1007 ;
      rdfs:label "History" .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000084490807> dcterms:extent "224 p. ;" , "21 cm." ;
      rdau:P60470 "Includes index." ;
      dcterms:description "Bibliography: p. [218]-222." .

Some initial observations:

A short snippet from another book showing a blank node asserted as being the same as a VIAF entity, having a relationship with a work using RDA, and the detailed RDA data elements for the name:


_:node18kdvnimbx245 owl:sameAs <http://viaf.org/viaf/17463572/> .

<http://data.theeuropeanlibrary.org/BibliographicResource/3000087185802> rdau:P60398 _:node18kdvnimbx245 .

_:node18kdvnimbx245 a rdac:C10004 ;
      rdaa:P50111 "Niccolo Pagliarini" ;
      rdaa:P50121 "1717" ;
      rdaa:P50120 "1795" .

The BIBFRAME Work

BIBFRAME has worked on modelling works as Works within the BIBFRAME model, similar to the RDA modelling work, itself modelled on the work on the FRBR model of Works and Expressions. A BIBFRAME Work is a creative work, perhaps a FRBR Work, or an RDA FRBR Work but it also expresses a FRBR Expression, and of course an RDA FRBR Expression. A Work may express another Work based on others’ work, not just a FRBR Work or an RDA Work. That also works. FRBR Works or RDA Works expressed as BIBFRAME Works can relate to FRBR Expressions (BIBFRAME Works or RDA Expressions). So, Works are works that can be Works but also Expressions linked to Works that really are Works.

MRV MARC Record Viewer

I have finally completed a multiple-record MARC Record Viewer. This has been rather long in the making but is essentially a quick and practical tool for looking at and assessing MARC records without having to load them into specialist software like MarcEdit or an LMS. It is much the same as the viewer built for my Codecademy project, except that:

  • It reads multiple records in one file, rather than just one, and provides a count (a rough sketch of this splitting step is given after this list).
  • It has an input box so the records don’t have to be hard-coded into the script.
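
As a very rough sketch (this is not the actual MRV code, and the rawInput variable and the “count” element are made up for the example), the splitting-and-counting step boils down to breaking the file on the MARC record terminator character (hex 1D):

var RECORD_TERMINATOR = String.fromCharCode(0x1d); // marks the end of each MARC record

// Split the raw contents of a transmission-format (.mrc) file into individual records
function splitMarcRecords(raw) {
  var parts = raw.split(RECORD_TERMINATOR);
  var records = [];
  for (var i = 0; i < parts.length; i++) {
    // Ignore the empty string left after the final terminator, and any stray whitespace
    if (parts[i].replace(/\s/g, "").length > 0) {
      records.push(parts[i]);
    }
  }
  return records;
}

var records = splitMarcRecords(rawInput); // rawInput: the pasted file contents
document.getElementById("count").innerHTML = records.length + " record(s) found";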

Some example .mrc records of varying lengths can be found here.

It is written in client-side Javascript, so you can view source and see how it works, copy it, and do what you like with it (although I would love to know if you do so). I quite defiantly haven’t used JQuery for this, which would probably have made the whole thing a bit easier; instead it uses proper old skool DOM scripting. It uses a minimal amount of CSS, in two files: a generic one, and one that roughly mimics how MARC records look in an Aleph editing screen. It should be fairly trivial to change this file to suit other purposes.
Thank you to those who have already had a shufti at earlier versions of this, especially on different browsers, and provided feedback! Please do let me know if you have any comments on this, suggestions for improvements, or if you come across errors. I have some ideas for improvements, mainly for making user input easier and offering different formatting of results. I hope to start using JQuery for these too, and perhaps a later conversion of the whole thing would be in order.

One record in lots of data formats

For a Dev8d session I did with Owen Stephens in February, I presented data for a single book and followed how it had changed as standards changed, trying above all to explain to non-cataloguers why catalogue records look and work the way they do. At least one person found it useful. I am now drafting an internal session at work on the future of cataloguing and am planning to take a similar approach to briefly explain how we got to AACR2 and MARC21, and where we are heading. I took the example I used at Dev8d and hand-crafted some RDA examples, obtained a raw .mrc MARC21 file, and used the RDF from Worldcat to come up with a linked data example.

I have tried to avoid notes on the examples themselves. However, do note the following: the examples generally use only the same simple set of data elements, basically the bits you might find on a basic catalogue card (no subjects, few notes, etc.); the book is quite old, so there is no ISBN anyway. The original index card is from our digitised card catalogue. The linked data example was compiled by copying the RDFa from the Worldcat page for the book; this was then put into this RDFa viewer (suggested by Manu Sporny) to extract the raw RDF/Turtle; I manually hacked this further to replace full URIs with prefixes as much as possible in an attempt to make it more readable (I suspect this is where some errors may have crept in). The example itself is of course a conversion from an AACR2/MARC21 record. C.M. Berners-Lee is Tim’s dad.

Feel free to use this and to point out mistakes. I would particularly welcome anyone spotting anything amiss in the RDA and linked data, where I am sure I have mangled the punctuation in both.

Harvard Citation

Berners-Lee, C.M. (ed.) 1965, Models For Decision: a Conference under the Auspices of the United Kingdom Automation Council Organised by the British Computer Society and the Operational Research Society, English Universities Press, London.

Pre-AACR2 on Index Card

BERNERS-LEE, C.M., [ed.].

Models for decision; a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society.

London, 1965.

x, 149p. illus. 22cm.

AACR2 on Index Card

Models for decision : a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society / edited by C.M. Berners-Lee. -- London : English Universities Press, 1965.

x, 149 p. : ill. ; 23 cm.

Includes bibliographical references.

-       Berners-Lee, C. M.

AACR2 in MARC21 (raw .mrc)

00788nam a2200181 a 4500001002700000005001700027008004100044024001500085245021000100260004900310300003200359504004100391650003300432700002300465710003900488710003000527710004900557_UCL01000000000000000477125_20061112120300.0_850710s1965    enka     b    000 0 eng  _8 _ax280050495_00_aModels for decision :_ba conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society /_cedited by C.M. Berners-Lee._  _aLondon :_bEnglish Universities Press,_c1965._  _ax, 149 p. :_bill. ;_c23 cm._  _aIncludes bibliographical references._ 0_aDecision making_vCongresses._1 _aBerners-Lee, C. M._2 _aUnited Kingdom Automation Council._2 _aBritish Computer Society._2 _aOperational Research Society (Great Britain)__

AACR2 in MARC21

245 00 $a Models for decision :
$b a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society /
$c edited by C.M. Berners-Lee.
260 __ $a London :
$b English Universities Press,
$c 1965.
300 __ $a x, 149 p. :
$b ill. ;
$c 23 cm.
504 __ $a Includes bibliographical references.
700 1_ $a Berners-Lee, C. M.

RDA

Title proper Models for decision
Other title information a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society
Statement of responsibility relating to title proper edited by C.M. Berners-Lee
Place of publication London
Publisher’s name The English Universities Press Limited
Date of publication 1965
Copyright date ©1965
Media type unmediated
Carrier type volume
Extent x, 149 pages
Dimensions 23 cm
Content type text
Illustrative content Illustrations
Supplementary content Includes bibliographical references.
Contributor Berners-Lee, C. M.
Relationship designator editor of compilation

RDA in MARC21

245 00 $a Models for decision :
$b a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society /
$c edited by C.M. Berners-Lee.
264 _1 $a London :
$b The English Universities Press Limited,
$c 1965.
264 _4 $c ©1965
300 __ $a x, 149 pages :
$b illustrations ;
$c 23 cm.
336 __ $a text
$2 rdacontent
337 __ $a unmediated
$2 rdamedia
338 __ $a volume
$2 rdacarrier
504 __ $a Includes bibliographical references.
700 1_ $a Berners-Lee, C. M.,
$e editor of compilation.

Linked data


@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix schema: <http://schema.org/> .
@prefix worldcat: <http://www.worldcat.org/oclc/> .
@prefix library: <http://purl.org/library/> .
@prefix viaf: <http://viaf.org/viaf/> .
@prefix lc_authorities: <http://id.loc.gov/authorities/names/> .
@prefix madsrdf: <http://www.loc.gov/mads/rdf/v1#> .

worldcat:221944758
  rdf:type schema:Book;
  library:oclcnum "221944758";
  schema:name "Models for decision : a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society";
  library:placeOfPublication _:1;
  schema:publisher _:4;
  schema:datePublished "[1965]";
  schema:numberOfPages "149";
  schema:contributor viaf:149407214;
  schema:contributor viaf:130073090;
  schema:contributor viaf:137135158;
  schema:contributor viaf:36887201 .
_:1
  rdf:type schema:Place;
  schema:name "London :" .
_:4
  rdf:type schema:Organization;
  schema:name "English Universities Press" .
viaf:149407214
  rdf:type schema:Organization;
  madsrdf:isIdentifiedByAuthority lc_authorities:n79056431;
  schema:name "British Computer Society." .
viaf:130073090
  rdf:type schema:Organization;
  madsrdf:isIdentifiedByAuthority lc_authorities:n85076053;
  schema:name "Operational Research Society." .
viaf:137135158
  rdf:type schema:Organization;
  madsrdf:isIdentifiedByAuthority lc_authorities:n79063901;
  schema:name "Institution of Electrical Engineers." .
viaf:36887201
  rdf:type schema:Person;
  schema:name "Berners-Lee, C. M." .

How big is my book: Mashcat session

At Mashcat on 5 July in Cambridge, I gave an afternoon session on getting computer-readable information from the textual information held in MARC21 300 fields using Javascript and regular expressions. I intended this to be useful for cataloguers who might have done some of Codecademy’s Code Year programme, as well as an exploration of how data is entered into catalogue records, its problems, and potential solutions.

AACR2/MARC (and RDA) records store much quantitative information as text, usually as a number followed by units, e.g. “31 cm.” or “xi, 300 p”. This is not easy for computers to deal with. For instance, a computer programme cannot compare two sizes, e.g. “23 cm.” and “25 cm.”, without first extracting a number from each string (23 and 25) as well as determining the units used (cm). In some cases, units might vary: in AACR2, books below 10 cm. are measured in mm., and non-book materials are often measured in inches (abbreviated to in.). Potential uses for better quantitative data in the 300$c include planning shelving for reclassification and more easily finding books by size or size range.
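
To see the problem in miniature: comparing the raw strings directly gives the wrong answer, and even extracting the numbers only works when the units happen to match (an illustrative snippet, not part of the session script):

// String comparison looks at characters, not values
console.log("9 cm." < "23 cm.");                         // false: "9" sorts after "2"
// Extracting the numbers gives the right answer, but only because the units match
console.log(parseFloat("9 cm.") < parseFloat("23 cm.")); // true: 9 < 23
// "105 mm" and "23 cm." cannot be compared at all until both are converted to one unit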

Before the session, I sketched out a possible solution using Javascript and regular expressions to make this conversion for dimensions in the 300$c. I have put up a version of A script to find the size of an item in mm. based on the 300$c, with the addition of an extra row which you can fill in to test your own examples without having to edit the script.

If you do want to look at how it works or try editing it yourself you can view source, copy all the HTML, then paste it into a text editor. Save it, then open the file using a browser to test it. Refresh the browser when you change the file.

The heart of the script looks like this:

var dollar_c = [
  "9 mm",
  "4 in.",
  "4 3/4 in.",
  "30 cm.",
  "1/2 in.",
  "20 x 40 cm."
];

// Convert text to mm
function text_to_mm (text) {
  // Convert fractions to decimals
  text = text.replace(/(\d*) (\d)\/(\d)/, function(str, p1,p2,p3) {return parseFloat(p1)+p2/p3});
  text = text.replace(/(\d)\/(\d)/, function(str, p1,p2) {return parseFloat(p1/p2)});
  // Extract the size of the book
  size = text.replace (/([\d\.]*).*/, "$1");
  // Extract the units
  units = text.replace(/.*([a-z]{2}).*/g, "$1");
  // Convert from various units to mm
  if (units === "mm") {
    var mm = size;
  }
  if (units === "cm") {
    var mm = size * 10;
  }
  if (units === "in") {
    var mm = size * 25.4;
  }
  mm=Math.floor(mm);
  return mm;
}

It starts with a declaration of an array of examples to be tested: you can alter this with your own if you prefer. text_to_mm is the function that does all the work. It takes in the text from a 300$c, converts fractions (e.g. 4 3/4) to decimals (4.75), finds a number, finds a unit, then performs calculations on the size depending on what the unit is, to produce a standard figure in mm. At Mashcat, Owen Stephens managed to plug an adaptation of this script into Blacklight to create an index of book sizes. Using this he could do things like find the most common sizes or the largest book in a collection.
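
A simple way to see what the function does is to loop over the test array and print the conversions (this driver is not part of the original script; the figures in the comment are what the function above returns for those examples):

// Run text_to_mm over the test examples and print the results
for (var i = 0; i < dollar_c.length; i++) {
  console.log(dollar_c[i] + " -> " + text_to_mm(dollar_c[i]) + " mm");
}
// e.g. "9 mm" -> 9, "30 cm." -> 300, "4 in." -> 101, "4 3/4 in." -> 120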

The main focus of my session, however, was on a similar script to work out how many actual pages there are in a book, given the contents of a 300$a, e.g. “300 p.”, “ix, 350 p.”, “100 p., [45] leaves of plates” (a page being one side of a sheet of paper; a leaf being a sheet of paper only printed on one side, so counting as two pages). I have also published a version of A script to find the absolute no. of pages based on the 300$a with the similar addition of a row for easy user testing. Potential uses for recording the number of pages rather than the pagination include planning shelving space, easier-to-understand displays for users, and finding books of specified lengths.

The script starts with a similar array of examples to be tested:

// An array of test examples
var dollar_a = [
  "9 p.",
  "9p",
  "30 leaves",
  "30 p., 20 leaves",
  "xiv, 20 p.",
  "20, 30 p.",
  "20, 30, 40 p.",
  "xv, 20, 30, 40 p., 5, 5 leaves of plates",
  "clviii, ii, 4, vi p."
];

The main function is called text_to_pages. The first thing it does is convert any Roman numerals to Arabic ones. The heavy lifting for this is done by a function by Steven Levithan which does the actual number conversion. However, we still need to identify and extract the Roman numerals from the pagination in order to convert them. This line does the extraction and makes a list of the Roman numerals:

var roman_texts=text.match(/[ivxlc]*[, ]/g);

The session I gave concentrated on regular expressions (a bit like the wildcards you use on library databases but turned up to eleven) which in all cases here are contained within slashes, and I made a simple introductory guide to regular expressions (.docx). There are many guides to regular expressions on the web too, and useful testers to play with such as this one. The regular expression in the line above can be broken down as follows:

  • [ivxlc] uses square brackets to look for any one of the characters listed within them.
  • The following * means to look for any number of these in a row
  • [, ] any one of a comma or a space, again using square brackets. Obviously these characters are not used in Roman numerals, but requiring one of them to follow is a convenient way of isolating characters that really are numerals rather than, say, the “l” in leaves, which would otherwise also match.

The next few lines work through the list, replace any instance of [, ] with “” (i.e. nothing) to leave the bare Roman numerals, convert all the numbers in the list using Steven Levithan’s functions, then do the replacements on the pagination given in text:

if (roman_texts) {
    for (i=0; i<roman_texts.length; i++) {
      // Remove the trailing comma or space
      roman_texts[i]=roman_texts[i].replace(/[, ]/,"");
      var arabic_text = deromanize(roman_texts[i])+" ";
      text = text.replace(roman_texts[i],arabic_text+" ");
    }
  }
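
The deromanize function itself isn’t reproduced in the excerpt (the original is Steven Levithan’s); purely as a stand-in sketch, a minimal version handling the lower-case numerals used here might look like this:

// Minimal stand-in for deromanize(): convert a lower-case Roman numeral string to a number
function deromanize(roman) {
  var values = { i: 1, v: 5, x: 10, l: 50, c: 100, d: 500, m: 1000 };
  var total = 0;
  for (var i = 0; i < roman.length; i++) {
    var current = values[roman.charAt(i)];
    var next = values[roman.charAt(i + 1)];
    // Subtractive notation: a smaller value before a larger one is subtracted (e.g. "xiv" = 14)
    total += (next && current < next) ? -current : current;
  }
  return total;
}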

Like the size script above, the rest of the conversion needs to do two things: find the numbers and find the units. To do this we need to find the sequences involved. While this is easy with something like “24 p.” (number is 24, unit is p) or even “xv leaves” (number is 15, unit is leaves), it becomes troublesome when you get something like “23, 100 p.”: the first number is 23 but there is no unit associated with it, only a comma to signify that it is a sequence at all. The following lines try to get round this problem by looking for sequences where the comma appears to be the unit and then looking ahead to find the next unit. In the “23, 100 p.” example, the script would keep looking forward past the 100 until it gets to the “p”.

// Convert 20, 30 p. to 20 p. 30 p
  while (text.match(/\d*,/)) {
    text = text.replace(/(\d*),(.*?(p|leaves))/, "$1 $3 $2");
  }

The first regular expression in the while line looks for:

  • \d* any number of digits. \d is any digit and * looks for any number of them, followed by
  • , a comma

So as long as the script finds any sequences of numbers followed by a comma, it will carry on making the replacement underneath it. The replacement line itself looks for

  • \d* any number of digits again, followed by
  • , a comma
  • .*? which is . (any character) followed by * (any number of times). The ? makes sure that the smallest matching group of characters is used; otherwise the expression would think that the unit corresponding to the number 15 in the pagination “15, 25 p., 50 leaves” is “leaves” rather than “p”.
  • p|leaves either p or leaves. The pipe means either match on the left of it or the right of it. Because this is in a set of round brackets, the pipe only applies there, rather than the whole expression.

Brackets also capture subsets, which is really useful here: the first set of () brackets captures the number of pages and stores it as $1, the second set captures everything between the comma and the end of the units as $2, and the third set captures the units only, either “p” or “leaves”, and stores it as $3. So in the example “15, 25 p., 50 leaves”, $1 is “15”, $2 is “ 25 p”, and $3 is “p”. The replacement puts these back in a different order, i.e. “$1 $3 $2”, which would be “15 p 25 p”.

Now that all the sequences will be in number-unit pairs, we can get on with making a list of them to work through:

 // Find sequences
  var sequences = text.match(/\d+.*?(,|p|leaves)/g);

This looks for:

  • \d+ at least one digit
  • .*? any number of any characters, although not being greedy
  • (,|p|leaves) any of a comma, “p”, or “leaves”. Obviously, if the while loop above has worked, then the comma isn’t needed, but I’ll confess this is a hangover from a previous version of the script…
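
For example, once the earlier rewriting has put everything into number-unit pairs, the pattern picks out each pair (the string here is made up to illustrate the post-rewrite form):

// "20, 30 p." has already been rewritten to "20 p 30 p." by the while loop above
"20 p 30 p.".match(/\d+.*?(,|p|leaves)/g);
// -> ["20 p", "30 p"]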

The next section goes through each of the sequences found and extracts the number and then the unit:

// go through sequences
  var pages = 0;
  for (var i=0; i<sequences.length; i++) {
    // Extract no
    var number = parseFloat(sequences[i].match(/\d+/g)[0]);
    var units = sequences[i].match(/(p|leaves)/g)[0];
    if (units == "p") {
      pages+=number;
    }
    if (units == "leaves") {
      pages+=number*2;
    }
  }

The regular expression to find the number is straightforward:

  • \d+ at least one digit

The parseFloat converts the digits as a string to a Javascript number. The regular expression to find the unit is also simple:

  • (p|leaves) either “p” or “leaves”

If the units are “p”, then the variable pages is incremented by the value of the number found; if “leaves”, then pages is incremented by twice that number.
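
Assuming the fragments above are assembled back into a complete text_to_pages function, a driver like the one for the size script would print a count for each test string; the figure in the comment is the intended answer under the page/leaf rule described earlier, rather than a guaranteed output of the excerpted code:

// Hypothetical driver: run text_to_pages over the test examples
for (var i = 0; i < dollar_a.length; i++) {
  console.log(dollar_a[i] + " -> " + text_to_pages(dollar_a[i]) + " pages");
}
// e.g. "30 p., 20 leaves" should give 70 (30 pages plus 20 leaves counted as 40 pages)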

The programme should cope with the loss of abbreviations in RDA, as “p.” is expanded to “pages” but the regular expression to find the units will still find the “p” at the beginning, just as it isn’t put off by the full stop after the “p”. It could be expanded to look for other variations and I will do so if I can:

  • “S.” for German “Seite” or “Seiten”.
  • “leaf”, as in “1 leaf of plates”
  • sequences which start in the middle of larger ones, like journal issues with “xii, p. 546-738”. This one will be the most complicated as it goes against the basic flow of the existing code.

I also haven’t properly tested folded sheets or multiple-volume works. Other improvements are needed in failing more gracefully when the script doesn’t find what it’s expecting: the programme should really test for the existence of the arrays it makes before looping through them, but this would make it harder to understand at a glance or demonstrate on screen, so I didn’t do it.

The scripts are written in Javascript for several reasons: it is the language that Codecademy focusses on for beginners; it requires no specialist environment, server, or even a web connection (you just need a basic text editor and a browser); it is easy to adapt for a web page if you do manage to build something; and it is the language I am most confident working in. It would be fairly easy to port to other languages though, and Owen changed the size script, with some other modifications, to work in Beanscript/Java in Blacklight.

I can’t speak for the attendees, but I learnt a lot, and much was made clearer, from playing around with these scripts and talking to people at Mashcat:

  • Quite how dependent AACR2 and RDA (and consequently MARC21) are on textual information, even for what appears to be quantitative data.
  • That even for what appears to be standard number-unit data, there are enough complications to make extracting the data non-trivial:
    • fractions (not even decimals) in 300$c
    • differing units: book sizes in mm. or cm. depending on how big the book is; disc sizes in in.; extent in pages or leaves (or volumes or atlases or sheets…)
    • sequences with implied units, such as those with commas.
  • There is frequently a lack of clarity, and some ambiguity, about what is actually being measured:
    • for books the dimension recorded is normally height (although this is not explicit from a user’s point of view; sometimes it’s height and width, and for a folded sheet it could be all sorts of things); for a disc it’s the diameter.
    • For the 300$a what’s being recorded is pagination, something entirely different from number of pages. Although important for things like rare books, how important is complete pagination for most users compared to a robust idea of how large a book is? Amazon provide a number of pages. More importantly, how understandable is pagination? During my demonstration, some of my audience of librarians were left cold by the meanings of square brackets for example (and square brackets can mean any number of things depending on context). Perhaps there is room for both.

I suppose this latter point is a potential conclusion. Ed Chamberlain asked me what I thought should be done. I don’t know to be honest. I think, like much of the catalogue record, lots more research is needed to see what users (both human and computer) actually want or need. It should be said that entering pagination is in many ways easier for the cataloguer. However, I do think we need:

  • quantitative data entered as numbers with clear and standard units. For instance, record all book heights as mm. and convert to cm. for display if needed.
  • more data elements to properly make clear what is being recorded. Instead of a generic dimension, we need height, width, depth(?), diameter, etc. Instead of pagination, we could have separate elements for pagination, number of pages, and number of volumes (50 volumes each of 10 pages is not the same as 4 volumes of 1000 pages each). Obviously not all of them would be needed for all items. A rough sketch of what this might look like follows.
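
Purely as an illustration (the element names are invented, not taken from any standard), separately recorded extent and dimension elements might look something like this:

// Invented, illustrative data elements: numbers with explicit meanings,
// alongside the traditional transcribed statement if it is still wanted
var extent = {
  pagination: "xiv, 20 p.",  // the traditional statement
  pages: 34,                 // absolute number of pages
  volumes: 1,
  heightMm: 230              // always recorded in mm; convert for display
};
console.log("Height: " + (extent.heightMm / 10) + " cm"); // displays "Height: 23 cm"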

The research to enable us to choose what to record, why we’re recording it, and for whose benefit would be the best starting point for this as well as many other questions in cataloguing and metadata.

Cataloguing coding

I am not a trained programmer, coding is not part of my job description, and I have little direct access to cataloguing and metadata databases at work outside of normal catalogue editing and talking to the systems team, but I thought it might be worth making the point of how useful programming can be in all sorts of little ways. Of course, the most useful way is in gaining an awareness of how computers work, appreciating why some things might be more tricky than others for the systems team to implement, seeing why MARC21 is a bastard to do anything with even if editing it in a cataloguing module is not really that bad, and how the new world of FRDABRDF is going to be glued together. However, some more practical examples that I managed to cobble together include:

  • Customizing Classification Web with Greasemonkey. This is a couple of short scripts using Javascript, which is what the default Codecademy lessons use. Javascript is designed for browsers and is a good language to start with, as you can do something powerful very quickly with a short script or even a couple of lines (think of all the 90s image rollovers). It’s also easy to have a go if you don’t have your own server, or even if you’re confined to your own PC.
  • Aleph-formatted country and language codes. I wrote a small PHP script to read the XML files for the MARC21 language and country codes and convert them into an up to date list of preferred codes in a format that Aleph can read, basically a text file which needs line breaks and spaces in the right places. It is easy to tweak or run again in the event of any minor changes. I don’t have this publicly available anywhere though. PHP is not the most elegant language but is relatively easy to dip into if you ever want to go beyond Javascript and do more fancy things, although it can be harder to get access to a server running PHP.
  • MARC21 .mrc file viewer. I occasionally need to quickly look at raw .mrc files to assess their quality and to figure out what batch changes we want to make before importing them into our catalogue. This is an attempt to create something that I could copy and paste snippets of .mrc files into for a quick look. It is written in PHP and is still under construction. There are other better tools for doing much the same thing to be honest, but coding this myself has had the advantages of forcing me to see how a MARC21 file is put together and realising how fiddly it can be. Try this with an .mrc which has some large 520 or 505 fields in it (there are some zipped ones here, to pick at random) and watch the indicators mysteriously degrade thereafter. I will get to the bottom of this…

The following examples are less useful for my own practical purposes but have been invaluable for learning about metadata and cataloguing, in particular RDF/linked data. I was very interested in linked data when I first heard about it, and being able to actually try something out with it (even if the results are not mind-blowing), rather than just read about it, has been very useful. Both are written in PHP and further details are available from the links:

Nothing to do with cataloguing, but what I am most proud of is this, written in Javascript: Cowthello. Let me know if you beat it.

Update: Shana McDanold also wrote an excellent post on why a cataloguer should learn to code with lots of practical examples.

Lodopac example searches

Yay! My entry for the Discovery & DevCSI Developers Competition, Lodopac, was awarded a commendation for its use of the Cambridge University Library (CUL) dataset. During the judging I was asked for searches which were known to work well, the timeout issues I discussed under Limitations being not insignificant, especially with author or title searches. I submitted a version of the following brief general notes, which I hope are helpful to anyone else who wants to play:

The British National Bibliography (BNB) server is generally more responsive than the Cambridge University Library one; title seems to work better than author. The following are hopefully useful examples:

I would really like to find ways of improving free-text regular expression search times for things like author and title in SPARQL*, although I doubt there is one that doesn’t rely on the configuration, processing power, or indexing of the server being searched.

* Thinking aloud, some ideas might include: downloading a larger, imprecise set for further local searching (e.g. for an author/title search, downloading the title matches and searching the authors locally: although this would also be slow, it would at least get round the timeout); forcing a look-up in a controlled vocabulary first in order to get an exact string match (especially for authors, although even if this is possible, it forces the user to do more work, which isn’t the point); and local indexing of the triple store (this is probably the best way, but I’m not sure how to go about it, whether I really have the server capacity to do it, or whether I could commit to the updating required).