These files were produced during the LD4L project. They are all available on the server.

RDF Files

Converter output

The MARC records from each library were converted to BIBFRAME 1.0 RDF by the Library of Congress marc2bibframe converter. LD4L's bib2lod converter was then used to produce RDF in the LD4L data model. The result is RDF in N-Triples format.
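Because N-Triples is line-oriented (one triple per line, terminated by ` .`), the dumps can be inspected with simple line processing. A minimal sketch that tallies predicates, using a regular expression rather than a full RDF parser (for real work a library such as rdflib is the safer choice; the sample triples are hypothetical):

```python
import re
from collections import Counter

# One triple per line: "subject predicate object .". This regex handles
# only simple, well-formed lines; it is a sketch, not a conformant parser.
TRIPLE = re.compile(r'^(\S+)\s+(<[^>]+>)\s+(.*)\s+\.$')

def predicate_counts(lines):
    """Tally predicate URIs across N-Triples lines."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):  # skip blanks and comments
            continue
        m = TRIPLE.match(line)
        if m:
            counts[m.group(2)] += 1
    return counts

# Two hand-made example triples; real input would be one of the dump files.
sample = [
    '<http://example.org/work/1> <http://purl.org/dc/terms/title> "A title" .',
    '<http://example.org/work/1> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.org/Work> .',
]
print(predicate_counts(sample))
```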

These dumps are available:

Usage data

StackScore usage data is available for the Cornell and Harvard holdings. The scores appear as annotations on the individual bib_ids. Each file contains the usage data for the correspondingly named file of converter output. Data is in N-Triples format.

These data files are available:

Additional triples

Additional triples were created to supplement the converter output, adding Work IDs to the Works and creating links across institutions between corresponding Works and Instances.

A concordance file was created, associating all known OCLC numbers with their corresponding Work IDs. This file was made with data extracted from a recent Research snapshot of WorldCat, and is structured as follows:

Fields are tab-delimited. For example:

100000569	100000569	49300684
100000668	100000668	83546218
100000767	100000767	83546282

Using this concordance file, each Work was assigned a Work ID based on the OCLC numbers of its Instances.
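The lookup can be sketched in code. Note that the column semantics are an assumption here (the field descriptions did not survive in this copy): the sketch treats the first tab-delimited field as the OCLC number and the last as the Work ID, and the predicate and URI shapes are purely hypothetical.

```python
import io

def load_concordance(fh):
    """Build an OCLC-number -> Work-ID map from the tab-delimited
    concordance file. Column order is assumed for illustration; adjust
    the indices to match the file's actual layout."""
    mapping = {}
    for line in fh:
        fields = line.rstrip('\n').split('\t')
        if len(fields) >= 2:
            mapping[fields[0]] = fields[-1]
    return mapping

def work_id_triple(work_uri, work_id):
    # Hypothetical predicate and literal form, for illustration only.
    return f'<{work_uri}> <http://example.org/vocab/hasWorkID> "{work_id}" .'

sample = io.StringIO(
    '100000569\t100000569\t49300684\n'
    '100000668\t100000668\t83546218\n'
)
concordance = load_concordance(sample)
print(work_id_triple('http://example.org/work/1', concordance['100000569']))
```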

Although the data from the three institutions were stored in three separate triple-stores, owl:sameAs statements were created where possible to link matching works or matching instances in the separate collections.

Instances with matching OCLC identifiers were linked with owl:sameAs, as were Works with matching Work IDs.
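The cross-collection linking step can be sketched as follows, assuming each collection supplies a map from a shared identifier (an OCLC number for Instances, a Work ID for Works) to a resource URI. All URIs below are hypothetical.

```python
def same_as_links(ids_a, ids_b):
    """Given two dicts mapping a shared identifier to a resource URI,
    emit one owl:sameAs triple per identifier present in both."""
    same_as = '<http://www.w3.org/2002/07/owl#sameAs>'
    triples = []
    for key in sorted(ids_a.keys() & ids_b.keys()):
        triples.append(f'<{ids_a[key]}> {same_as} <{ids_b[key]}> .')
    return triples

# Hypothetical OCLC-number -> Instance-URI maps for two collections.
cornell = {'49300684': 'http://example.org/cornell/instance1'}
harvard = {'49300684': 'http://example.org/harvard/instanceX',
           '83546218': 'http://example.org/harvard/instanceY'}

for t in same_as_links(cornell, harvard):
    print(t)  # only the identifier shared by both collections is linked
```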

These files are available:

Linked data blobs

The linked data is served by a Sinatra application that reads from a MySQL database. The database looks like this:

mysql> use ld4l;
Database changed

mysql> show tables;
+----------------+
| Tables_in_ld4l |
+----------------+
| lod            |
+----------------+

mysql> describe lod;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| uri   | varchar(200) | NO   | PRI | NULL    |       |
| rdf   | mediumblob   | NO   |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+


The 'uri' column corresponds to the URI of the requested linked data:

mysql> select uri from lod where uri like 'http%' limit 5;
| uri                                                                 |
|                                              |
|                                       |
| |
| |
| |


'rdf' is the data that will be served, in Turtle format, zipped. As such, it is not readable until unzipped:

mysql> select substring(rdf, 1, 70) from lod where uri = "";
| substring(rdf, 1, 70)                                                  |
| ?N$!ϥsM?G?? |
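Serving a record therefore requires a decompress step before the Turtle can be returned. A minimal sketch of the round trip, assuming zlib/DEFLATE as the compression scheme (the exact "zip" format used by the application is not stated here):

```python
import zlib

# Store: compress the Turtle before writing it to the mediumblob column.
turtle = '@prefix dcterms: <http://purl.org/dc/terms/> .\n'
blob = zlib.compress(turtle.encode('utf-8'))

# The stored blob is opaque bytes, as in the substring query above.
# Serve: decompress the blob before returning it to the client.
restored = zlib.decompress(blob).decode('utf-8')
print(restored == turtle)
```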

These dumps are available:

Solr index capture

The application is built on Blacklight. Blacklight is a Rails app that provides its search features through a Solr index.

2017-09: The search service demonstration is no longer being maintained. For information about current work see

The structure of the search index is determined both by the Solr schema and by the Blacklight catalog controller script.

These dumps are available:

Triple-store captures

The triple-stores used were instances of Virtuoso OpenSource 7 (taken from the develop/7 branch). More specifically, the Virtuoso instances were built from this source:

$ git remote -v
origin	git:// (fetch)
origin	git:// (push)

$ git status
On branch develop/7
Your branch is up-to-date with 'origin/develop/7'.
nothing to commit, working directory clean

$ git log -1
commit ea51ed3b81a43250ed2e3cfa77ee6e0116388b4b
Merge: 74a23e7 8ee2cfe
Author: VOS Maintainer 
Date:   Mon Mar 7 13:44:06 2016 +0100
    Merge branch 'develop/6' into develop/7

These dumps capture the data directories of the three triple-stores: