For the first year of the Stanford Linked Data in Production Tracer Bullets project, the focus was on designing and implementing production workflows for cataloging. Copy cataloging and original description in a linked data editor were both diagrammed in a linked data production environment. During this process, several major issues to overcome were identified, including how to handle updates of MARC data after initial conversion to RDF; conversion of local, private, and e-resource MARC data; URI minting; whether to use a URI for an entity that represents a real-world object or concept, as opposed to an authority record that identifies that object; and reconciliation of identifiers.

To implement the copy cataloging workflow, processes were created to convert a pre-identified set of book-format MARC records to BIBFRAME2, load the generated RDF into a triplestore, and index it to Stanford’s local implementation of a Blacklight/Solr discovery environment. To achieve this, a BIBFRAME2 -> Solr mapping for the book format was created, along with the SPARQL queries to extract the data from the triplestore.
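As an illustration of the extraction step, a query of roughly the following shape could pull title and contributor values out of the BIBFRAME2 graph for Solr indexing. This is a minimal sketch, not one of the production queries (those correspond to the mapping document linked below); the choice to index at the bf:Instance level and the specific properties traversed are assumptions made for the example.

# Hypothetical extraction query: gathers the main title and any
# contributor labels for each bf:Instance so the values can be
# flattened into Solr document fields per the mapping.
PREFIX bf:   <http://id.loc.gov/ontologies/bibframe/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?instance ?mainTitle ?contributorLabel
WHERE {
  ?instance a bf:Instance ;
            bf:title/bf:mainTitle ?mainTitle .
  OPTIONAL {
    # Follow the Instance to its Work, then to contributing agents.
    ?instance bf:instanceOf/bf:contribution/bf:agent/rdfs:label ?contributorLabel .
  }
}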

RDF/BIBFRAME2 -> Stanford Solr mapping: https://docs.google.com/document/d/1Dlnfxey_gdhBq8Dnef8kyRw8mMpIE95r5AC2APi0Dgs/edit?usp=sharing

Workflow diagram for copy cataloging: TB1a_Flow_April_2017.pdf

Stanford also reviewed several linked data tools, often installing a local instance. To increase public knowledge of what is available, a Registry of Tools was created: https://wiki.duraspace.org/display/LD4P/Registry+of+Tools