LD4P and LD4L-Labs hosted a Community Input Meeting at Stanford University to gather community input on their linked data efforts across four topic areas: ontologies, workflows, tools, and community adoption/engagement.

Thank you to all the participants for their energy and contributions.

Below is an executive summary of the meeting. For more detail, see Community Input Meeting Agenda, Presentations, and Meeting Notes.


Ontology Topic Area takeaways
Themes
  • No perfect ontology. There is no perfect ontology for all situations; the focus should be on meeting known use cases and ensuring models are usable for internal needs and external consumption by communities with shared goals and needs.

  • Crosswalks reduce the need to pick the right ontology. Rather than striving to pick or create the perfect model, or trying to reach complete consensus within the community, focus on creating consistent data according to usable models and shared application profiles that can be mapped/crosswalked at the point of need (see the sketch after this list).

  • Room for convergence. While there is no perfect model, the current array of choices for a base bibliographic ontology is complicated, perhaps unnecessarily so. Through an evaluation of the goals and patterns of existing bibliographic ontologies we can identify opportunities for convergence.
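To illustrate what a point-of-need crosswalk could look like, here is a minimal sketch in Python using rdflib. It assumes an alignment published as owl:equivalentClass statements; the specific BIBFRAME-to-schema.org correspondence and the example.org URIs are illustrative assumptions, not an endorsed mapping.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
SCHEMA = Namespace("http://schema.org/")

# A tiny alignment graph; the correspondence chosen here is illustrative only.
alignment = Graph()
alignment.add((BF.Work, OWL.equivalentClass, SCHEMA.CreativeWork))

# Incoming data described with one model.
data = Graph()
data.parse(data="""
    @prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
    <http://example.org/work/1> a bf:Work .
""", format="turtle")

# Apply the crosswalk at the point of need: add the aligned type so the data
# can also be consumed by applications expecting the other model.
for source_class, _, target_class in alignment.triples((None, OWL.equivalentClass, None)):
    for resource in list(data.subjects(RDF.type, source_class)):
        data.add((resource, RDF.type, target_class))

print(data.serialize(format="turtle"))
```

The point of the sketch is that the mapping lives alongside the data rather than replacing it: consistent source data plus a shared, documented alignment can be applied whenever a consumer needs the other model.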

Unanswered Questions
  • Is there the will among ontology maintainers to converge duplicative bibliographic ontologies? If so, how is this best achieved? For the remaining ontologies, is there a place for formal alignments between models?

  • To truly understand implementations of linked data, how can the community best document and share application profiles?

  • Where formal ontology alignment can’t be achieved, what mappings can the community produce to better facilitate consuming data expressed in multiple models?

Next Steps
  • Push for ontology convergence and alignment discussions.

  • Push for clear, publicly available application profiles for the datasets we wish to consume.

  • Through experience, at the point of need, develop actionable mappings between datasets deemed desirable to consume within the community.

Workflows Topic Area takeaways
Themes
  • Data quality in a community workflow model. Vocabulary and URI choice, level of reconciliation, and provenance are factors that will determine data quality. Provenance at the “atomic” (statement) level may be necessary to select the statements you trust. Authoritative data will be based on trust as well. Discovering URIs for previously established entities, and versioning them as the data is updated, will be high priorities.

  • Continuing use of MARC. Our new world will be a mix of MARC and RDF. Conversion of MARC data will be necessary for the foreseeable future. We’ll need MARC to share data with those who haven’t transitioned to RDF for discovery. MARC will be retained in our ILS systems to perform functions such as circulation.

  • Using existing RDF metadata and data sources. In an RDF environment, copy cataloging may be as simple as asserting that you own an instance of a resource. We will need to specify which subgraph we’re making assertions about (see the sketch after this list). Metadata application profiles may help shape our views of the data.

  • The cataloger and the discovery user experience. Making a significant change in the form and function of the data model used by libraries and other cultural heritage organizations calls into question the basic assumptions about what the workflows around data creation and use should be. We need to be careful not to simply re-create our current workflows on top of a new data model when that model can support a much more robust experience.
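To make the “assert that you hold it, in a subgraph you control, with statement-level provenance” idea concrete, here is a minimal sketch using Python and rdflib named graphs. The BIBFRAME properties, the PROV attribution, and the example.org URIs are illustrative assumptions about how such a workflow could be expressed, not a prescribed pattern.

```python
from rdflib import Dataset, Namespace, URIRef
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
PROV = Namespace("http://www.w3.org/ns/prov#")

# Hypothetical URIs: a shared work/instance description and our institution.
work = URIRef("http://example.org/work/1")
instance = URIRef("http://example.org/instance/1")
item = URIRef("http://example.org/item/1")
library = URIRef("http://example.org/org/our-library")
graph_name = URIRef("http://example.org/graph/our-library")

ds = Dataset()

# "Copy cataloging" as a small local assertion: we hold an item of an
# instance already described elsewhere, stated in a named graph (subgraph)
# that we control.
local = ds.graph(graph_name)
local.add((instance, RDF.type, BF.Instance))
local.add((instance, BF.instanceOf, work))
local.add((item, RDF.type, BF.Item))
local.add((item, BF.itemOf, instance))
local.add((item, BF.heldBy, library))

# Provenance at the subgraph level: attribute the named graph to us, so
# consumers can decide which graphs (and hence which statements) they trust.
ds.add((graph_name, PROV.wasAttributedTo, library))

print(ds.serialize(format="trig"))
```

Under these assumptions, trust decisions become a matter of filtering by graph attribution rather than accepting or rejecting whole records.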

Unanswered Questions
  • How will we reconcile our entities at scale?

  • How do we share and update data in an open, communal environment, especially in a mixed MARC/RDF world?

  • Does trust equal provenance?

  • How do we maintain data over time?

  • How do we create new tools to support and leverage a new data model?

Next Steps
  • Push for community-driven reconciliation.

  • Further refine conversion of MARC to RDF and RDF to MARC, especially for entity generation (e.g., works); see the sketch after this list.

  • Work with trusted community members to define concepts such as “copy-cataloging.”

  • Create opportunities to imagine “blue sky” and “green field” workflows.
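For the MARC-to-RDF next step, the sketch below shows one way entity generation for works might be prototyped, assuming pymarc and rdflib. The file name, the example.org namespace, and the title/creator hashing heuristic for minting work URIs are illustrative assumptions, not the community’s agreed conversion rules.

```python
import hashlib

from pymarc import MARCReader
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
BASE = "http://example.org/"  # hypothetical local namespace


def first_subfield(record, tag, code):
    """Return the first $code of the first `tag` field, or an empty string."""
    for field in record.get_fields(tag):
        values = field.get_subfields(code)
        if values:
            return values[0]
    return ""


def work_uri(title, creator):
    """Mint a deterministic work URI from a normalized title/creator key so
    repeated conversions of the same bib record yield the same entity."""
    key = f"{title.strip().lower()}|{creator.strip().lower()}"
    return URIRef(BASE + "work/" + hashlib.sha1(key.encode("utf-8")).hexdigest()[:12])


g = Graph()
with open("records.mrc", "rb") as fh:  # hypothetical file of MARC21 bib records
    for record in MARCReader(fh):
        if record is None:  # skip records pymarc could not parse
            continue
        title = first_subfield(record, "245", "a")
        creator = first_subfield(record, "100", "a")

        work = work_uri(title, creator)
        g.add((work, RDF.type, BF.Work))
        g.add((work, RDFS.label, Literal(f"{title} {creator}".strip())))

        # Each bib record also yields an instance of the generated work.
        control = record["001"]
        instance_id = control.value() if control else str(work).rsplit("/", 1)[-1]
        instance = URIRef(BASE + "instance/" + instance_id)
        g.add((instance, RDF.type, BF.Instance))
        g.add((instance, BF.instanceOf, work))

print(g.serialize(format="turtle"))
```

The hashing heuristic stands in for real work clustering and reconciliation; refining that step (and the reverse RDF-to-MARC path) is exactly the work this next step calls for.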

Tools Topic Area takeaways
Themes
  • Time to grow up. The library and cultural heritage community is in its infancy when it comes to linked data tools. The community needs to develop better awareness and understanding of existing tools. Better communication among tool creators and users is essential for developing the right tools.

  • A new toolbox. Useful linked data tools will be modular, and will operate in a record-less metadata environment.

  • If you build it... Tools pave the way to adoption of linked data in libraries and cultural heritage organizations, from the practical problem of creating linked data to the broader challenge of proving that linked data is “worth it.”

Unanswered Questions
  • Who pays for collaborative tool building?

  • How will tools handle non-static, evolving metadata?

  • What will a record-less environment look like, and what language will we use to describe it?

Next Steps
  • Build a community tool registry.

  • Create collaborative tool development opportunities.

  • Create opportunities to imagine and describe an ideal linked data cooperative cataloging experience and the tools to support it.

Community Adoption/Engagement Topic Area takeaways
Themes
  • Critical mass achieved. Linked data activity in libraries and cultural heritage organizations has critical mass; there’s desire and need for a community to take a more defined shape to gather interested parties, share development and ownership of standards, provide training in fundamentals, and enable adoption.

  • Focus on interoperability and shared interests. With a wide variety of organizations, patrons, and materials, implementation of linked data will take many forms, but there are common concerns the community can tackle together, including reconciliation, identity management, and provenance. Multiple models will co-exist to describe the rich collections of libraries and cultural heritage organizations; creating interactions among the models and providing implementation guidance are critical.

  • Get concrete. The community is ready to get its collective hands dirty with tool demos, sandboxes for creating and exploring linked data, examples, and how-tos.

Unanswered Questions
  • How do libraries and cultural heritage organizations show that “linked data is better”?

  • What new business models will enable commercial vendors to embrace linked data?

Next Steps
  • Launch community-based working groups on shared interests, including reconciliation and ontology convergence.

  • Begin building a comprehensive community structure to support engagement with and adoption of linked data.

  • Specify and model best practices for transparent community engagement.