
Introduction

You've looked at VIVO, you've seen VIVO in action at other universities or organizations, you've downloaded and installed the code.  What next? How do you get information about your institution into your VIVO?

The answer may be different everywhere – it depends on a number of factors.

  • How big is your organization? Some smaller ones have implemented VIVO only through interactive editing – they enter every person, publication, organizational unit, grant, and event they wish to appear, and they keep up with changes "manually" as well. This approach works well for organizations with under 100 people or so, especially if you have staff or student employees who are good at data entry and enjoy learning more about the people and the research.  There's something of an inverse correlation with age – students can be blazingly fast with data entry, employing multiple windows and copying and pasting content.  The site takes shape before your eyes and it's easy to measure progress and, after a bit of practice, predict how long the process will take.
    • This approach may also be a good way to develop a working prototype with local data to use in making your case for a full-scale effort.  The process of data entry is tedious but a very good way to learn the structure inherent in VIVO.
    • We recommend that people new to RDF and ontologies enter representative sample data by hand and then export it in one of the more readable RDF formats such as n3, n-triples, or turtle (a small sample of such an export appears after this list).  This is an excellent way to compare what you see on the screen with the data VIVO will actually produce – and when you know your target, it's easier to decide how best to develop a more automated ingest process.
  • The interactive approach will obviously not work with big institutions or where staff time or a ready pool of student editors is not available.  There are also many advantages to developing more automated means of ingest and updating, including data consistency and the ability to replace data quickly and on a predictable timetable.
  • What are your available data sources?  Some organizations have made good institutional data a priority, and others struggle with legacy systems lacking consistent identifiers or common definitions for important categorizations such as distinct types of units or employment positions. You may have to make some inquiries to find the right people to contact to find out what data are available, and the stakeholders on your VIVO project may need to request access to that data.
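To make the export idea concrete, here is a minimal sketch of what a Turtle export of a single hand-entered person might look like. The URIs and the overview property are placeholders invented for this illustration; the exact class and property names you will see depend on the version of the VIVO ontology you are running.

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <http://vivo.example.edu/individual/> .
    @prefix exo:  <http://vivo.example.edu/ontology/> .

    # One hand-entered faculty member, as it might appear in a Turtle export.
    # The individual URI (ex:n1234) and exo:overview are invented for this sketch.
    ex:n1234 a foaf:Person ;
        rdfs:label "Smith, Charlotte" ;
        exo:overview "Studies animation techniques and their history." .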

Next – what is different about data in VIVO?

As we've described, it's well worth learning the VIVO editing environment and creating sample data even if you know you will require an automated approach to data ingest and update.

VIVO makes certain assumptions about data based largely on the types of data, relationships, and attributes described in the VIVO ontology.  These assumptions do not always follow traditional row and column data models, primarily because the application almost always allows for arbitrarily repeating values rather than holding strictly to a fixed number of values per record.  Publications may most frequently have fewer than five authors, but in some fields such as experimental physics it's common to see hundreds of authors – not very workable in a one-row-per-publication, one-column-per-author spreadsheet model.

In VIVO, data about people, organizations, events, courses, places, dates, grants, and everything else are stored in one very simple, three-part structure – the RDF statement.  A statement, or triple, has a subject (any entity), a predicate or property, and an object that can be either another related entity or a simple data value such as a number, text string, or date.  While users will see VIVO data expressed in larger aggregations as web pages, internally VIVO is storing its data as RDF statements or triples.  
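As a rough illustration, the following two statements are written in Turtle syntax: in the first, the object is another entity; in the second, it is a literal data value. The URIs and the memberOf property are invented for this sketch rather than taken from the VIVO ontology.

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://vivo.example.edu/individual/> .
    @prefix exo:  <http://vivo.example.edu/ontology/> .

    # subject       predicate       object is another entity
    ex:person42     exo:memberOf    ex:departmentOfAnimation .
    # subject       predicate       object is a literal data value
    ex:person42     rdfs:label      "Fudd, Elmer" .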

This is not the place to explain everything about RDF – there are many good tutorials available and other sections of this wiki explain the VIVO ontology and the more technical aspects of RDF. For now, just bear in mind that while the data you receive may come to you in one format, much of the work of data ingest involves decomposing that data into simple statements that will then be re-assembled by the VIVO application, guided by the ontology, into a coherent web page or a packet of Linked Open Data.

What data can VIVO accept?

With VIVO, your destination will be RDF, but you may receive the data in a variety of formats. A first stage in planning ingest involves analyzing what data you have access to and mapping on paper how they need to be transformed for VIVO.

It's probably most common for data to be provided in spreadsheet format, which can be very simple to transform into RDF if each column of every row refers to attributes of the same entity, usually identified by a record identifier. The process becomes more complicated if different cells in the same row of the spreadsheet refer to different entities.

The following spreadsheet would be very easy to load into a VIVO instance describing cartoon characters:

id | name       | height | age
1  | Goofy      | 89 cm  | 11
2  | Elmer Fudd | 60 cm  | 45
3  | Roadrunner | 140 cm | 2

You can readily imagine storing the information about each cartoon character – id, name, height, and age – in one entity for each character.
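As a rough sketch of that decomposition, the first row might become the following Turtle statements. The namespaces, the CartoonCharacter class, and the height and age properties are invented for illustration; they are not terms from the VIVO ontology.

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex:   <http://vivo.example.edu/individual/> .
    @prefix exo:  <http://vivo.example.edu/ontology/> .

    # Row 1: every cell becomes a statement about the same entity,
    # identified here by a URI minted from the record id.
    ex:character1 a exo:CartoonCharacter ;
        rdfs:label "Goofy" ;
        exo:height "89 cm" ;
        exo:age    "11"^^xsd:integer .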

A spreadsheet of books, however, would be more complicated:

id     | title                 | publication date | author           | publisher                     | pages
497531 | Cartoon Animation     | 1967             | Wilcox, George   | HB Press                      | 237
501378 | Animation Techniques  | 1989             | Smith, Charlotte | Cinema Press                  | 359
391783 | Digital Animation     | 2005             | Ivar, Samuel     | Digital Logic, Inc.           | 327
34682  | Dairy Barn Automation | 2011             | Wilcox, G.P.     | University of Minnesota Press | 403

VIVO stores the book, each author, and the publisher as independent entities related to one another.  This enables information about the book, authors, and publisher to be queried and displayed independently, a key feature of the semantic data model.
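A rough Turtle sketch of the first book row might look like the following. The bibo: and foaf: vocabularies are standard namespaces that VIVO draws on, but the Authorship class, the relating properties, and all of the URIs shown here are placeholders; check your installed ontology version for the actual terms.

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix bibo: <http://purl.org/ontology/bibo/> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <http://vivo.example.edu/individual/> .
    @prefix exo:  <http://vivo.example.edu/ontology/> .

    # The book, its author, its publisher, and the authorship that ties
    # author to book are each independent entities linked by statements.
    ex:book497531 a bibo:Book ;
        rdfs:label "Cartoon Animation" ;
        exo:publisher ex:orgHBPress ;
        exo:relatedBy ex:authorship497531 .

    ex:authorship497531 a exo:Authorship ;
        exo:relates ex:book497531 , ex:personWilcoxGeorge .

    ex:personWilcoxGeorge a foaf:Person ;
        rdfs:label "Wilcox, George" .

    ex:orgHBPress a foaf:Organization ;
        rdfs:label "HB Press" .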

This example also points out another challenge in working with data – it's not always clear when values that appear similar actually represent the same entity, whether a person, organization, title, journal, or event.  It would be easy to assume the George Wilcox in the first entry is the same person as the G.P. Wilcox in the fourth, but they are writing about very different topics. For a small organization, it may be easy to disambiguate authors, but this becomes a major challenge at the scale of a major research university.

Data cleanup and disambiguation are challenges for any system and will be a common theme in documenting VIVO data ingest, along with the semantic data modeling that is more specific to working with VIVO.

Further information