Question:

We're seeing strong interest from Communications Officers and Web Administrators across departments in pulling data out of VIVO (faculty publications, etc.) to reduce the burden of gathering that information when maintaining their external, public-facing websites. I'm aware that other VIVO sites are doing this, with Cornell possibly a leading example. From a practical standpoint, can anyone help us understand the mechanisms for doing this, and what environments/expertise web administrators need in place to query VIVO and return results they can repurpose?

Responses

There are several mechanisms available, with differing degrees of flexibility and maturity, and more options appear regularly as the semantic web and linked open data communities grow.

Most techniques currently rely on having a SPARQL endpoint for VIVO. Cornell has used Sesame for some time now, and UF supports Fuseki.
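To give a concrete sense of what a web administrator would actually do against such an endpoint, here is a minimal sketch in Python using only the standard library, following the standard SPARQL-over-HTTP protocol. The endpoint URL, the individual URI, and the ontology terms in the query are illustrative assumptions, not taken from any particular VIVO installation; verify them against your own VIVO instance before use.

```python
# Minimal sketch of querying a VIVO SPARQL endpoint over HTTP.
# ENDPOINT, the individual URI, and the predicates below are
# hypothetical -- check your installation's endpoint path and
# ontology version before relying on them.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://vivo.example.edu/sparql"  # hypothetical endpoint URL

# Example query: list publications linked to one faculty member.
# Prefixes/terms are illustrative of the VIVO core ontology style.
QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX core: <http://vivoweb.org/ontology/core#>
SELECT ?pub ?title WHERE {
  <http://vivo.example.edu/individual/n1234> core:authorInAuthorship ?ship .
  ?ship core:linkedInformationResource ?pub .
  ?pub rdfs:label ?title .
} LIMIT 25
"""

def build_request(endpoint, query):
    """Build a SPARQL-protocol POST request asking for JSON results."""
    data = urllib.parse.urlencode({"query": query}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )

def run_query(endpoint, query):
    """Execute the query and return the result rows (variable bindings)."""
    with urllib.request.urlopen(build_request(endpoint, query)) as resp:
        return json.load(resp)["results"]["bindings"]

# Each row maps variable names to {"type": ..., "value": ...} dicts, e.g.:
#   for row in run_query(ENDPOINT, QUERY):
#       print(row["title"]["value"])
```

The practical point for web administrators is that the results come back as plain JSON (or XML/CSV, depending on the Accept header), so repurposing them for a departmental site requires only ordinary web scripting skills plus enough SPARQL to adapt the query.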