We suggest that anyone heavily involved in implementing the project be familiar with these terms. See the full VIVO glossary for additional entries.

DuraSpace - Since 2009, the DuraSpace organization has sustained and improved open technologies that are tested and durable. Working with global communities of practice, DuraSpace is actively involved in projects that use DuraSpace technologies for access, management, and preservation of digital content. DuraSpace collaborates with open source software projects, academics, technologists, curators, and related commercial partners that share an interest in preserving digital scholarship and culture, in order to create innovative, interoperable technologies and open standards and protocols. source

Open-source - Why would anyone want to give away the software program that they have sweated blood and tears over? And how do they give it away? Moreover, what happens after the software has been released to all and sundry? Who looks after it and produces new and improved versions? To answer these questions we must consider open source as a software development methodology and in the context of community building. Open source software is developed by people who may have no connection to one another apart from their interest in the project; consequently, the development methodologies adopted are not the same as those found in closed source projects. Because open source is developed by a group of individuals with a shared interest in the project, this community of users and programmers is key to the advancement of any open source project. source

OWL - The Web Ontology Language (OWL) is a semantic markup language for publishing and sharing ontologies on the World Wide Web. While earlier knowledge representation languages were used to develop tools and ontologies for specific user communities (particularly in the sciences and in company-specific e-commerce applications), they were not designed to be compatible with the architecture of the World Wide Web in general, or the Semantic Web in particular. source
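
As an illustration only (not part of the quoted definition), the following minimal sketch uses the Python rdflib library to assert an OWL class and a subclass axiom; the example.org namespace and class names are hypothetical.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    # Hypothetical namespace and class names, for illustration only
    EX = Namespace("http://example.org/ontology#")

    g = Graph()
    g.add((EX.Person, RDF.type, OWL.Class))                 # declare an OWL class
    g.add((EX.FacultyMember, RDF.type, OWL.Class))          # declare another class
    g.add((EX.FacultyMember, RDFS.subClassOf, EX.Person))   # every FacultyMember is a Person

    # Serialize the tiny ontology in RDF/XML, one of the standard OWL exchange syntaxes
    print(g.serialize(format="xml"))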

Protégé - a free, open source ontology editor developed by the Stanford Center for Biomedical Informatics Research at the Stanford University School of Medicine. Its web-based companion, WebProtégé, provides the following features:

  • Support for editing OWL 2 ontologies
  • A default simple editing interface, which provides access to commonly used OWL constructs
  • Full change tracking and revision history
  • Collaboration tools such as sharing and permissions, threaded notes and discussions, watches, and email notifications
  • Customizable user interface
  • Customizable Web forms for application/domain specific editing
  • Support for editing OBO ontologies
  • Multiple formats for upload and download of ontologies (supported formats: RDF/XML, Turtle, OWL/XML, OBO, and others)
    source

Resource Description Framework (RDF) -- The RDF language is part of the W3C's Semantic Web Activity. W3C's "Semantic Web Vision" is a future where web information has exact meaning, can be understood and processed by computers, and where computers can integrate information from the web. RDF was designed to provide a common way to describe information so it can be read and understood by computer applications. source
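
For illustration only (not from the source), the sketch below uses the Python rdflib library to parse a small RDF description written in Turtle and iterate over its statements; the example.org resource is hypothetical.

    from rdflib import Graph

    # A tiny RDF description written in Turtle; the example.org resource is hypothetical
    turtle_data = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/person/jane> a foaf:Person ;
        foaf:name "Jane Doe" .
    """

    g = Graph()
    g.parse(data=turtle_data, format="turtle")

    # Every statement in the graph is a machine-readable (subject, predicate, object) triple
    for subject, predicate, obj in g:
        print(subject, predicate, obj)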

Semantic Reasoner - A semantic reasoner, reasoning engine, rules engine, or simply reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. A reasoner is a key component for working with OWL ontologies. In fact, virtually all querying of an OWL ontology (and its imports closure) should be done using a reasoner, because knowledge in an ontology might not be explicit, and a reasoner is required to deduce implicit knowledge so that the correct query results are obtained. source1 source2
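
As a hedged sketch of what "deducing implicit knowledge" looks like in practice (not from the source), the example below uses the Python rdflib and owlrl libraries — an assumption; any RDFS/OWL reasoner would serve — to infer a type that is never asserted directly. The example.org namespace is hypothetical.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS
    import owlrl

    EX = Namespace("http://example.org/ontology#")  # hypothetical namespace

    g = Graph()
    g.add((EX.FacultyMember, RDFS.subClassOf, EX.Person))  # asserted axiom
    g.add((EX.jane, RDF.type, EX.FacultyMember))           # asserted fact

    # The triple (EX.jane, RDF.type, EX.Person) is only implicit at this point
    print((EX.jane, RDF.type, EX.Person) in g)   # False

    # Run an RDFS reasoner to materialize the implicit consequences
    owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
    print((EX.jane, RDF.type, EX.Person) in g)   # True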

Semantic Web -- The Semantic Web, Web 3.0, the Linked Data Web, the Web of Data…whatever you call it, the Semantic Web represents the next major evolution in connecting information. It enables data to be linked from a source to any other source and to be understood by computers so that they can perform increasingly sophisticated tasks on our behalf. source

Solr -- Apache Solr is an open source search platform built upon a Java library called Lucene.  Solr is a popular search platform for Web sites because it can index and search multiple sites and return recommendations for related content based on the search query’s taxonomy. Solr is also a popular search platform for enterprise search because it can be used to index and search documents. source
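
By way of illustration (not from the source), here is a minimal sketch using the Python pysolr client; the URL and core name are assumptions and would depend on the local Solr installation.

    import pysolr

    # Hypothetical Solr core URL; adjust for the local installation
    solr = pysolr.Solr("http://localhost:8983/solr/mycore", timeout=10)

    # Index a couple of documents
    solr.add([
        {"id": "doc1", "title": "Semantic Web overview"},
        {"id": "doc2", "title": "Introduction to SPARQL"},
    ])
    solr.commit()

    # Search the index and print matching titles
    for result in solr.search("title:semantic"):
        print(result["title"])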

SPARQL -- SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language, that is, a semantic query language for databases, able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium and is recognized as one of the key technologies of the Semantic Web. SPARQL 1.0 became an official W3C Recommendation on 15 January 2008, and SPARQL 1.1 followed in March 2013. source
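
As a small illustration (not from the source), the sketch below runs a SPARQL SELECT query against an in-memory RDF graph using the Python rdflib library; the data and example.org resources are hypothetical.

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        <http://example.org/person/jane> a foaf:Person ; foaf:name "Jane Doe" .
        <http://example.org/person/raj>  a foaf:Person ; foaf:name "Raj Patel" .
    """, format="turtle")

    # A SPARQL SELECT query that finds the name of every person in the graph
    query = """
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?name
        WHERE {
            ?person a foaf:Person ;
                    foaf:name ?name .
        }
    """
    for row in g.query(query):
        print(row.name)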

Triple - A triple is the minimal amount of information expressible on the Semantic Web. It is composed of three elements: 1) a subject, a URI (e.g., a "web address") that represents something; 2) a predicate, another URI that represents a certain property of the subject; and 3) an object, which can be a URI or a literal (a string) that is related to the subject through the predicate. source
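
Purely as an illustration of the three elements described above (not part of the quoted definition), here is a minimal sketch using the Python rdflib library; the example.org URI is hypothetical.

    from rdflib import Graph, Literal, URIRef

    subject = URIRef("http://example.org/person/jane")    # the thing being described
    predicate = URIRef("http://xmlns.com/foaf/0.1/name")  # a property of the subject
    obj = Literal("Jane Doe")                             # a literal value for that property

    g = Graph()
    g.add((subject, predicate, obj))   # one triple: Jane's name is "Jane Doe"
    print(g.serialize(format="turtle"))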

Triplestore - Triplestores are Database Management Systems (DBMS) for data modeled using RDF. Unlike Relational Database Management Systems (RDBMS), which store data in relations (or tables) and are queried using SQL, triplestores store RDF triples and are queried using SPARQL. A key feature of many triplestores is the ability to perform inference. In VIVO, the triplestore is the location for storing VIVO data; the default VIVO installation calls for a MySQL database to hold the information in VIVO, but there are alternative storage options, both established and under exploration. source
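
For illustration only (not from the source), a minimal sketch that queries a triplestore's SPARQL endpoint over HTTP with the Python SPARQLWrapper library; the endpoint URL shown is a placeholder, not an actual VIVO endpoint path.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder endpoint URL; a real triplestore exposes its own SPARQL endpoint
    endpoint = SPARQLWrapper("http://localhost:8080/sparql")
    endpoint.setQuery("""
        SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    # Fetch the first ten triples from the store and print them
    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])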

 
