This documentation covers the latest release of VIVO, version 1.10.x.
VIVO, as delivered, is not a high availability application. Single points of failure in the application are addressed below. Some of these can be mitigated through deployment choices, as noted. Others would require additional development to provide high availability deployment options.
VIVO code makes use of HttpSession objects. For a multi-server deployment, sessions must either be replicated across the Tomcat instances or handled with sticky routing, so that each user's requests are always directed to the same server.
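As a sketch, Tomcat's built-in session replication involves marking the web application as distributable and enabling a cluster; the elements below are standard Tomcat configuration (not VIVO-specific), and the default all-to-all replication shown will likely need tuning for a real deployment:

```xml
<!-- web.xml: mark the VIVO web application as distributable,
     so Tomcat will replicate its HttpSession objects -->
<distributable/>

<!-- server.xml, inside the <Engine> or <Host> element:
     enable Tomcat's default all-to-all session replication -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
```

All objects stored in replicated sessions must be serializable, which is worth verifying before relying on this approach.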
VIVO does limited caching. Some information is cached in the visualisation stack, but this is not critical to the operation of VIVO, as each application server can build its own cache. Sticky routing, so that people see consistent graphs within a single session, may be sufficient, even if each server varies slightly in what it displays.
Every server must use a single shared Solr cluster, rather than relying on the Solr instance installed alongside each VIVO. Any changes written to the index are then visible to all instances. A shared cluster also takes care of the file system storage of Solr, which is otherwise maintained in the VIVO home directory.
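Assuming the shared Solr cluster exposes a core for VIVO, each application server's runtime.properties can point at the same URL; the hostname, port, and core name below are examples to be replaced with your own:

```properties
# runtime.properties on every VIVO application server:
# point at the shared Solr cluster instead of a local instance
vitro.local.solr.url = http://solr.example.edu:8983/solr/vivocore
```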
VIVO uses static configuration information: the config and rdf directories and runtime.properties. These need to be consistent across multiple servers. That could be achieved via a shared home directory, or simply by multiple identical deployments.
There are three additional areas in the home directory that are of concern. The configuration triple store (tdbModels) is addressed below. Solr indexes are addressed above. The upload directory stores thumbnails for people, etc. If you allow real-time upload of photos, this directory needs to be on a shared HA filesystem. If you are only batch ingesting thumbnails from external sources, then syncing the directory across servers could suffice. If you are simply linking to externally hosted images, the uploads folder will not be a concern.
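For the batch-ingest case, a one-way copy run after each ingest may be enough. As a sketch, with rsync (the paths and hostname are examples, not defaults):

```shell
# Push the uploads directory (thumbnails) from the ingest server
# to a second application server; run after each batch ingest.
rsync -av --delete /usr/local/vivo/home/uploads/ \
      vivo2.example.edu:/usr/local/vivo/home/uploads/
```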
The content triple store, by default, is SDB, stored in MySQL. An HA MySQL configuration should permit multiple application servers to access the same MySQL server cluster.
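In that arrangement, each application server's runtime.properties would point at the HA MySQL endpoint rather than a local database; the hostname and credentials below are placeholders:

```properties
# runtime.properties on every VIVO application server:
# connect to the shared HA MySQL endpoint
VitroConnection.DataSource.url = jdbc:mysql://mysql.example.edu:3306/vitrodb
VitroConnection.DataSource.username = vivoUser
VitroConnection.DataSource.password = vivoPassword
```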
The configuration triple store is TDB, stored in the tdbModels folder in the home directory. TDB requires that only one JVM access a TDB triple store at a time, and the files cannot safely be copied while they are open. There are two potential solutions. Through disciplined system administration, you may find that the material in the configuration triple store can be considered static; it can then be replicated to each server by copying the files. A second approach would be to store the configuration triple store using SDB in an HA MySQL cluster. This would involve recoding relevant parts of the Vitro application, which appears to be feasible.
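If the configuration triple store is treated as static, the copy could look something like the sketch below, run while VIVO is stopped on both servers (since TDB files must not be copied while open); the service name, paths, and hostname are examples:

```shell
# With VIVO stopped on both servers (TDB files must not be open during the copy):
systemctl stop tomcat
rsync -av --delete /usr/local/vivo/home/tdbModels/ \
      vivo2.example.edu:/usr/local/vivo/home/tdbModels/
systemctl start tomcat
```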