Theme: Technology |
|
Data integration and standardization across multiple systems - CERIF, ORCID, SciENcv, Profiles, LOKI, etc. |
|
Quality ontology/standards management and development for better integration into the software stack (and other applications as well). |
|
Providing input for use case driven evolution of the VIVO-ISF data standard. |
|
Maintaining the technical and feature advantage of VIVO over other platforms, particularly ones deployed outside of institutions, e.g. academia.edu, ResearchGate, ImpactStory, Figshare and the like. All of these external platforms have compelling value propositions for individual academics and may affect uptake of VIVO at an institutional level. Effective integration with all of these external platforms may act to reduce the challenge they represent. |
|
Better integration with content management systems, from Fedora (data management app development frameworks) to Drupal (standard off-the-shelf content management), will prepare VIVO for a future of facilitating direct access to research data and related resources. |
|
Finalising a solid draft of the VIVO-ISF ontology, developing mappings to other common related ontologies and driving the adoption of VIVO-ISF in other products will be instrumental in growing the developer base, tools and utilities around VIVO. Keeping third-party software vendors along for the transition to this ontology (e.g. Symplectic) will be key. Bringing other vendors to the table, e.g. Elsevier Pure, will ensure that VIVO is a standard choice, rather than a difficult one, for institutions looking to deploy a research profiling system. Future ontology development will require better tools for migrating data and ingest pathways from one iteration of the ontology to another. Ensuring that existing VIVO installs have the documentation and support required to migrate to up-to-date versions of the platform/ontology will ensure that the landscape within the VIVO community does not become too fragmented. |
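Migration tooling between ontology iterations could start as small as a mapping-driven URI rewrite pass over existing triples. A minimal sketch, assuming triples are plain (subject, predicate, object) tuples; the URI mapping shown is illustrative, not the real VIVO-ISF correspondence:

```python
# Illustrative mapping from an old ontology namespace to its VIVO-ISF
# replacement; a real migration would load this table from a published mapping.
OLD_TO_NEW = {
    "http://vivoweb.org/ontology/old#authorOf":
        "http://vivoweb.org/ontology/core#relatedBy",
}

def migrate_triple(triple, mapping):
    """Rewrite any URI in a (subject, predicate, object) triple via the
    mapping; terms not present in the mapping pass through unchanged."""
    return tuple(mapping.get(term, term) for term in triple)

def migrate(triples, mapping):
    """Apply the mapping to a whole graph expressed as a list of triples."""
    return [migrate_triple(t, mapping) for t in triples]
```

Real migrations also need structural rewrites (e.g. introducing context nodes), which a flat term mapping cannot express, but a pass like this covers the common rename cases.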
|
Releasing VIVO Search or supporting a similar solution like DIRECT2Experts. |
|
A key strength of the technology is the distribution (federation) of data. This needs more attention from the beginning, with stronger and more compelling demonstrations of that strength (this is particularly a problem when you don't know who is running VIVO). |
|
Progress on VIVO search |
|
To develop a VIVO search tool, to expand our capabilities to support new VIVO implementations and to add/improve features like CVs, visualizations, embedded content and other cool "gadgets" |
|
Getting VIVO search up and running |
|
More complete and accessible documentation |
|
Researcher personalisation: allowing some look-and-feel choice (citation style, etc.) |
|
SEO (how to attract as much traffic as academia.edu or ResearchGate) |
|
Humanities and social sciences (HSS) ontology development |
|
Grants management enhancements |
|
Increasing the discoverability and display of information presented via VIVO, e.g. search engine optimisation by enhancing templates with mappings of information to schema.org (for search engines) and markup for social sharing on Facebook and Twitter. Decreasing the bounce rate and short duration of visits to VIVO sites by automating the production of "sticky" content that users will engage with (better linking between entities in VIVO sites, not just as a product of direct relations, but for "similar" content, etc.). |
|
Core – continue to improve modularity, with plug-and-play triple-stores and reasoners. |
|
Core – lower the bar to entry with a binary distribution; no database, no Ant, no Tomcat, no Vagrant, just a Java runtime required. |
|
A more modular software structure where user developed functionality could more easily be plugged into the software build |
|
As part of the challenge outlined above, a specific set of tasks exists around modularising the codebase to allow for small targeted contributions of code and to facilitate the deployment and configuration of third-party modules that enhance base VIVO functionality. Aligned with these activities is enabling VIVO to be deployed within a wide variety of software stacks, e.g. use of alternate triple stores (i.e. a more loosely coupled architecture), without harming the "easy to get started" experience for less technical users and institutions. |
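The "alternate triple stores" idea amounts to having the application code against a narrow storage interface that concrete adapters plug into. A toy sketch of that boundary; the API below is a stand-in for whatever the real module interface becomes, not VIVO's actual RDFService:

```python
from abc import ABC, abstractmethod

class TripleStore(ABC):
    """Narrow interface the application depends on; adapters for Jena
    SDB/TDB, Virtuoso, an in-memory store, etc. plug in behind it."""

    @abstractmethod
    def add(self, s, p, o): ...

    @abstractmethod
    def match(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None is a wildcard."""

class InMemoryStore(TripleStore):
    """Trivial adapter, useful for tests and demos."""
    def __init__(self):
        self._triples = set()

    def add(self, s, p, o):
        self._triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        return [t for t in self._triples
                if all(q is None or q == v for q, v in zip((s, p, o), t))]
```

Keeping the interface small is what preserves the "easy to get started" path: a default store ships in the binary, and swapping it is configuration, not code change.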
|
More data out. The community needs to put more effort into producing (or facilitating the production of) secondary data sets that are usable by administrative folks (aka donors, funders) and academics alike. Dave Eichmann at Iowa is doing a great job of this using harvested VIVO data. |
|
Achieving performance optimisations |
|
Make an easy-to-install, default-configuration application: reduce flexibility and increase simplicity |
|
Ingest – select (or develop) an entry-level ingest tool and thoroughly support its use with tutorials, examples, and workshops. |
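An entry-level ingest tool could be little more than "spreadsheet in, triples out". A minimal sketch, assuming a CSV with `id` and `name` columns and typing each row as foaf:Person; both choices are illustrative defaults, not a fixed VIVO ingest contract:

```python
import csv
import io

RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
LABEL = "<http://www.w3.org/2000/01/rdf-schema#label>"
FOAF_PERSON = "<http://xmlns.com/foaf/0.1/Person>"

def rows_to_ntriples(csv_text, base_uri):
    """Turn a spreadsheet of researchers (columns: id,name) into N-Triples
    ready to load into a VIVO instance."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        s = "<%s%s>" % (base_uri, row["id"])
        lines.append("%s %s %s ." % (s, RDF_TYPE, FOAF_PERSON))
        lines.append('%s %s "%s" .' % (s, LABEL, row["name"]))
    return "\n".join(lines)
```

The tutorials and workshops matter more than the tool itself: the hard part of ingest is deciding the URI scheme and column-to-property mapping, which an example like this makes concrete.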
|
Encouraging the release and development of community developed tools and software to support VIVO implementations. |
|
Make it easier for new institutions to implement VIVO |
|
Improve the value proposition for VIVO by reducing the effort required to create and maintain a VIVO. |
|
No technical project manager tying things together and helping prioritize work, dependencies, etc. |
|
Enforce standard code practices |
|
Move the whole VIVO community onto VIVO 1.7 (including SciVal and Research Profiles): reduce technical debt. |
|
Ontology developers must become user-oriented, with an eye toward practical performance levels. |
|
Growth of Technical support community |
|
Determining the community model for software development. Will VIVO continue to be primarily a single web application that aims to serve nearly all implementation needs (data display, editing, search, etc.) or will the VIVO community become a group of solutions that can be deployed individually to solve related problems (like the Duraspace-affiliated Hydra project)? |
|
Increased clarity on the Vitro project, i.e. will it be promoted to other user bases or remain focused as a tool for the current faculty/researcher user base? Should it be renamed to amplify the VIVO brand, e.g. VIVO Core? |
|
Is Vitro something the community wants to push forward as a solution? There is great potential value in Vitro as a platform for developing Linked Open Data solutions, in which there is a great deal of interest in related communities. Should this become a more prominent offering? How does it fit in? |
|
Reduce the barriers to entry to enable and demo the technology (this will probably come in the form of better demos and expanded coverage of the ingest use cases) |
|
Complexity of implementation |
|
Demonstrate more compelling integration use cases -- most institutions already have some form of the data being ingested, but it is not so clear how to quickly show the value add of VIVO with existing systems that may already have data but lack an understanding of its value in the VIVO context |
|
Streamlining data ingest, improving user interface design/usability, cross-instance information sharing (e.g. cross-organization exchange of information about the same person, project, or topic) |
|
How do we radically reduce the effort required to implement and maintain VIVO? |
|
Best supporting backwards compatibility and data migration for older VIVO implementations as the VIVO-ISF ontology evolves. |
|
Ensure the current VIVO user base upgrades to the most recent version |
|
Ensure future upgrades are less prone to problems |
|
It seems like there are significant problems upgrading to some new releases of VIVO. How do we deal with this in the short-term for current adopters and longer-term for future adopters? |
|
The original Indiana visualization tools are not being maintained. What do we do about that? |
|
Addressing performance issues in the application and its software dependencies. |
|
Identifying or providing more linked open data sources for commonly referenced individuals such as institutions, journals, and concepts (e.g. MeSH). |
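Reusing shared URIs for such entities usually means looking them up in an external linked-data source by a stable key. A sketch of the query side, finding a journal by ISSN; `bibo:issn` is a real BIBO property, but which endpoint to ask (and whether it supports this shape) is left open:

```python
def journal_lookup_query(issn):
    """Build a SPARQL query that asks a linked-data endpoint for a shared
    journal URI, keyed by ISSN."""
    return (
        'PREFIX bibo: <http://purl.org/ontology/bibo/>\n'
        'SELECT ?journal WHERE {\n'
        '  ?journal bibo:issn "%s" .\n'
        '}\n' % issn
    )
```

The same pattern works for institutions (e.g. by a registry identifier) and concepts (e.g. a MeSH descriptor ID), which is what lets independent VIVO instances converge on the same URIs.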
|
Improvements to key functions such as templating, i.e. which template renders what and gets included where. |
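The "which template renders what" question is, at its core, a resolution rule from an individual's RDF types to a template name. A toy sketch of such a rule; the registry keys and template filenames are illustrative, and VIVO's real Freemarker resolution has more moving parts:

```python
def resolve_template(rdf_types, registry, default="individual.ftl"):
    """Walk the individual's types, most specific first, and return the
    first registered template; fall back to a generic template."""
    for t in rdf_types:
        if t in registry:
            return registry[t]
    return default

# Hypothetical registry: type URI (abbreviated) -> template filename.
REGISTRY = {
    "vivo:FacultyMember": "individual--faculty.ftl",
    "foaf:Person": "individual--person.ftl",
}
```

Making this rule explicit and documented (rather than implicit in the rendering code) is what would let site maintainers predict, and override, which template applies.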
|
Going beyond biomedical ontologies and taxonomies to be useful to everyone, no matter what their discipline |
|
How do we identify the killer apps for VIVO? |
|
Leveraging internal semantic data to expose data in the form best suited for consumption and syndication by external systems (schema.org, Google Scholar, Google News, Facebook Open Graph, JSON-LD, etc.) |
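Syndication like this is a dispatch from one internal record onto whatever representation each external consumer expects. A minimal sketch covering two of the formats named above; the field names on `person` are illustrative:

```python
import json

def syndicate(person, fmt):
    """Render one internal record for an external consumer: schema.org
    JSON-LD for search engines, Open Graph meta tags for social sharing."""
    if fmt == "json-ld":
        return json.dumps({
            "@context": "https://schema.org",
            "@type": "Person",
            "name": person["name"],
            "url": person["url"],
        })
    if fmt == "opengraph":
        return ('<meta property="og:type" content="profile">\n'
                '<meta property="og:title" content="%s">\n'
                '<meta property="og:url" content="%s">'
                % (person["name"], person["url"]))
    raise ValueError("unsupported format: %s" % fmt)
```

Since the internal data is already semantic (RDF), each new target format is one more mapping, not a new data pipeline.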