...

  • Brown – (Ted) Still cleaning up data that has been loaded; met with the people who maintain the local faculty information system and will be getting current faculty positions and titles from that.
    • Also created new local object properties to display service to the university and other types of service, in line with how they typically appear on faculty CVs; looking at how to display date ranges using custom list views
  • Buffalo – (Mark) Has VIVO installed on a new production server and is configuring mod_jk
  • Colorado – (Stephen) Working on modifying some of the list views for cases where faculty have multiple appointments in the same department; a SPARQL query that works via the interactive query interface throws a Java NullPointerException when used in the list view (a sketch for reproducing the query outside VIVO follows this list)
    • Still pushing out new data updates every couple of days for Colorado Springs and Boulder campuses; this is the faculty reporting season
    • Met earlier this week with the Laboratory for Atmospheric and Space Physics and UCAR, the University Corporation for Atmospheric Research – UCAR has ~120 member universities and they want to explore a hybrid local and remote VIVO, since a significant number of their member institutions already have VIVO or VIVO-compatible systems
    • Also interested in Datastar, and will be attending the I-Fest
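
A minimal sketch for isolating a NullPointerException like Colorado's: run the same SPARQL with plain Jena ARQ against an RDF export of the site. If it succeeds here, the problem is more likely in VIVO's layers (e.g. the list views) than in the query itself. The file name and query below are placeholders, not the actual Colorado query.

```java
import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.ResultSetFormatter;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.util.FileManager;

public class QueryCheck {
    public static void main(String[] args) {
        // Load an RDF export of the site (placeholder file name)
        Model model = FileManager.get().loadModel("vivo-export.n3");

        // Placeholder query; substitute the query that triggers the NPE
        String queryString =
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n" +
            "SELECT ?s ?label WHERE { ?s rdfs:label ?label } LIMIT 10";

        Query query = QueryFactory.create(queryString);
        QueryExecution qe = QueryExecutionFactory.create(query, model);
        try {
            ResultSet results = qe.execSelect();
            ResultSetFormatter.out(System.out, results, query);
        } finally {
            qe.close();
        }
    }
}
```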
  • Cornell – Tim has continued to develop the horizontal tabbed interface as an alternative to vertical property groups
    • He's added a "view all" tab that Alex suggests could be set up to use a print stylesheet that could potentially include page breaks
      • What would the relationship be between a good print stylesheet and the DV-Docs CV export (implemented and live at Florida)? The CV export produces rich text to import directly into Word for further editing, so it would have advantages, but a print stylesheet option is likely not much work and would be straightforward to use.
    • Question: is all the data for the page fetched when the page is loaded, or is data for the different tabbed sections only loaded as the user clicks on the tab?
      • Right now Tim has not modified how the data are loaded for the page, but "lazy loading" might be worth looking into as a way to address the performance of page loads for people with very large numbers of publications
    • Still working on URITool
      • UF is missing some of the pieces like XSL
  • Duke – (Richard) Upgraded the test environment to VIVO 1.5 but had a question about exporting a large model – kept running into memory problems, so tried exporting the data in chunks by getting a handle on a dataset vs. a model – there are a few ways to get a query in there.  Brian – definitely want to use the actual SDB dataset object, not Jena models, or you will miss out on the optimization that SDB does for limit queries (see the sketch after this item).
    • Also found that, having been using named graphs, with 1.5 the graph names have to be real URIs, not just strings.  Modified the graph names in the SQL table and reactivated the graphs, and it seemed to work. Brian – yes, going forward graph names have to be valid URIs
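
A minimal sketch of the chunked export Brian describes, under the assumption of a MySQL-backed SDB store (the JDBC settings and layout are placeholders to match against deploy.properties). Querying the SDB Dataset directly lets SDB push the LIMIT/OFFSET down into SQL instead of materializing the whole model first; note that paging with LIMIT/OFFSET and no ORDER BY assumes the store returns results in a stable order.

```java
import java.io.FileOutputStream;

import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.sdb.SDBFactory;
import com.hp.hpl.jena.sdb.Store;
import com.hp.hpl.jena.sdb.StoreDesc;
import com.hp.hpl.jena.sdb.sql.SDBConnection;
import com.hp.hpl.jena.sdb.store.DatabaseType;
import com.hp.hpl.jena.sdb.store.LayoutType;

public class ChunkedExport {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; match them to deploy.properties
        StoreDesc desc = new StoreDesc(LayoutType.LayoutTripleNodesHash, DatabaseType.MySQL);
        SDBConnection conn = new SDBConnection("jdbc:mysql://localhost/vitrodb", "user", "password");
        Store store = SDBFactory.connectStore(conn, desc);

        // Get the SDB dataset itself, not a wrapped Jena model, so that
        // SDB can optimize the LIMIT/OFFSET queries
        Dataset dataset = SDBFactory.connectDataset(store);

        int chunkSize = 50000;
        for (int offset = 0; ; offset += chunkSize) {
            // VIVO keeps its data in named graphs, hence the GRAPH clause
            String q = "CONSTRUCT { ?s ?p ?o } "
                     + "WHERE { GRAPH ?g { ?s ?p ?o } } "
                     + "LIMIT " + chunkSize + " OFFSET " + offset;
            QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), dataset);
            try {
                Model chunk = qe.execConstruct();
                if (chunk.isEmpty()) {
                    break; // no more data
                }
                chunk.write(new FileOutputStream("export-" + offset + ".nt"), "N-TRIPLE");
            } finally {
                qe.close();
            }
        }
        store.close();
    }
}
```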
  • Florida – (Matt and Nicholas) Fixed the indexing problem – had to use a Java debugger and go through the SQL tables to find the source; a manual index would fail, and incremental indexes had not been working for a while.  The cause was a single form feed character, pasted from a PDF file into VIVO in March of 2011; the error cannot be reproduced by pasting the same text back into more recent versions, but can be if the data is uploaded as an N-Triples file.
    • Now working on getting the logs to reflect manual edits, which previous versions used to log in vivo.all.log.  Brian suggests modifying the auditor, since none of the logging functionality that was locally modified at UF is built into VIVO; with 1.5 there is one central place to listen to all changes via the new RDF API, and the image editing must not yet be using it, which is why it still shows up in the logs.  By switching to logging from the auditor in the RDF API, you should see all the edits again, but will have to filter out the inferred triples that the application inserts at the same time. (See the dev list messages, and the sketch below.)
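
A rough illustration of Brian's suggestion, not UF's actual auditor: a listener registered with the new RDF API that logs each change and skips the inference graph. The ChangeListener/ModelChange method names and the inference graph URI below are assumptions from memory of the 1.5 rdfservice package and should be verified against the source.

```java
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import edu.cornell.mannlib.vitro.webapp.rdfservice.ChangeListener;
import edu.cornell.mannlib.vitro.webapp.rdfservice.ModelChange;

public class EditLoggingListener implements ChangeListener {
    private static final Log log = LogFactory.getLog(EditLoggingListener.class);

    // Graph URI assumed for the inferred triples; check against your install
    private static final String INFERENCE_GRAPH =
            "http://vitro.mannlib.cornell.edu/default/vitro-kb-inf";

    @Override
    public void notifyModelChange(ModelChange modelChange) {
        // Filter out the inferred triples the application inserts
        // alongside the manual edit
        if (INFERENCE_GRAPH.equals(modelChange.getGraphURI())) {
            return;
        }
        log.info(modelChange.getOperation() + " in graph "
                + modelChange.getGraphURI());
    }

    @Override
    public void notifyEvent(String graphURI, Object event) {
        log.info("Event " + event + " in graph " + graphURI);
    }
}
```

Registering it would then be something like rdfService.registerListener(new EditLoggingListener()), again assuming the 1.5 RDFService interface.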
  • Memorial University of Newfoundland – (Lisa and John) Just had a call with David Baker of CASRAI; talking about going to the October CASRAI conference in Ottawa and perhaps doing a half-day workshop on knowledge mobilization, as well as taking the chance to meet up with the VIVO team there.
    • Memorial University is adopting VIVO as the back end for its Yaffle tool – have the go-ahead for a 2-year project with the understanding that they need to extend the VIVO ontology or come up with a separate add-on ontology that models knowledge brokering/knowledge management. The university solicits research topics that citizen groups, private companies, and government units around the province suggest for collaborations with researchers at the university.  The university holds four workshops a year in Newfoundland and Labrador where researchers present and local officials and entrepreneurs discuss possible joint projects; these ideas are brought into Yaffle-VIVO through individual users and staff entry following events, and brokering then connects opportunities with researchers and research units.
    • Want to be able to model both the actors and the opportunities, visualize them, generate reports collaboratively, and offer the models to other people interested in public engagement, knowledge transfer, and related activities, as demonstrated on their public engagement site.
    • Will be sending out a poll to the ontology, dev and implementation lists to solicit interest in working on the ontology modeling
  • NYU – (Yin) Looking at the VIVO development model and came up with a nice way of staying synchronized with the development master branch while checking in local changes to the search indexing and results display. Substantially replacing some sections of code in converting VIVO to search people only – redefining how the indexing and the weighting work, using what is still a prototype; now wants to be able to keep up with whatever changes continue in the dev version so the local changes won't have to be continually re-merged. Will write it up and solicit feedback from others for comparison.
    • Could be straightforward to swap out the way that indexing is done for people, but it is less clear how alternative indexing would work on other types of entities like publications, events, organizations, etc.
    • Is there a recommended branching model?  Jim – holding pretty close to the Gitflow model; Yin – already uses that in-house
  • Stony Brook – (Tammy) Erich has WebID working in sample code, is still looking to bring it into VIVO, and will contribute the module.  Upgraded to the latest Tomcat 7 and JDK 7, and Erich is working with both of those.
  • UCSF – (Eric) Getting ready to release the RDF version of Profiles, which is compatible with the VIVO ontology, this weekend.  Will be doing a panel presentation on OpenSocial at the upcoming AMIA conference.  Not sure whether anybody else has gone live with the OpenSocial additions to VIVO?  Alex – has discussed features like the SlideShare gadget at some meetings and gotten positive feedback. Eric – reviewers for the panel liked the concept of bringing content in from industry.  Right now Profiles stores its data in a relational model, but from working on the RDF version Eric now sees how to store RDF – wondering what should be in RDF vs. relational: some content wants to be available as linked data, while other data might be more private and/or just transactional support.  Has there been any thinking about that on the development team?
    • Jim – so far all the discussion has centered on storing everything as RDF; Jon – VIVO user accounts are stored in RDB, a different data store from where the public data is stored (SDB)
    • Eric – Loki stores everything in RDB and exports it on request as RDF for linked open data; Profiles is more of a hybrid, in part due to its transition from a purely relational product, but also perhaps for performance reasons and because it uses a .NET stack. Profiles has kept passwords in RDB, for example.
  • Washington University – (Kristi) Interested in discussions about the Implementation Fest
  • Weill – (Paul) Still working to a deadline of getting publications clean enough to be used as the basis for a separate faculty reporting exercise. Are making very specific Scopus queries for each faculty member rather than relying on general Harvester queries.
    • Still working on performance issues – noticed that the Google API is being loaded for QR codes on every page; maybe the QR codes should only be loaded lazily when needed

...