Today's Meeting Times
Friendly reminders of upcoming meetings, discussions, etc.
- DSpace 7 Working Group: Next meeting is Thurs, March 21 at 15:00 UTC. Agenda: 2019-03-21 DSpace 7 Working Group Meeting
- DSpace 7 Entities Working Group: Next meeting is TBD.
- Last meeting notes at 2019-02-05 DSpace 7 Entities WG Meeting
- DSpace Developer Show and Tell Meetings: On hold until interesting topics arise.
If you have a topic you'd like added to the agenda, please add it.
(Ongoing Topic) DSpace 6.x Status Updates for this week
- A 6.4 release is expected eventually, but there is no definitive plan or schedule at this time. Please continue to help move forward / merge PRs into the dspace-6.x branch, and we can continue to monitor when a 6.4 release makes sense.
- Upgrading Solr Server for DSpace (Mark H. Wood )
- PR https://github.com/DSpace/DSpace/pull/2058
- Docker configuration for external Solr
- The Dockerfile creates a new Solr instance with 4 cores, then overlays the schema and configuration changes from PR 2058.
- I attempted to branch from Mark's branch so that I could open a PR back to it, but some unrelated changes from master appear in the diff when I create the PR.
- This will need a small change to our Docker Compose files to invoke the external Solr service. https://github.com/DSpace-Labs/DSpace-Docker-Images/pull/79
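As a rough illustration of the Compose change being discussed, an external Solr could be wired in along these lines. This is a sketch only: the service name, image tag, port, and environment-variable name below are assumptions, not the actual contents of DSpace-Docker-Images PR 79.

```yaml
# Illustrative sketch -- service/image names, port, and the environment
# variable are assumptions, not the actual contents of PR 79.
version: "3"
services:
  dspacesolr:
    image: dspace/dspace-solr:latest   # hypothetical image name
    ports:
      - "8983:8983"
    volumes:
      - solr_data:/opt/solr/server/solr
  dspace:
    environment:
      # Point DSpace at the external Solr service (hypothetical variable name)
      - SOLR_SERVER=http://dspacesolr:8983/solr
    depends_on:
      - dspacesolr
volumes:
  solr_data:
```

The key point is that Solr becomes its own service that the DSpace container reaches over the Compose network, rather than a Solr webapp deployed inside DSpace's own servlet container.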
- DSpace Backend as One Webapp (Tim Donohue )
- PR: https://github.com/DSpace/DSpace/pull/2265 (PR is finalized & ready for review)
- A follow-up PR will rename the "dspace-spring-rest" module to "dspace-server", and update all URL configurations (e.g. "dspace.server.url" will replace "dspace.url", "dspace.restUrl", "dspace.baseUrl", etc.)
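A hypothetical configuration fragment showing the consolidation described above (the exact property values are illustrative, not taken from the PR):

```
# Before: several overlapping URL properties (DSpace 6 style)
#   dspace.baseUrl = http://localhost:8080
#   dspace.restUrl = http://localhost:8080/rest
#
# After the follow-up PR: a single property covering the whole backend
dspace.server.url = http://localhost:8080/server
```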
- DSpace Docker and Cloud Deployment Goals (old) (Terry Brady )
- Passing environment variables to Docker
- Creating a default AIP dataset for the DSpace 7 Docker load
- Tim shared a link to the Entities WG dataset. This dataset contains no bitstreams. How should we handle this for the AIPs?
- Update sequences on initialization
- Add Docker build/push to Travis
- We can revisit this once Docker is more widely adopted by DSpace developers, and then decide whether Travis is the right place to solve this.
- Brainstorms / ideas (Any quick updates to report?)
- Tickets, Pull Requests or Email threads/discussions requiring more attention? (Please feel free to add any you wish to discuss under this topic)
These topics are ones we've touched on in the past and likely need to revisit (with other interested parties). If a topic below is of interest to you, say something and we'll promote it to an agenda topic!
- Management of database connections for DSpace going forward (7.0 and beyond). What behavior is ideal? Also see notes at DSpace Database Access
- In DSpace 5, each "Context" established a new DB connection. Context then committed or aborted the connection after it was done (based on results of that request). Context could also be shared between methods if a single transaction needed to perform actions across multiple methods.
- In DSpace 6, Hibernate manages the DB connection pool. Each thread grabs a Connection from the pool. This means two Context objects could use the same Connection (if they are in the same thread). In other words, code can no longer assume each new Context() is treated as a new database transaction.
- Should we be making use of SessionFactory.openSession() for READ-ONLY Contexts (or any change of Context state) to ensure we are creating a new Connection (and not simply modifying the state of an existing one)? Currently we always use SessionFactory.getCurrentSession() in HibernateDBConnection, which doesn't guarantee a new connection: https://github.com/DSpace/DSpace/blob/dspace-6_x/dspace-api/src/main/java/org/dspace/core/HibernateDBConnection.java
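The distinction in question can be illustrated with a small stand-alone sketch. This is a ThreadLocal stand-in for Hibernate's thread-bound session, not the real Hibernate API: getCurrentSession() reuses whatever session is bound to the current thread, while openSession() always creates a fresh one.

```java
// Stand-alone illustration of thread-bound vs. dedicated sessions.
// "Session" here is a stand-in class, NOT org.hibernate.Session.
public class SessionDemo {
    static class Session {}

    // getCurrentSession(): returns the session bound to the current thread,
    // so two Contexts on the same thread end up sharing one session/connection.
    static final ThreadLocal<Session> CURRENT = ThreadLocal.withInitial(Session::new);
    static Session getCurrentSession() { return CURRENT.get(); }

    // openSession(): always creates a fresh session (i.e. a new connection).
    static Session openSession() { return new Session(); }

    public static void main(String[] args) {
        // Same thread: getCurrentSession() hands back the same object twice.
        System.out.println(getCurrentSession() == getCurrentSession());
        // openSession() always yields a distinct session.
        System.out.println(openSession() == getCurrentSession());
    }
}
```

This is why, in DSpace 6, two Context objects created in the same request thread can silently share state: both resolve to the same thread-bound session.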
- Bulk operations, such as loading batches of items or doing mass updates, have another issue: transaction size and lifetime. Operating on 1,000,000 items in a single transaction can cause enormous cache bloat, or even exhaust the heap.
- Bulk loading should be broken up by committing a modestly-sized batch and opening a new transaction at frequent intervals. (A consequence of this design is that the operation must leave enough information to restart without re-adding work already committed, should the operation fail or be terminated prematurely by the user. The SAF importer is a good example.)
- Mass updates need two different transaction lifetimes: a query which generates the list of objects on which to operate, which lasts throughout the update; and the update queries, which should be committed frequently as above. This requires two transactions, so that the updates can be committed without ending the long-running query that tells us what to update.
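The batched-commit-with-restart pattern described above can be sketched in a self-contained way. All names here are hypothetical and the "database" is an in-memory list; this is not DSpace API, just the shape of the technique: commit every batchSize items, and record a checkpoint alongside each commit so a failed run can resume without redoing committed work.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the batched-commit pattern: commit every `batchSize` items and
// record a checkpoint with each commit so a failed run can resume from it.
// All "database" state is in-memory; names are hypothetical, not DSpace API.
public class BatchLoader {
    final List<String> committed = new ArrayList<>(); // stands in for the DB
    final List<String> pending = new ArrayList<>();   // current transaction
    int commits = 0;
    int checkpoint = 0; // index of the next item to process after a restart

    void commit(int nextIndex) {
        committed.addAll(pending);  // "context.commit()" stand-in
        pending.clear();            // start a fresh transaction
        checkpoint = nextIndex;     // persist the restart point with the commit
        commits++;
    }

    void load(List<String> items, int batchSize) {
        for (int i = checkpoint; i < items.size(); i++) {
            pending.add(items.get(i));
            if (pending.size() >= batchSize) commit(i + 1);
        }
        if (!pending.isEmpty()) commit(items.size()); // flush the final partial batch
    }
}
```

Loading 10 items with batchSize 3 yields four commits (3+3+3+1); if the run dies mid-batch, only the uncommitted tail is redone from checkpoint, much as the SAF importer's map file lets an interrupted import resume.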
Help us test / code review! These are tickets needing code review/testing and flagged for a future release (ordered by release & priority).
Newly created tickets this week:
Old, unresolved tickets with activity this week:
Tickets resolved this week:
Tickets requiring review. This is the JIRA Backlog of "Received" tickets: