
Time/Place

Attendees 

Agenda

  1. Review assessed relevance and reusability of prior work in Fedora testing
    1. F4 performance benchmarking (summary)
    2. Unimplemented "Technical Working Group" performance assessment plan
  2. Establish consensus on categories of next round of F4 performance benchmarking
  3. Define actions towards initial benchmarking
  4. Define actions towards collecting representative datasets and infrastructure
  5. Next meeting? Tues Dec 1st or Thurs Dec 3rd?

Minutes

  1. Prior performance benchmarking and assessment work
    1. Three of the performance areas highlighted previously have not yet been sufficiently re-tested
      1. total data size
      2. ingest rate
      3. LDP/SPARQL Update performance (per Hydra practice)
    2. Clustering
      1. Primary use case seems to be high availability
      2. What increased scale clustering affords is unclear, partly because we haven't yet fully probed how far a single instance scales (as a baseline)
    3. Would be good to know how performance (response time) changes:
      1. as file size increases
      2. as # of files increases
      3. as # of resources/containers increases
    4. We aim here to establish a process and baselines for the more isolated tests (1a in the agenda) so that we can make progress on the "real-world"-type tests (1b in the agenda)
      1. How should we treat other axes?
        1. authorization
        2. transactions
        3. concurrency
        4. versioning
      2. Initial questions which tests should answer
        1. How does performance change as the size of the file increases?
        2. How does performance change as the number of files increases?
        3. How does performance change as the number of objects increases?
        4. How does performance change as the number of mixed resources increases?
          Note: In all of these cases, "performance" will be measured by issuing CRUD requests after every x-number of ingest events.
      3. Decision: Defer these additional axes at first, then examine them later once the process and baselines are clear.
      4. Decision: Process should include writing a number of objects/files into the repository (testing the speed of the writes)
        1. Every so often (after a set number of writes), run a suite of operations (gets, deletes) to see how their speed changes
        2. That way we test reading, writing, and a number of other operations as overall repository size increases.
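
The process decided above can be sketched as a small harness: write resources one at a time, and after every fixed number of writes, time a suite of probe operations (gets, deletes) so that latency can be tracked as overall repository size grows. This is only a minimal sketch; the `FakeClient` stand-in, the resource naming, and the probe operations are assumptions made for illustration, and a real benchmark would instead issue HTTP requests (POST/GET/DELETE) against a running Fedora instance's REST endpoint.

```python
import time

def benchmark_ingest(client, total_writes, probe_interval, probe_ops):
    """Write `total_writes` resources; after every `probe_interval`
    writes, time each operation in `probe_ops` and record how latency
    changes as the repository grows."""
    samples = []
    for i in range(total_writes):
        start = time.perf_counter()
        client.create(f"resource-{i}", b"payload")
        write_s = time.perf_counter() - start
        if (i + 1) % probe_interval == 0:
            probe = {"writes_so_far": i + 1, "last_write_s": write_s}
            for name, op in probe_ops.items():
                t0 = time.perf_counter()
                op(client, i)
                probe[name] = time.perf_counter() - t0
            samples.append(probe)
    return samples

class FakeClient:
    """In-memory stand-in for a repository client; a real run would
    wrap HTTP calls to a Fedora repository instead."""
    def __init__(self):
        self.store = {}
    def create(self, path, body):
        self.store[path] = body
    def read(self, path):
        return self.store.get(path)
    def delete(self, path):
        self.store.pop(path, None)

# Hypothetical probe suite: time a read and a delete of a recent resource.
probe_ops = {
    "get_s": lambda c, i: c.read(f"resource-{i}"),
    "delete_s": lambda c, i: c.delete(f"resource-{i}"),
}

results = benchmark_ingest(FakeClient(), total_writes=100,
                           probe_interval=25, probe_ops=probe_ops)
```

Each entry in `results` records the repository size at that point alongside the observed write and probe latencies, which is the shape of data needed to answer the "how does performance change as N increases" questions above.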