  1. Update on graphs and summaries of completed tests
  2. Getting community sponsorship for testing infrastructure

    1. Developing a prospectus (goals, system requirements, description of tests, personnel)
    2. Institutions to approach
  3. Establishing environment baselines: Colin Gross
  4. Finalize initial test results: Danny Bernstein
  5. ...


  1. Update on graphs and summaries of completed tests
    AW: Danny not present; will postpone discussion. Results collected from previous tests are documented on the linked page. The final agenda item is to pull together a clear summary of the tests and simple graphs indicating how performance changes over the course of the tests. Danny wanted to take this on. R scripts were created years ago to consume the output of JMeter tests; in the absence of any other tooling, they may be useful. Is there anyone interested in pulling together the results? Are there comments on the results?
    CG: Experience with R. Will volunteer.
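A sketch of what the per-label summarization could look like, in Python rather than the original R scripts (the sample data and the `summarize` helper are invented for illustration; the column names follow JMeter's default CSV result-log format):

```python
import csv
import io
from statistics import mean

# Hypothetical excerpt of a JMeter CSV result log (.jtl); the labels and
# timings are made up, but the column names match JMeter's default output.
SAMPLE_JTL = """timeStamp,elapsed,label,success
1000,120,Create Container,true
2000,95,Create Container,true
3000,310,Upload Binary,true
4000,290,Upload Binary,false
"""

def summarize(jtl_text):
    """Per-label summary: sample count, mean elapsed (ms), error rate."""
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    summary = {}
    for label in sorted({r["label"] for r in rows}):
        samples = [r for r in rows if r["label"] == label]
        elapsed = [int(r["elapsed"]) for r in samples]
        errors = sum(1 for r in samples if r["success"] != "true")
        summary[label] = {
            "count": len(samples),
            "mean_ms": mean(elapsed),
            "error_rate": errors / len(samples),
        }
    return summary
```

Plotting the mean elapsed time per label across successive test runs would give the simple "performance over the course of the tests" graphs discussed above.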
  2. Getting community sponsorship for testing infrastructure
    AW: Folks at the Art Institute of Chicago are interested in participating in release testing. For each release there is a list of testing activities; once sanity testing is done, institutions that have the capacity would be in a position to run scale tests. ARTIC ran scale tests against 4.7.0 that surfaced a particular issue. Contacts: Stefano Cossu, Kevin Ford.
    NR: Has access to servers with the capacity to do scale testing on behalf of York.
    CG: Michigan's use cases are small.
    YN: Interested in finding out if Yale might be willing to contribute resources.
  3. Establishing environment baselines
    CG: Model the time it takes to run tests based on benchmarks of the operating system. First pass: how long it takes to write to disk, and how long the same operation takes through Fedora. The model yields a weighting factor: faster disks should predict faster performance, but is the relationship linear or logarithmic? The goal is to normalize testing across conditions. Currently working on I/O with JUnit tests; possibly other tests for memory, network, and CPU.
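The first-pass disk measurement could be sketched as a sequential-write benchmark like the following (the function name and parameters are assumptions for illustration, not the actual JUnit tests under development):

```python
import os
import tempfile
import time

def disk_write_benchmark(size_mb=64, block_kb=256):
    """Time sequential writes of size_mb MiB in block_kb KiB blocks.

    Returns throughput in MiB/s. fsync is called so the OS page cache
    does not hide the true disk speed.
    """
    block = b"\0" * (block_kb * 1024)
    n_blocks = (size_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(n_blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    finally:
        os.remove(path)
    return size_mb / elapsed
```

Timing the same volume of writes through Fedora and comparing the two results would give the per-environment weighting factor described above.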
    AW: Topic has come up a few times. Tests so far have been capacity tests (small files, large files, empty Fedora resources, Fedora resources with RDF). Another suite of tests could focus on demonstrating particular aspects of performance, to be run in a short amount of time but long enough to characterize performance. Slice off classes or methods in integration tests for the REST API.
    AW: Logic at the ModeShape layer has been the biggest determinant in previous testing. Lower-level environment characteristics may not be visible in test results.
    CG: The environment effect may only be, say, 10%, but knowing it will inform how test results are interpreted; if it turns out to be a bigger issue, say 20%, that is worth knowing too.
    CG: Has tried the HTTP API integration tests. Will try the JMeter performance tests; may need help limiting runs.
    AW: The JMeter framework is comprehensive and should have a straightforward way of limiting runs. The hardest part may be getting JMeter running at all; raise questions on IRC.
    CG: Will run tests on local machines; take a stab at regression modeling.
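Once paired measurements exist (disk benchmark vs. Fedora test time per environment), the linear-versus-logarithmic question above can be answered with a small least-squares comparison; the numbers below are invented placeholders, not actual results:

```python
from math import log

# Hypothetical paired measurements per test environment:
# disk write throughput (MiB/s) vs. total test-suite time (s).
disk_mibs = [50, 80, 120, 200, 400]
test_secs = [900, 620, 450, 300, 180]

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Fit y against x (linear) and against log x (logarithmic) and keep
# whichever model explains more of the variance.
_, _, r2_linear = fit_line(disk_mibs, test_secs)
_, _, r2_log = fit_line([log(x) for x in disk_mibs], test_secs)
better = "logarithmic" if r2_log > r2_linear else "linear"
```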
  4. Finalize initial test results
    AW: The goal is to bring together summaries of tests run across the different environments and elevate them to the mailing lists. Summaries to be linked from: Performance and Scalability Test Plans
    AW: Script for outputting characteristics of system:
    CG: Can produce HTML output from R using vector graphics. Will add system characteristics to output.
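The system-characteristics output might collect something like the following; this Python stdlib sketch is an assumption about its shape (the field names are invented), since the actual script is only referenced above:

```python
import os
import platform
import shutil

def system_characteristics():
    """Basic environment facts to attach to each test-result summary."""
    total, used, free = shutil.disk_usage("/")
    return {
        "os": platform.system(),          # e.g. "Linux"
        "release": platform.release(),    # kernel / OS release string
        "machine": platform.machine(),    # e.g. "x86_64"
        "cpus": os.cpu_count(),
        "disk_total_gb": round(total / 1e9, 1),
        "disk_free_gb": round(free / 1e9, 1),
    }
```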