Script designed to work against the log output of the JMeter tests
One JMeter component that renders a graph should not be enabled while a test is running, due to its performance impact (and the graph output is not needed)
It would be useful if everyone were familiar with the R scripts, kept their R environments up to date, etc.
Action: everyone should run the scripts in a local R environment to build familiarity
Final test outputs can be attached to the test result pages
We will need to distill these results into an overall message
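The analysis scripts above work against JMeter's CSV (.jtl) log output. As a minimal sketch of that kind of processing (not the actual R scripts; the column names below follow JMeter's standard CSV output, and the summary function is hypothetical), one could distill per-run response-time summaries like this:

```python
import csv
import io
import statistics

def summarize_jtl(jtl_text):
    """Summarize response times from a JMeter CSV (.jtl) log.

    Uses the standard 'elapsed' column (milliseconds per sample).
    """
    reader = csv.DictReader(io.StringIO(jtl_text))
    elapsed = [int(row["elapsed"]) for row in reader]
    return {
        "samples": len(elapsed),
        "mean_ms": statistics.mean(elapsed),
        "median_ms": statistics.median(elapsed),
        "max_ms": max(elapsed),
    }

# Tiny illustrative log; real .jtl files carry many more columns.
sample_log = """timeStamp,elapsed,label,success
1000,120,Create Object,true
1001,80,Create Object,true
1002,400,Create Object,false
"""

print(summarize_jtl(sample_log))
```

A summary table of this shape is the sort of output that could be attached to the test result pages.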
Update on release testing participation
A number of institutions are committed to testing future Fedora releases on an ongoing basis using their own production data
This can have an impact on performance and scale testing as some institutions will have large collections/datasets
We should add columns for committed institutions to the release testing pages so they can sign off on each release after testing
We will continue to reach out to more institutions to add to the list
Establishing environment baselines
Colin put together a script that identified characteristics of a test platform
Andrew put together a script to identify additional performance characteristics
Colin will refine JMeter tests to be short running rather than running until Fedora fails
We can run these while varying characteristics along the way (I/O, memory, etc.)
Establish correlations between hardware characteristics and application performance
Actions
Colin Gross will look into putting together a script for baselining hardware and network characteristics, to be factored into each test run.
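As a rough sketch of what such a baselining script might record (an assumption, not the actual script; field names are illustrative, and a real version would also probe disk I/O and network throughput), a small snapshot of the test platform could be captured alongside each run:

```python
import json
import os
import platform

def capture_baseline():
    """Capture basic platform characteristics to record with a test run.

    A sketch only: real baselining would add disk I/O and network probes.
    """
    return {
        "hostname": platform.node(),
        "os": platform.system(),
        "arch": platform.machine(),
        "cpu_count": os.cpu_count(),
        "total_env_vars": len(os.environ),  # stand-in for richer env capture
    }

# Emit as JSON so the baseline can be attached to a test result page.
print(json.dumps(capture_baseline(), indent=2))
```

Recording this with every run makes it possible to compare results across institutions whose hardware differs.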