Update on graphs and summaries of completed tests
https://wiki.duraspace.org/display/FF/2016-08-15+Performance+-+Scale+meeting#id-2016-08-15Performance-Scalemeeting-CurrentSummaries
Getting community sponsorship for testing infrastructure
Establishing environment baselines
CG: Modeling the time it takes to run tests based on benchmarks of the operating environment. First pass: https://gist.github.com/grosscol/f997f2b3cef80edb640266a03f829a77. Measure how long it takes to write to disk, then how long the same work takes with Fedora; the model acts as a weight. Faster disks predict faster performance, but is the relationship linear or logarithmic? Goal is to normalize testing across conditions. Currently working on I/O benchmarks as JUnit tests; possibly other tests for memory, network, and CPU.
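A minimal sketch of the kind of disk-write baseline described above, written as a JUnit test. The class name, payload size, and iteration count are illustrative assumptions, not taken from the gist:

import org.junit.Test;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Times raw disk writes so Fedora results can later be weighted by the
// speed of the underlying environment.
public class DiskWriteBaselineTest {

    @Test
    public void timeRawDiskWrites() throws IOException {
        byte[] payload = new byte[1024 * 1024];           // 1 MiB of dummy data
        Arrays.fill(payload, (byte) 'x');
        Path dir = Files.createTempDirectory("io-baseline");

        long start = System.nanoTime();
        for (int i = 0; i < 100; i++) {                   // ~100 MiB written in total
            Files.write(dir.resolve("file-" + i + ".bin"), payload);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Wrote 100 x 1 MiB files in " + elapsedMs + " ms");
    }
}

The same timing could then be repeated for equivalent writes through Fedora, with the ratio between the two serving as the environment weight.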
AW: Topic has come up a few times. Tests so far have been capacity tests (small files, large files, empty Fedora resources, Fedora resources with RDF). Another suite of tests could focus on demonstrating particular aspects of performance, to be run in a short amount of time but long enough to characterize performance. Slice off classes or methods from the integration tests for the REST API.
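One way such a short-running suite might look: timing a handful of create-and-fetch round trips against the Fedora REST API. The base URL and iteration count are placeholders, and this is a sketch rather than an actual slice of the existing integration tests:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Short-running REST API check: create resources via POST, fetch each one
// back, and report the total elapsed time.
public class RestApiTimingCheck {

    private static final String BASE = "http://localhost:8080/rest"; // placeholder Fedora base URL

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        long start = System.nanoTime();

        for (int i = 0; i < 50; i++) {
            HttpRequest post = HttpRequest.newBuilder(URI.create(BASE))
                    .POST(HttpRequest.BodyPublishers.ofString("test body " + i))
                    .build();
            HttpResponse<String> created = client.send(post, HttpResponse.BodyHandlers.ofString());

            String location = created.headers().firstValue("Location").orElse(BASE);
            HttpRequest get = HttpRequest.newBuilder(URI.create(location)).GET().build();
            client.send(get, HttpResponse.BodyHandlers.ofString());
        }

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("50 create+fetch round trips in " + elapsedMs + " ms");
    }
}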
AW: Logic at the ModeShape layer has been the biggest determinant in previous testing. Lower-level environment characteristics may not be visible in test results.
CG: The environment effect may only be, say, 10%, but knowing its size helps in interpreting test results, and flags a bigger issue if it turns out to be, say, 20%.
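A worked example of that weighting, with purely illustrative numbers:

// Illustrative normalization of a Fedora result by a disk-speed weight.
public class NormalizationExample {
    public static void main(String[] args) {
        double baselineA = 900;    // ms to write 100 MiB on machine A (hypothetical)
        double baselineB = 1000;   // ms to write 100 MiB on machine B (hypothetical)
        double fedoraOnA = 4200;   // ms for a Fedora test run on machine A (hypothetical)

        double weight = baselineB / baselineA;            // ~1.11: A's disk is ~11% faster
        System.out.println("Estimate for B: " + (fedoraOnA * weight) + " ms");
    }
}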
CG: Has tried HTTP API integration tests. Will try JMeter performance tests; may need help limiting runs.