
A harness to evaluate performance for Fedora Futures platform candidates.

 

Test Data

Set 1: Digital Corpora govdocs

Set 2: OpenPlanets

Set 3: Random binary data created from a stable set of filesizes

 

The govdocs dataset includes (…), (…some characteristics, e.g. N PDF documents, varying in size from X to Y)

 

The OpenPlanets dataset …

 

(Description of fixture processing, generation of bagits)

 

The Random binary data set

The set is created by the script https://github.com/futures/ff-fixtures/blob/master/create_random_files.sh, which writes the files to objects/random and creates the manifest-md5.txt file used by the JMeter tests at https://github.com/futures/ff-jmeter-madness

It uses standard GNU commands, including dd, rm, and md5sum, and iterates over the file sizes listed in https://github.com/futures/ff-fixtures/blob/master/random_sizes.data, creating one file of each given size in megabytes. To a certain extent this ensures the comparability of the measurements, since exactly the same number of files with the same number of bytes is created each time the data set is generated.
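The generation loop described above can be sketched roughly as follows. This is a simplified illustration, not the actual script from the repository; it falls back to a small example sizes file when random_sizes.data is absent:

```shell
#!/bin/sh
# Simplified sketch of the generation loop; the real script is
# create_random_files.sh in the ff-fixtures repository.
# Uses a small example sizes file if random_sizes.data is absent.
[ -f random_sizes.data ] || printf '1\n2\n' > random_sizes.data

mkdir -p objects/random
rm -f manifest-md5.txt

n=0
while read size; do
    n=$((n + 1))
    file="objects/random/random_${n}.data"
    # one file of $size megabytes of random bytes
    dd if=/dev/urandom of="$file" bs=1M count="$size" 2>/dev/null
    # record the checksum for the JMeter tests
    md5sum "$file" >> manifest-md5.txt
done < random_sizes.data
```

Because the sizes file is fixed, every run of the loop produces the same number of files with the same byte counts, which is what makes repeated measurements comparable.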

To create the binary test data set, first check out the project https://github.com/futures/ff-jmeter-madness:

git clone https://github.com/futures/ff-jmeter-madness

Then initialize and update the submodules:

git submodule init && git submodule update

This will check out the fixtures submodule, which contains the script create_random_data.sh.

Switch to the fixtures subdirectory:

cd fixtures

Then run the script:

./create_random_data.sh

This will create the directory objects/random and, using dd, write the random binaries as objects/random/random_N.data.

Additionally, the file manifest-md5.txt is generated; the JMeter tests use it to locate the random binaries and upload them via HTTP requests.
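Since the manifest uses the standard md5sum format (checksum, two spaces, relative path, one entry per line), the generated set can also be verified independently of JMeter with md5sum's check mode. A minimal sketch, using a sample file in place of the real data set:

```shell
#!/bin/sh
# Verify binaries against the manifest with md5sum's check mode.
# The sample file here stands in for the generated data set.
mkdir -p objects/random
printf 'sample' > objects/random/random_1.data
md5sum objects/random/random_1.data > manifest-md5.txt

# exits non-zero if any file's checksum no longer matches
md5sum -c manifest-md5.txt
```

This is a quick way to confirm the fixtures were not corrupted between generation and a test run.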

Now you can fire up JMeter and open the JMX file containing the test plan.

Ingest Test

1. For each "bag", create an "object" (i.e. whatever the equivalent is for the platform candidate)

2. For each resource described in the bag's manifest, add a "datastream" (again, whatever the equivalent is for the platform candidate).
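For a platform candidate exposing an HTTP interface, the two steps above might map onto requests like the following dry-run sketch, which only prints the request each step would issue. BASE, the /objects and /datastreams paths, and the sample bag are hypothetical placeholders, not any candidate's actual API:

```shell
#!/bin/sh
# Dry-run sketch of the ingest test: print the request each step
# would issue. BASE and the /objects and /datastreams paths are
# hypothetical placeholders for a candidate's HTTP API.
BASE="http://localhost:8080/rest"

# a tiny sample bag standing in for the real fixtures
BAG="bags/bag-0001"
mkdir -p "$BAG/data"
printf 'payload' > "$BAG/data/file1.bin"
md5sum "$BAG/data/file1.bin" | sed "s|$BAG/||" > "$BAG/manifest-md5.txt"

OBJ=$(basename "$BAG")

# 1. create one "object" per bag
echo "POST $BASE/objects/$OBJ"

# 2. add one "datastream" per resource in the bag's manifest
while read checksum path; do
    echo "POST $BASE/objects/$OBJ/datastreams/$(basename "$path")"
done < "$BAG/manifest-md5.txt"
```

The actual test plan issues real HTTP requests via JMeter; this sketch only illustrates how bags and manifest entries translate into the object/datastream operations described above.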

 

Update Test

 

 

 
