Fedora 4 Clustering Testing Summary

Virginia Tech has completed the Assessment Plan - Clustering and reports its work on this page. The detailed step-by-step procedures are documented in Fedora Cluster Installation in AWS.

We applied read-only and read-dominant workloads to replicated Fedora 4 clusters. Write operations did not present a bottleneck in our experiments. As the number of nodes increases, the maximum read load the cluster can handle also increases linearly. Read latency does not change significantly, although write latency shows signs of increasing as the number of replicated nodes grows. These results give us high confidence in using a replicated Fedora 4 cluster to accommodate higher read workloads.

Testing objective 1: Verify the load-balanced cluster setup using the updated Modeshape and Infinispan configuration.

Outcome: The Fedora 4 cluster can be set up successfully in Amazon AWS using the updated Modeshape and Infinispan configuration (Deploying in AWS). For the detailed procedure, see 2014-10-14 Acceptance Test - High Availability cluster.
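One way to sanity-check such a setup is sketched below (a minimal sketch, not the acceptance-test scripts themselves; the node URLs are placeholders for our EC2 instances): create a resource through one node of the replicated cluster, then read it back directly from every node.

```python
import requests

# Placeholder REST endpoints for the individual cluster nodes.
NODES = [
    "http://node1.example.com:8080/rest",
    "http://node2.example.com:8080/rest",
    "http://node3.example.com:8080/rest",
]

# Create a new container through the first node; Fedora 4 responds with a
# Location header pointing at the newly created resource.
resp = requests.post(NODES[0])
resp.raise_for_status()
new_path = resp.headers["Location"].split("/rest", 1)[1]

# Read the same path back from every node to confirm it was replicated.
for node in NODES:
    status = requests.get(node + new_path).status_code
    print(node, status)  # 200 from every node indicates replication works
```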

 

Testing objective 2: Demonstrate nodes joining and leaving the cluster.

Outcome: Once a node is configured in replication mode, it is easy to add it to the cluster. Through the AWS EC2 load balancer, nodes can easily be added to or removed from the cluster, and the load balancer distributes the traffic evenly. For the detailed procedure, see 2014-10-14 Acceptance Test - High Availability cluster.
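The availability check behind this outcome can be approximated with a loop like the one below (a minimal sketch, assuming an ELB DNS name in front of the cluster, shown here as a placeholder): keep issuing reads through the load balancer while nodes are added to or removed from the EC2 instance pool, and watch for any non-200 response.

```python
import time
import requests

ELB_URL = "http://fedora-cluster-elb.example.com/rest"  # placeholder ELB DNS name

# Poll the cluster through the load balancer once per second while nodes are
# added or removed; uninterrupted 200 responses mean clients are unaffected.
while True:
    try:
        status = requests.get(ELB_URL, timeout=5).status_code
    except requests.RequestException as exc:
        status = repr(exc)
    print(time.strftime("%H:%M:%S"), status)
    time.sleep(1)
```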

 

Testing objective 3: Measure the throughput and response time of a single instance and of the cluster.

Outcome: We used JMeter to simulate 100 users requesting the Fedora server concurrently over a fixed period. As long as the request rate stays within what the server can handle, read response times are essentially the same for a single instance and for the clusters. A sketch of this kind of read benchmark follows the table below.

Test Setup  | Num. Requests | Test Duration (Seconds) | Average Response Time (ms)
Individual  | 1000          | 60                      | 38
Cluster (3) | 1000          | 60                      | 38
Cluster (4) | 1000          | 60                      | 38
Individual  | 3000          | 60                      | 21
Cluster (3) | 3000          | 60                      | 27
Cluster (4) | 3000          | 60                      | 26
Individual  | 6000          | 60                      | 21
Cluster (3) | 6000          | 60                      | 24
Cluster (4) | 6000          | 60                      | 23
Individual  | 10000         | 60                      | 21
Cluster (3) | 10000         | 60                      | 23
Cluster (4) | 10000         | 60                      | 21

Detailed report: Response Time Comparison of Single Fedora VS Clusters
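For reference, the shape of such a read benchmark can be sketched as follows (a minimal sketch of what JMeter did for us, not the actual test plan; the URL, request count, and concurrency level are placeholders): 100 concurrent clients issue GETs and the mean response time is reported.

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://fedora-cluster-elb.example.com/rest/test-object"  # placeholder
NUM_REQUESTS = 1000   # total requests, matching the smallest run above
CONCURRENCY = 100     # simulated concurrent users

def timed_get(_):
    """Issue one GET and return its latency in milliseconds."""
    start = time.perf_counter()
    requests.get(URL)
    return (time.perf_counter() - start) * 1000

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_get, range(NUM_REQUESTS)))

print("average response time: %.1f ms" % (sum(latencies) / len(latencies)))
```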

 

Testing objective 4: Continue increasing the load until the server is unable to handle further requests from the client.

Outcome: At approximately 525-550 requests per second, a single Fedora instance starts to become unable to accept further requests.
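A crude version of this ramp can be sketched as follows (a minimal sketch, assuming a placeholder single-node URL; the offered load is approximated by firing one batch of requests per step rather than paced the way JMeter paces it): step the request rate up until requests begin to fail or time out.

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://fedora-single.example.com:8080/rest"  # placeholder single node

def attempt(_):
    """Return True if one GET completes successfully within the timeout."""
    try:
        return requests.get(URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False

for rate in range(100, 1001, 50):       # target load in requests per second
    with ThreadPoolExecutor(max_workers=rate) as pool:
        results = list(pool.map(attempt, range(rate)))  # one second's batch
    failures = results.count(False)
    print(f"{rate} req/s -> {failures} failed")
    if failures:                         # first failing step ~ saturation point
        break
    time.sleep(1)
```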

 

Testing objective 5: Examine whether an n-node (n >= 2) cluster handles n times as many requests as a single instance.

Outcome: We conducted load tests on a single instance and on 2-node, 3-node, and 4-node clusters. The table below shows the request rate at which the server could still respond to all requests completely.

         | Single | Cluster (n=2) | Cluster (n=3) | Cluster (n=4)
reqs/sec | 525    | 650           | 725           | 810

Testing objective 6: Use n JMeter clients (n=2) in different regions/availability zones to send requests and measure the response time.

Outcome: There are 4 regions: US-East, US-West, Asia, and EU. In each region we ran 2 JMeter clients sending requests to the Fedora cluster hosted in North Virginia. The response times were averaged per region and are summarized below.

              | US-East | US-West | Asia     | EU
100 reqs/sec  | 5 ms    | 85 ms   | 284 ms   | 113 ms
300 reqs/sec  | 5 ms    | 88 ms   | 278 ms   | 109 ms
600 reqs/sec  | 5 ms    | 86 ms   | 273.5 ms | 107 ms
1000 reqs/sec | 5 ms    | 90 ms   | 271.5 ms | 112 ms

 

Testing objective 7: Simulate a real-world case with 95% read requests and 5% write requests.

Outcome: To simulate a real-world case, we used JMeter to send requests at a 95%/5% read/write (create/delete) ratio per second and measured the average response times of the Fedora 4 cluster. The first table below shows read-only and write-only baselines and the second shows the mixed workload, each for 4-node, 3-node, and single-node configurations; a sketch of the mixed request generator follows the tables.

 

Workload        | Read | Write  | Action | Node
Only Read reqs  | 5 ms |        |        | 4
Only Write reqs |      | 13 ms  | Create | 4
Only Write reqs |      | 190 ms | Delete | 4
Only Read reqs  | 5 ms |        |        | 3
Only Write reqs |      | 21 ms  | Create | 3
Only Write reqs |      | 66 ms  | Delete | 3
Only Read reqs  | 4 ms |        |        | 1
Only Write reqs |      | 13 ms  | Create | 1
Only Write reqs |      | 63 ms  | Delete | 1

Workload               | Read | Write  | Action | Node
R-reqs/W-reqs (95%/5%) | 4 ms | 14 ms  | Create | 4
R-reqs/W-reqs (95%/5%) | 5 ms | 216 ms | Delete | 4
R-reqs/W-reqs (95%/5%) | 7 ms | 29 ms  | Create | 3
R-reqs/W-reqs (95%/5%) | 5 ms | 61 ms  | Delete | 3
R-reqs/W-reqs (95%/5%) | 4 ms | 15 ms  | Create | 1
R-reqs/W-reqs (95%/5%) | 4 ms | 63 ms  | Delete | 1
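The mixed workload can be sketched as below (a minimal sketch, assuming a placeholder cluster URL, not the actual JMeter test plan): roughly 95% of iterations perform a GET, and the remaining 5% create and then delete a container, with latencies collected per operation.

```python
import random
import time
import requests

BASE = "http://fedora-cluster-elb.example.com/rest"  # placeholder cluster URL
read_ms, create_ms, delete_ms = [], [], []

def timed(call, bucket):
    """Run one request, record its latency in ms, and return the response."""
    start = time.perf_counter()
    resp = call()
    bucket.append((time.perf_counter() - start) * 1000)
    return resp

for _ in range(1000):
    if random.random() < 0.95:                        # ~95% reads
        timed(lambda: requests.get(BASE), read_ms)
    else:                                             # ~5% writes: create then delete
        resp = timed(lambda: requests.post(BASE), create_ms)
        location = resp.headers["Location"]
        timed(lambda: requests.delete(location), delete_ms)

for name, bucket in [("read", read_ms), ("create", create_ms), ("delete", delete_ms)]:
    if bucket:
        print(f"{name}: {sum(bucket) / len(bucket):.1f} ms average")
```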

 

The table below shows the request rate at which the server could still respond to all requests completely under this pressure test. The read load the server can handle drops slightly compared with the read-only test, while handling of the write requests remains the same.

            | Single | Cluster (n=2) | Cluster (n=3) | Cluster (n=4)
RW-reqs/sec | 480    | 590           | 680           | 780

This figure shows the difference between read-only and read-write throughput: the blue line shows read-only and the green line shows read-write.

