Setup

  1. Version of Fedora
    1. 4.5.1-SNAPSHOT, build #b73f270 (2016-01-21)

  2. Fedora configuration details
    1. $CATALINA_HOME/bin/setenv.sh

      JAVA_OPTS="-Dfedora-dev.tomcat -Dfcrepo.home=/fedora-dev/data/fedora -Djava.io.tmpdir=/fedora-dev/data/tomcat/tomcat-temp -Djava.awt.headless=true -Dfile.encoding=UTF-8"
      JAVA_OPTS="${JAVA_OPTS} -Xms512m -Xmx6g -XX:NewSize=256m -XX:MaxNewSize=2g -XX:MetaspaceSize=64m -XX:MaxMetaspaceSize=2g" 
      JAVA_OPTS="${JAVA_OPTS} -XX:+DisableExplicitGC"
      
      ## GC Debugging
      JAVA_OPTS="${JAVA_OPTS} -Xloggc:/fedora-dev/logs/tomcat/java-gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
      
      export JAVA_OPTS
      
      CATALINA_PID=/fedora-dev/logs/tomcat/catalina.pid
      export CATALINA_PID
      CATALINA_TMPDIR=/fedora-dev/data/tomcat/tomcat-temp
      export CATALINA_TMPDIR
      CATALINA_OUT=/fedora-dev/logs/tomcat/catalina.log
      export CATALINA_OUT
  3. System details (OS, memory, processors, hardware specs or virtualization, JVM, etc)
    1. VMWare CentOS Linux release 7.2.1511 (Core)

    2. 8 GB RAM, 2 CPUs: Intel Xeon CPU E5640 @ 2.67 GHz
    3. Java SE Runtime Environment (build 1.8.0_74-b02), Java HotSpot 64-Bit Server VM (build 25.74-b02, mixed mode)

    4. Tomcat 8.0.32
  4. Initial state of the repository
    1. empty
  5. Number of client processes/threads (on separate machine)
    1. 1
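
The setenv.sh above enables GC logging (-Xloggc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps). As a rough post-run check, that log can be scanned for full collections and the longest reported pauses; a minimal sketch using the log path from the configuration above (the exact log format depends on the collector in use):

      # Count full collections recorded during the run
      grep -c "Full GC" /fedora-dev/logs/tomcat/java-gc.log
      # Show the five longest wall-clock pause times reported by the JVM
      grep -o "real=[0-9.]* secs" /fedora-dev/logs/tomcat/java-gc.log | sort -t= -k2 -n | tail -5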

Test

Command:

date > summary.log; jmeter-2.13/bin/jmeter -Dfedora_4_server=libdsgp1 -Dfedora_4_port=80 -Dfedora_4_context=fcrepo/rest -n -t ./fedora.jmx >> summary.log; date >> summary.log
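
The command above writes only JMeter's summariser output to summary.log. For a future run, a variant that also records per-sample results and JMeter's own log could look like the following sketch (-l and -j are standard JMeter CLI options; the file names here are illustrative, not from the original run):

date > summary.log
jmeter-2.13/bin/jmeter -Dfedora_4_server=libdsgp1 -Dfedora_4_port=80 \
    -Dfedora_4_context=fcrepo/rest -n -t ./fedora.jmx \
    -l ./results.jtl -j ./jmeter-run.log >> summary.log
date >> summary.log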

 

The test ran from Thu Mar 03 16:46:57 CST 2016 to Sat Mar 05 09:19:10 CST 2016, a duration of roughly 40 hours 32 minutes.

Results

Host CPU usage (percentage): [chart]

Host Memory usage (percentage): [chart]

3 Comments

  1. Tomcat 8 shows these possible memory leak warnings upon shutdown:

     

    Tomcat warnings
    03-Mar-2016 15:21:52.884 WARNING [fedora-dev.library.wisc.edu-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [fcrepo] appears to have started a thread named [Listener:32781] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
     java.net.PlainSocketImpl.socketAccept(Native Method)
     java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     java.net.ServerSocket.implAccept(ServerSocket.java:545)
     java.net.ServerSocket.accept(ServerSocket.java:513)
     com.arjuna.ats.internal.arjuna.recovery.Listener.run(Listener.java:122)
    03-Mar-2016 15:21:52.885 WARNING [fedora-dev.library.wisc.edu-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [fcrepo] appears to have started a thread named [Transaction Reaper] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
     java.lang.Object.wait(Native Method)
     com.arjuna.ats.internal.arjuna.coordinator.ReaperThread.run(ReaperThread.java:90)
    03-Mar-2016 15:21:52.886 WARNING [fedora-dev.library.wisc.edu-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [fcrepo] appears to have started a thread named [Transaction Reaper Worker 0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
     java.lang.Object.wait(Native Method)
     java.lang.Object.wait(Object.java:502)
     com.arjuna.ats.arjuna.coordinator.TransactionReaper.waitForCancellations(TransactionReaper.java:321)
     com.arjuna.ats.internal.arjuna.coordinator.ReaperWorkerThread.run(ReaperWorkerThread.java:65)
  2. Test began at Thu Mar 03 16:46:57 CST 2016 (1457045217793), ended at Sat Mar 05 09:19:10 CST 2016 (1457191150667)

    It looks like the test aborted right after an Infinispan file-not-found exception:  https://gist.github.com/sprater/ad6b34afe3f37d722dc0

    I also received a thread dump and a number of SEVERE thread memory leak warnings when I shut down the Tomcat server this morning:  https://gist.github.com/sprater/a752b5c546426eb48e63

    Disk space appears not to have been an issue: only 11% of the file system storage is occupied. I was able to stop and restart the server, and continue ingesting a few files afterwards.

  3. The potential memory leak appears to be associated with ActiveMQ, which seems to be a separate issue from the aborted test.

    Can I request that you re-run your test with a JDBC backend (either MySQL or PostgreSQL) instead of LevelDB, as documented here:

    It looks like the issue you hit may be related to the default LevelDB backend.
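
    For reference, a minimal sketch of what the suggested switch to a JDBC object store might look like in setenv.sh. The fcrepo.modeshape.configuration and fcrepo.mysql.* property names are assumptions based on the Fedora 4.5-era JDBC object store documentation and should be verified against the docs for the build in use; the appropriate JDBC driver must also be available to the webapp.

      ## Hypothetical additions to setenv.sh for a MySQL-backed object store
      ## (property names assumed from the Fedora 4.5 docs; host, port, and credentials are placeholders)
      JAVA_OPTS="${JAVA_OPTS} -Dfcrepo.modeshape.configuration=classpath:/config/jdbc-mysql/repository.json"
      JAVA_OPTS="${JAVA_OPTS} -Dfcrepo.mysql.host=localhost -Dfcrepo.mysql.port=3306"
      JAVA_OPTS="${JAVA_OPTS} -Dfcrepo.mysql.username=fcrepo -Dfcrepo.mysql.password=changeme"
      export JAVA_OPTS

    For PostgreSQL, the analogous configuration would point at classpath:/config/jdbc-postgresql/repository.json with the corresponding fcrepo.postgresql.* properties.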