This page has proved very difficult to keep up to date; even so it contains useful information.  Later this year it will be replaced with a page suggesting example configurations to address a range of different use cases.

 

Answers to the following questions may be useful to others:

Stanford (as of November 29, 2011)

~200,000 objects

We're currently using VMs from a pool of Xen hypervisor servers. Each server has the following specs:

Here are the VMs we've created, all running Red Hat Enterprise Linux 5, each set replicated in dev, test, and production environments:

We have Oracle 11g running on its own Sun Fire X4200 (4-core AMD Opteron @ 2.4 GHz, 8 GB RAM, RHEL 5.3), with the Oracle data on NFS.

In the next few months, we will move the VMs to a pool of VMware servers, with each VM running RHEL 6. We will allocate more CPUs to the Fedora VM or create a dedicated Solr VM.

University of Virginia

For Libra, the unmediated self-deposit repository for scholarly work, we have approximately 50 objects in total as of November 29, 2011.

The production environment is a VM running in the University's Information Technology Services production cluster. The OS is Fedora Linux 13, and the VM is currently assigned the following resources:

Answers to other questions:

University of Hull

The Hydra at Hull systems are implemented on VMs within a large campus VMware ESX infrastructure.

The test and production servers are, in fact, triple VM instances because we separate Solr, Fedora, and the Hydra stack onto their own machines:

I suppose there is really a fourth machine lurking in the background, because the SQL side is handled by the University’s ‘central’ SQL cluster. Our dev server actually has all three components on one VM.

Hull is basically a Microsoft ‘shop’, hence the OS for two of the three machines, but we needed to implement Hydra on Linux to get everything performing as it should.

Last updated: 2013-04-16

Answers to other questions:

Rock and Roll Hall of Fame and Museum

I run five VMs on a cluster of two Dell servers. Here are the details of each node:

The VMs live on an IBM DS3500 SAS disk array, attached via iSCSI. Here's a breakdown of the systems and services on each VM:

There is a third physical machine, an IBM x3650 running Red Hat Enterprise Linux 5, which manages disk and tape storage and hosts a couple of NFS shares that the VMs mount. This is where Fedora stores its data, as well as the video data kept outside Fedora.

Other bits:

Penn State (as of October 2013)

 

Notre Dame (October 2012)

For production we use three servers, each with 32 GB of RAM and two 2.8 GHz CPUs. The servers are deployed as a cluster using Red Hat KVM. We run Fedora, Solr, and Apache+Passenger+Rails on separate machines.

On the Apache machine we have a separate instance of Apache for each Hydra head (about six), since each head was developed against a different version of Hydra. Each head uses REE 1.8.6. We are switching to Ruby 1.9.3 for all new development and plan to upgrade all the heads over time.
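To illustrate that layout, here is a minimal sketch (not Notre Dame's actual configuration) of how one such Apache instance might serve a single head through Passenger; the module path, host name, Ruby location, and directories are placeholders:

```apache
# Hypothetical fragment for one Apache instance serving a single Hydra head.
# Paths, the host name, and the Ruby location are placeholders, not real values.
LoadModule passenger_module /opt/passenger/ext/apache2/mod_passenger.so
PassengerRoot /opt/passenger
# Interpreter this head runs under (REE, per the note above)
PassengerRuby /opt/ruby-enterprise-1.8.6/bin/ruby

<VirtualHost *:80>
  ServerName head1.example.edu
  RailsEnv production
  # Passenger serves the Rails app whose public/ directory DocumentRoot points at
  DocumentRoot /var/www/head1/current/public
  <Directory /var/www/head1/current/public>
    AllowOverride all
    Options -MultiViews
  </Directory>
</VirtualHost>
```

Running a separate Apache instance per head keeps each head pinned to its own Passenger and gem versions, at the cost of one listening port or IP per instance.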

We also have a similar setup for pre-production testing. We use Jenkins for CI and deploy using Capistrano.
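As an illustration, a Capistrano deployment of that era (Capistrano 2) is typically driven by a config/deploy.rb along these lines; the application name, repository URL, and host names below are placeholders, not the actual Notre Dame settings:

```ruby
# config/deploy.rb -- minimal Capistrano 2-style sketch (placeholder values)
set :application, "hydra_head"
set :repository,  "git@git.example.edu:hydra_head.git"
set :scm,         :git
set :deploy_to,   "/var/www/#{application}"
set :user,        "deploy"

role :web, "app1.example.edu"   # Apache + Passenger
role :app, "app1.example.edu"
role :db,  "app1.example.edu", :primary => true

# Passenger picks up a new release when tmp/restart.txt is touched
namespace :deploy do
  task :restart, :roles => :app do
    run "touch #{current_path}/tmp/restart.txt"
  end
end
```

A Jenkins job can then run the test suite and, on success, invoke `cap deploy` against the pre-production or production stage.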

Overall the repository contains about 11,500 objects using about 44 GB on a NetApp SAN.

Yale (September 2013)

Production environment: the entire Hydra stack runs on a single server, a Dell PowerEdge R710 with 72 GB of memory, 2 x Intel Xeon X5660 2.8 GHz processors, and 2 x 160 GB internal HDDs. Data storage (the fedora_store for objects and datastreams) is provided by an NFS-mounted volume from the Library's SAN.

We plan to split this so that ingest continues on the above hardware virtually 24/7, while a VM that mirrors the setup serves as our public front end, offering a read-only Solr index that is updated periodically throughout the day. When the time comes, we will load balance by replicating this VM, moving to a SolrCloud setup, and clustering the MySQL instance.
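One common way to realise such a periodically updated, read-only index (shown here only as a sketch, not Yale's actual configuration) is Solr's built-in replication handler, with the ingest server acting as master and the public front end polling it on a schedule; the URL, core name, and interval are placeholders:

```xml
<!-- solrconfig.xml on the ingest (master) server -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- solrconfig.xml on the public, read-only (slave) server -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://ingest.example.edu:8983/solr/collection1/replication</str>
    <!-- pull index updates from the ingest server once an hour -->
    <str name="pollInterval">01:00:00</str>
  </lst>
</requestHandler>
```

The public front end then answers queries from its local copy of the index, and writes only ever happen on the ingest side.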

In addition, we have a number of VMs so that each development team member has their own dedicated box mirroring the production setup, just with fewer resources attached.

 

UC San Diego (January 2015)