# At 12:00AM on January 1, "shard" the DSpace Statistics Solr index.
# Ensures each year has its own Solr index - this improves performance.
0 0 1 1 * [dspace]/bin/dspace stats-util -s
After running the statistics shard process, the "View Usage Statistics" page in DSpace will not immediately recognize the new shard. Restart Tomcat to ensure that the new shard is included in usage statistics queries.
If you ran the shard process before upgrading to DSpace 5.7 or DSpace 6.1, multi-valued fields such as owningComm and owningColl are likely corrupted. Previous versions of the shard process lost the multi-valued nature of these fields, which makes it difficult to query for statistics records by community / collection / bundle.
You can verify this problem in the Solr admin console by inspecting the owningComm field on existing records and looking for the presence of "\," within that field.
The following process may be used to repair these records.
If you repeat the query that was run previously, the fields that contained "\," should now contain an array of owning community ids.
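To make the before/after concrete, here is a minimal, hypothetical Java sketch of the corruption check and repair described above. The class and method names are mine, not DSpace's: a corrupted single-valued owningComm value such as `10\,15\,20` contains escaped commas, and splitting on that escape sequence recovers the array of ids.

```java
import java.util.Arrays;
import java.util.List;

public class OwningCommRepair {

    // Returns true when a field value shows the "\," corruption marker
    // described above (hypothetical helper, not part of DSpace).
    static boolean isCorrupted(String value) {
        return value.contains("\\,");
    }

    // Splits a corrupted value such as 10\,15\,20 back into the list of
    // owning community ids it should have been stored as.
    static List<String> repair(String value) {
        // The regex "\\," matches a literal backslash followed by a comma.
        return Arrays.asList(value.split("\\\\,"));
    }

    public static void main(String[] args) {
        String corrupted = "10\\,15\\,20";           // literal value: 10\,15\,20
        System.out.println(isCorrupted(corrupted));  // true
        System.out.println(repair(corrupted));       // [10, 15, 20]
    }
}
```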
Technical implementation details
After sharding, the Solr data cores are located in the [dspace.dir]/solr directory. There is no need to define the location of each individual core in solr.xml because they are automatically discovered at runtime. This discovery happens in a static method of the org.dspace.statistics.SolrLogger class, and the discovered cores are stored in the statisticYearCores list. Each time a query is made to Solr, these cores are added as shards by the addAdditionalSolrYearCores method. The cores share a common configuration copied from your original statistics core, so subsequent ant update runs should not cause issues.
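As an illustration of the shard mechanism, the sketch below shows one plausible way a list of year cores could be joined into Solr's comma-separated "shards" request parameter. This is an assumption-laden sketch for explanation only; the real logic lives in SolrLogger.addAdditionalSolrYearCores, and the host, URL format, and method name here are mine.

```java
import java.util.List;

public class YearCoreShards {

    // Hypothetical helper: join each year core's URL into the value of
    // Solr's "shards" parameter so one query covers every year's core.
    static String buildShardsParam(String solrBaseUrl, List<String> yearCores) {
        StringBuilder shards = new StringBuilder();
        for (String core : yearCores) {
            if (shards.length() > 0) {
                shards.append(',');
            }
            shards.append(solrBaseUrl).append('/').append(core);
        }
        return shards.toString();
    }

    public static void main(String[] args) {
        // Prints one URL per core, comma-separated.
        System.out.println(buildShardsParam("localhost:8080/solr",
                List.of("statistics", "statistics-2016", "statistics-2017")));
    }
}
```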
The actual sharding of the original Solr core into individual cores by year is done in the shardSolrIndex method in the org.dspace.statistics.SolrLogger class. The sharding is done by first running a facet on the time field to split the records by year. Once we have the years from our logs, we query the main Solr core for all records belonging to each year and download them as CSVs. When we have all data for one year, we upload it to the newly created core for that year using the CSV update handler. Once all data for a year has been uploaded, that data is removed from the main Solr core (by doing it this way, if Solr crashes mid-process we do not need to start from scratch).
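The per-year split described above can be sketched with two small helpers: a Solr range query selecting one year's records, and the name of the per-year core they are uploaded to. The method names and the exact query string format are assumptions for illustration, not DSpace's actual code.

```java
public class YearShardHelper {

    // Hypothetical: Solr range query selecting all statistics records
    // whose "time" field falls within the given year.
    static String yearRangeQuery(int year) {
        return String.format("time:[%d-01-01T00:00:00Z TO %d-01-01T00:00:00Z]",
                year, year + 1);
    }

    // Hypothetical: name of the per-year core the records are moved into.
    static String yearCoreName(int year) {
        return "statistics-" + year;
    }

    public static void main(String[] args) {
        System.out.println(yearRangeQuery(2016)); // query covering 2016
        System.out.println(yearCoreName(2016));   // target core name
    }
}
```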