
Introduction

This document describes the technical aspects of the testing integrated into DSpace. It is divided into three main sections, one for each type of test added to DSpace, plus a section discussing the common tools used. It is intended to serve as a reference for the community so more test cases can be created.

Common Tools

There is a set of tools used by all the tests. These tools will be described in this section.

Maven

The build tool for DSpace, Maven, will also be used to run the tests. For this we use the Surefire plugin, which allows us to automatically launch the tests included in the "test" folder of the project. We also include the Surefire Report plugin in case you are not using a Continuous Integration environment that can read the output and generate the reports.

The plugin has been configured to ignore test files whose names start with "Abstract". This way we can create a hierarchy of classes and group elements common to several tests (like certain mocks or configuration settings) in a parent class.
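As an illustrative sketch (the actual pattern in the DSpace pom.xml may differ), a Surefire configuration for such an exclusion looks like:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- skip abstract parent classes; they hold shared fixtures, not tests -->
      <exclude>**/Abstract*.java</exclude>
    </excludes>
  </configuration>
</plugin>
```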

Tests in Maven are usually placed under src/test: Java sources in src/test/java/&lt;package&gt; and resources in src/test/resources.

To run the tests execute:

mvn test

The tests will also be run during a normal Maven build cycle. To skip the tests, run Maven like:

mvn package -Dmaven.test.skip=true

By default we will disable running the tests, as they might slow the compilation cycle for developers. They can be activated using the command

mvn package -Dmaven.test.skip=false

or by commenting out the corresponding profile (skiptests) in the main pom.xml file, at the root of the project.

JUnit

JUnit is a testing framework for Java applications. It was one of the first testing frameworks for Java and is in widespread use in the community. The framework simplifies the development of unit tests, and current IDEs make it even easier to build those tests from existing classes and run them.

JUnit 4.8.1 is added as a dependency in the parent project. The dependency needs to be propagated to the subprojects that contain tests to be run.

As of JUnit 4.4, Hamcrest is included. Hamcrest is a library of matcher objects that facilitate the validation of conditions in tests.
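A minimal sketch of a JUnit 4 test using Hamcrest matchers. The normalise method is an inline stand-in for a class under test, not real DSpace code:

```java
import org.junit.Test;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

public class NormaliseTest {
    // Illustrative method under test, defined inline to keep the sketch self-contained.
    static String normalise(String s) { return s == null ? "" : s.trim(); }

    @Test
    public void trimsSurroundingWhitespace() {
        // assertThat + a Hamcrest matcher reads close to plain English
        assertThat(normalise("  dspace  "), is("dspace"));
    }

    @Test
    public void mapsNullToEmptyString() {
        assertThat(normalise(null), equalTo(""));
    }
}
```

Run with `mvn test`, such classes are picked up automatically by Surefire.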

JMockit

JMockit is a popular and powerful mocking framework. Unlike other mocking frameworks, it can mock final classes and methods, static methods, constructors, and other code fragments that such frameworks cannot handle.

JMockit 0.998 has been added to the project to provide a mocking framework to the tests.
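A rough sketch in the style of the JMockit 0.99x mock-class API (@MockClass / Mockit.setUpMocks). The Config class and its static method are hypothetical stand-ins, not real DSpace code, and the exact API may differ between JMockit versions:

```java
import mockit.Mock;
import mockit.MockClass;
import mockit.Mockit;

// A final class with a static method that would normally hit the database;
// exactly the kind of code most mocking frameworks cannot replace.
final class Config {
    static String readFromDatabase(String key) {
        throw new IllegalStateException("no database available in unit tests");
    }
}

// Mock class: while set up, its @Mock methods replace the real ones.
@MockClass(realClass = Config.class)
class MockConfig {
    @Mock
    public static String readFromDatabase(String key) { return "mock-value"; }
}

public class JMockitSketch {
    public static void main(String[] args) {
        Mockit.setUpMocks(MockConfig.class);
        System.out.println(Config.readFromDatabase("any")); // mocked result
        Mockit.tearDownMocks();
    }
}
```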

Unit Tests

These tests verify how a single object works in isolation, typically exercising each method for the expected output in several situations. They are executed exclusively at the API level.

We can consider two types of classes when developing the unit tests: classes that depend on the database and classes that don't. The latter can be tested easily, using standard procedures and tests. Our main problem is classes tightly coupled with the database and its helper objects, like BitstreamFormat or the classes that inherit from DSpaceObject. Because this section focuses on unit tests, no database is available, which in turn means some of the methods cannot be tested as-is. We have the following options:

* Ignore the methods

* Create some mocks for the database and helper objects

The second option is preferred, although using mocks means we won't see errors, as we will always get a constant valid result. At this point there is a choice: we can create big standard mock objects (with fixed output) to be used in all our tests, or we can create custom mocks for every test. The first option means we won't be able to test error cases, as the output will always be the same for a given call on a mocked object. The second option ties our mocks to the current implementation of the method under test, as we would need to create specific mocks for each method it calls.

To preserve encapsulation, and given that we will have a suite of integration tests that deals with the database-related methods properly, the first option has been implemented. As a result, a set of objects (MockContext, MockDatabaseManager, MockResultSet and more) has been created and used for the tests. All the methods that depend on the database (like DatabaseManager.find()) will return mock objects with a constant value, and the real testing will be deferred to the integration tests, keeping the unit tests as a thin verification layer against major errors in the code. The other methods, which don't depend on the database, will be tested normally.
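The fixed-output approach can be illustrated in plain Java. The DatabaseLookup and MockDatabaseLookup names below are made up for the example; the real mocks are MockContext, MockDatabaseManager, and so on:

```java
// Illustrative dependency that would normally hit the database.
interface DatabaseLookup {
    String findNameById(int id);
}

// "Big standard mock" with fixed output: every call returns the same valid
// value, so code above it runs without a database, but error paths are never
// exercised here; they are left to the integration tests.
class MockDatabaseLookup implements DatabaseLookup {
    public String findNameById(int id) { return "mock-name"; }
}

public class FixedOutputMockDemo {
    // Method under test: only its non-database logic is really verified.
    static String describe(DatabaseLookup db, int id) {
        return "name=" + db.findNameById(id);
    }

    static String demo() { return describe(new MockDatabaseLookup(), 7); }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```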

There is a base class called "AbstractUnitTest", located under the package "org.dspace" in the test folder of dspace-api. This class contains a series of mocks and references necessary to run the tests in DSpace, like mocks of the Context object. All unit tests should inherit from this class.

Regarding the implementation: several objects offer only a hidden constructor plus a factory method to create an instance of the object. As the factory method depends on the database, we can't rely on it to create instances of our objects. The Reflection API has been used instead to create the required instances for the unit tests.
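A self-contained sketch of that reflection technique. Widget is a hypothetical class standing in for a DSpace object whose constructor is hidden behind a database-backed factory method:

```java
import java.lang.reflect.Constructor;

// Hypothetical class mimicking DSpace objects that hide their constructor;
// not a real DSpace class.
class Widget {
    private final String handle;
    private Widget(String handle) { this.handle = handle; } // hidden constructor
    public String getHandle() { return handle; }
}

public class ReflectionInstanceDemo {
    // Create an instance via the Reflection API, bypassing the factory method
    // that would otherwise require a live database connection.
    static Widget newWidget(String handle) {
        try {
            Constructor<Widget> c = Widget.class.getDeclaredConstructor(String.class);
            c.setAccessible(true); // lift the private-access restriction
            return c.newInstance(handle);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(newWidget("123456789/42").getHandle());
    }
}
```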

To summarise, the following issues have been detected in the code; they make unit testing harder and impact the maintainability of the code:

* Hidden dependencies. Many objects require other objects (like DatabaseManager), but the dependency is never explicitly declared or set. These dependencies should be supplied as parameters in the constructors or factory methods.

* Hidden constructors. It would be advisable to have public constructors in the objects and to provide a Factory class that manages the instantiation of all required entities. This would facilitate testing and provide a separation of concerns, as including the factory methods inside the objects usually adds hidden dependencies (see the previous point).

Refactoring would be required to fix these issues.
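A hypothetical sketch of what such a refactoring could look like: dependencies passed in through a public constructor, with a separate factory handling the wiring. None of these classes exist in DSpace:

```java
// Illustrative dependency, e.g. something that loads data from storage.
interface StorageService {
    String load(int id);
}

class Record {
    private final StorageService storage; // dependency made explicit
    private final int id;

    public Record(StorageService storage, int id) { // public constructor
        this.storage = storage;
        this.id = id;
    }

    public String content() { return storage.load(id); }
}

// Factory class centralising instantiation; production code uses it,
// while tests can bypass it and hand a stub straight to the constructor.
class RecordFactory {
    private final StorageService storage;
    RecordFactory(StorageService storage) { this.storage = storage; }
    Record create(int id) { return new Record(storage, id); }
}

public class FactoryRefactorDemo {
    static String demo() {
        StorageService stub = id -> "content-" + id; // test double, no database needed
        return new RecordFactory(stub).create(3).content();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```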

Integration Tests

These tests check the interaction of components within the system. Some examples: placing an item into a collection, or creating a new metadata schema and adding some fields to it. Like the unit tests, they operate at the API level, ignoring the interface components above it.

Functional Tests

These are tests that come from user-based use cases, such as a user wanting to search DSpace to find material and download a PDF, or something more complex, like a user wanting to submit their thesis to DSpace and follow it through the approval process. These are stories about how users perform tasks, and they cover a wide array of components within DSpace.

Thanks

This page has been created with help from Stuart Lewis, Scott Phillips and Gareth Waller. I want to thank them all for their comments. Some information has been taken from Wikipedia to make the text more complete. I'm to blame for errors in the text.

Feel free to contribute to this page!
