This API refactoring was approved for the DSpace 6.0 release
This Service-based API refactoring was approved for release in DSpace 6.0 via a vote taken on the 'dspace-devel' mailing list.
What is the Service-based API refactoring?
The Service-based API refactoring is a major refactoring of the "dspace-api" (DSpace's Java API) to better support separation of concerns and responsibilities. Simply put, the existing (5.x and below) API often intermingles business logic and database logic, which makes it difficult to maintain, debug, and build against. One of the most obvious examples is how we deal with database software support (PostgreSQL vs. Oracle), but such intermingling of logic exists in many of our core classes. The DSpace Service-based API teases the database logic apart from the business logic into separate layers, while also adding support for Hibernate. The goal is an easier-to-maintain, more modular API that also improves how we deal with database logic in general (via Hibernate).
Additional Resources
This Service API was presented/discussed in a Special Topic Developer Meeting on July 23, 2015. Slides and video from that meeting are now available:
Module refactoring now COMPLETE!
All existing DSpace interfaces/user interfaces have been refactored to support the new API, and all unit tests succeed. This refactoring took place on the DS-2701-service-api feature branch.
We'd like to thank the following individuals for their hard work in helping to refactor and test our various modules to support this new API:
The DSpace API was originally created in 2002. After working with the API day in, day out for over 7 years, I felt it was time for a change. In this proposal I will highlight the issues the current API has and suggest solutions for them.
Currently every database object (more or less) corresponds to a single Java class. This single Java class contains the following logical code blocks:
Working with these kinds of "god classes" brings the following disadvantages:
As a consequence, making changes to a database table becomes more complex, since it is unclear which classes have access to the table. Below is a schematic overview of the usage of the Item class:
String params = "%" + query.toLowerCase() + "%";
StringBuffer queryBuf = new StringBuffer();
queryBuf.append("select e.* from eperson e " +
    " LEFT JOIN metadatavalue fn on (resource_id=e.eperson_id AND fn.resource_type_id=? and fn.metadata_field_id=?) " +
    " LEFT JOIN metadatavalue ln on (ln.resource_id=e.eperson_id AND ln.resource_type_id=? and ln.metadata_field_id=?) " +
    " WHERE e.eperson_id = ? OR " +
    "LOWER(fn.text_value) LIKE LOWER(?) OR LOWER(ln.text_value) LIKE LOWER(?) OR LOWER(email) LIKE LOWER(?) ORDER BY ");

if (DatabaseManager.isOracle()) {
    queryBuf.append(" dbms_lob.substr(ln.text_value), dbms_lob.substr(fn.text_value) ASC");
} else {
    queryBuf.append(" ln.text_value, fn.text_value ASC");
}

// Add offset and limit restrictions - Oracle requires special code
if (DatabaseManager.isOracle()) {
    // First prepare the query to generate row numbers
    if (limit > 0 || offset > 0) {
        queryBuf.insert(0, "SELECT /*+ FIRST_ROWS(n) */ rec.*, ROWNUM rnum FROM (");
        queryBuf.append(") ");
    }
    // Restrict the number of rows returned based on the limit
    if (limit > 0) {
        queryBuf.append("rec WHERE rownum<=? ");
        // If we also have an offset, then convert the limit into the maximum row number
        if (offset > 0) {
            limit += offset;
        }
    }
    // Return only the records after the specified offset (row number)
    if (offset > 0) {
        queryBuf.insert(0, "SELECT * FROM (");
        queryBuf.append(") WHERE rnum>?");
    }
} else {
    if (limit > 0) {
        queryBuf.append(" LIMIT ? ");
    }
    if (offset > 0) {
        queryBuf.append(" OFFSET ? ");
    }
}
// Should we send a workflow alert email or not?
if (ConfigurationManager.getProperty("workflow", "workflow.framework").equals("xmlworkflow")) {
    if (useWorkflowSendEmail) {
        XmlWorkflowManager.start(c, wi);
    } else {
        XmlWorkflowManager.startWithoutNotify(c, wi);
    }
} else {
    if (useWorkflowSendEmail) {
        WorkflowManager.start(c, wi);
    } else {
        WorkflowManager.startWithoutNotify(c, wi);
    }
}
if (ConfigurationManager.getProperty("workflow", "workflow.framework").equals("xmlworkflow")) {
    // Remove any XmlWorkflowItems
    XmlWorkflowItem[] xmlWfarray = XmlWorkflowItem.findByCollection(ourContext, this);
    for (XmlWorkflowItem aXmlWfarray : xmlWfarray) {
        // remove the workflowitem first, then the item
        Item myItem = aXmlWfarray.getItem();
        aXmlWfarray.deleteWrapper();
        myItem.delete();
    }
} else {
    // Remove any WorkflowItems
    WorkflowItem[] wfarray = WorkflowItem.findByCollection(ourContext, this);
    for (WorkflowItem aWfarray : wfarray) {
        // remove the workflowitem first, then the item
        Item myItem = aWfarray.getItem();
        aWfarray.deleteWrapper();
        myItem.delete();
    }
}
Take, for example, the WorkflowManager: this class consists of only static methods. If a developer at an institution wants to alter one method, the entire class needs to be overridden in the additions module. When DSpace then needs to be upgraded to a new version, it is very difficult to locate the changes in such a big class file.
DSpace caches all DSpaceObjects that are retrieved within the lifespan of a single Context object (which can be a CLI thread or a page view in the UI). The cache is only cleared when the context is closed (which is coupled with a database commit/abort), when the cache is explicitly cleared, or when a single item is removed from it. When writing large scripts one must always keep the cache in mind, because if it isn't cleared DSpace will keep consuming memory until an OutOfMemory exception occurs. So contributions tested by users with a small dataset can lead to memory leaks.
When retrieving a bundle from the database, DSpace automatically loads all bitstreams linked to that bundle, and for each of these bitstreams it retrieves the BitstreamFormat. For example, when retrieving all bundles for an item just to get the bundle names, all files and their respective bitstream formats are loaded into memory. This leads to many superfluous database queries, since we only need the bundle name.
To fix the current problems with the API and make it more flexible and easier to use, I propose we split the API into 3 layers:
This layer is our top layer; it is fully public and used by the CLI, JSPUI & XMLUI, .... Every service in this layer should be a stateless singleton interface. The services can be subdivided into 2 categories: database-based services and business logic services.
The database-based services are used to interact with the database as well as to provide business logic methods for the database objects. Every table in DSpace requires a single service linked to it. The services themselves do not perform any database queries; they delegate all database queries to the Database Access Layer discussed below. For example, the service for the Item class would contain methods for moving an item between collections, adjusting the policies, withdrawing, ...; these methods would be executed by the service itself, while the database access methods like create, update, find, ... would add business logic to check the authorisations but delegate the actual database calls to the database access layer.
The business logic services replace the old static DSpace Manager classes. For example, the AuthorizeManager has been replaced with the AuthorizeService, which contains the same methods and all the old code, except that the AuthorizeService is an interface and a concrete (configurable) implementation class contains all the code. This class can be extended/replaced by another implementation without ever having to refactor the callers of the class.
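The interface/implementation split behind this can be sketched in plain Java. All class names below are made up for illustration; the real AuthorizeService has a much larger API and is wired up via Spring:

```java
// Illustrative sketch of the manager-to-service move: callers depend only
// on an interface, so a site can swap or extend the implementation.
interface AuthorizeCheck {
    boolean isAdmin(String eperson);
}

// Default implementation (stands in for the out-of-the-box service).
class DefaultAuthorizeCheck implements AuthorizeCheck {
    @Override
    public boolean isAdmin(String eperson) {
        return "admin".equals(eperson);
    }
}

// A local override only touches the one method it cares about.
class CustomAuthorizeCheck extends DefaultAuthorizeCheck {
    @Override
    public boolean isAdmin(String eperson) {
        // pretend we consulted an external directory instead
        return eperson.endsWith("@admins.example.org") || super.isAdmin(eperson);
    }
}

public class ServiceSwapDemo {
    public static void main(String[] args) {
        // Swapping the implementation (normally a one-line Spring config
        // change) never requires refactoring the calling code.
        AuthorizeCheck service = new CustomAuthorizeCheck();
        System.out.println(service.isAdmin("bob@admins.example.org"));
    }
}
```

With the old static manager classes, the same customisation required overriding the entire class in the additions module.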
As discussed above, continuing to work with a static DatabaseManager class that contains "if postgres else oracle" code will lead to loads of bugs; therefore I propose we replace this "static class" with a new layer.
This layer is called the Data Access Object layer (DAO layer). It contains no business logic; its sole responsibility is to provide database access for each table (CRUD: create/retrieve/update/delete). This layer consists entirely of interfaces which are automatically linked to implementations using Spring. The reason for interfaces is quite simple: by using interfaces we can replace our entire DAO layer without having to alter our service layer. So if one would like to use another ORM framework than the default, all that needs to be done is implement all the interfaces and configure them; no changes to existing code are required.
Each database table has its own DAO interface, as opposed to the old single DatabaseManager class; this is to support object-specific CRUD methods. Our MetadataField DAO interface has a findByElement() method, for example, while an EPerson DAO interface would require a findByEmail() retrieval method. These methods are then called by the service layer, which has similar methods.
In order to avoid queries to the DAO layer from various points in the code, each DAO can only be utilised from a single service. Linking a DAO to multiple services would result in the messy separation we are trying to avoid.
The name of each class must end with DAO. A single database table can only be queried from a single DAO instance.
Each table in the database that is not a linking table (collection2item, community2collection) is represented by a database object. This object contains no business logic and has setters/getters for all columns (these may be package protected if business logic is required). Each data object has its own DAO interface, as opposed to the old DatabaseManager class, to support object-specific CRUD methods.
Below is a schematic representation of how a refactored database-based object class would look (using the simple MetadataField class as an example):
The public layer's objects are the only objects that can be accessed from other classes; the internal objects' only usage is documented below. This way the internal usage can change in its entirety without ever affecting the DSpace classes that use the API.
Nevertheless, we are convinced the choice for Hibernate is not necessarily a permanent one, since the proposed architecture easily allows replacing it with another backend ORM implementation or even a JDBC-based one.
Each non-linking database table in DSpace must be represented by a single class containing getters & setters for the columns. Linked objects can also be represented by getters and setters; below is an example of the database object representation of MetadataField. These classes cannot contain any business logic.
@Entity
@Table(name = "metadatafieldregistry", schema = "public")
public class MetadataField {

    @Id
    @Column(name = "metadata_field_id")
    @GeneratedValue(strategy = GenerationType.AUTO, generator = "metadatafieldregistry_seq")
    @SequenceGenerator(name = "metadatafieldregistry_seq", sequenceName = "metadatafieldregistry_seq", allocationSize = 1)
    private Integer id;

    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "metadata_schema_id", nullable = false)
    private MetadataSchema metadataSchema;

    @Column(name = "element", length = 64)
    private String element;

    @Column(name = "qualifier", length = 64)
    private String qualifier = null;

    @Column(name = "scope_note")
    @Lob
    private String scopeNote;

    protected MetadataField() {
    }

    /** Get the metadata field id. */
    public int getFieldID() {
        return id;
    }

    /** Get the element name. */
    public String getElement() {
        return element;
    }

    /** Set the element name. */
    public void setElement(String element) {
        this.element = element;
    }

    /** Get the qualifier. */
    public String getQualifier() {
        return qualifier;
    }

    /** Set the qualifier. */
    public void setQualifier(String qualifier) {
        this.qualifier = qualifier;
    }

    /** Get the scope note. */
    public String getScopeNote() {
        return scopeNote;
    }

    /** Set the scope note. */
    public void setScopeNote(String scopeNote) {
        this.scopeNote = scopeNote;
    }

    /** Get the schema record. */
    public MetadataSchema getMetadataSchema() {
        return metadataSchema;
    }

    /** Set the schema record. */
    public void setMetadataSchema(MetadataSchema metadataSchema) {
        this.metadataSchema = metadataSchema;
    }

    /**
     * Return true if the object passed in represents the same MetadataField
     * as this object, false otherwise.
     */
    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final MetadataField other = (MetadataField) obj;
        if (this.getFieldID() != other.getFieldID()) {
            return false;
        }
        if (!getMetadataSchema().equals(other.getMetadataSchema())) {
            return false;
        }
        return true;
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 47 * hash + getFieldID();
        hash = 47 * hash + getMetadataSchema().getSchemaID();
        return hash;
    }

    public String toString(char separator) {
        if (qualifier == null) {
            return getMetadataSchema().getName() + separator + element;
        } else {
            return getMetadataSchema().getName() + separator + element + separator + qualifier;
        }
    }

    @Override
    public String toString() {
        return toString('_');
    }
}
This example demonstrates the use of annotations to represent a database table. With hardly any knowledge of Hibernate, you can get a quick grasp of how the table is laid out just by looking at the variables at the top. This document will not expand on the annotations used; the Hibernate documentation is far more suitable here: https://docs.jboss.org/hibernate/stable/annotations/reference/en/html/entity.html.
Another improvement demonstrated above is that linking of objects can now be done using annotations. Just take a look at the metadataSchema variable: it can be obtained via its getter, and the database query to retrieve the schema will NOT be executed until we actually request the schema through that getter. This makes linking of objects a lot easier, since no hand-written queries are required.
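The lazy-fetch behaviour that FetchType.LAZY gives us can be mimicked in plain Java with a deferred loader. This is only an illustrative sketch of the concept (Hibernate actually uses generated proxies, not this class):

```java
import java.util.function.Supplier;

// Illustrative sketch of lazy loading: the "query" only runs when the
// getter is first called, mirroring what FetchType.LAZY provides.
class LazyRef<T> {
    private final Supplier<T> loader;
    private T value;
    private boolean loaded;

    LazyRef(Supplier<T> loader) {
        this.loader = loader;
    }

    T get() {
        if (!loaded) {
            value = loader.get(); // stands in for the deferred database query
            loaded = true;
        }
        return value;
    }

    boolean isLoaded() {
        return loaded;
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        // Pretend the supplier is a database hit fetching the schema row.
        LazyRef<String> schema = new LazyRef<>(() -> "dc");
        System.out.println(schema.isLoaded()); // no query has run yet
        System.out.println(schema.get());      // triggers the "query"
    }
}
```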
Each database entity class must be added to the hibernate.cfg.xml files; these files can be found in the resources directory in the additions and test sections of dspace-api. An excerpt from the file is displayed below; it shows how entities are configured.
<mapping class="org.dspace.content.Item"/>
<mapping class="org.dspace.content.MetadataField"/>
<mapping class="org.dspace.content.MetadataSchema"/>
<mapping class="org.dspace.content.MetadataValue"/>
<mapping class="org.dspace.content.Site"/>
The constructor of an entity can never be public; this is to prevent the creation of an entity from the UI layers (which is something we do not want). The service creates the entity, calls the setters and getters that are required, and then calls upon the database access layer to create the object.
The database entities are the only classes that are stateful. By default all variables in an entity are linked to database columns, but sometimes the need arises to store other information in an object that we don't want to write to the database. An example would be tracking whether the metadata of an item has been modified, since if it has been modified and we trigger an update, we want to fire a "Metadata_Modified" event. Such a "variable" would be useless inside a database table. To accomplish this we can use the Transient annotation: just add @Transient above a variable and it can hold state that will never be written to the database. Below is an example of how this looks for the item's metadataModified flag.
@Transient
private boolean metadataModified = false;

boolean isMetadataModified() {
    return metadataModified;
}

void setMetadataModified(boolean metadataModified) {
    this.metadataModified = metadataModified;
}
Each database object must have a single data access object; this object is responsible for all Create/Read/Update/Delete calls made to the database. The DAO is always an interface, so the implementation can change without ever having to modify the service class that makes use of the DAO.
Since each database object requires its own DAO, this results in a lot of "duplicate" methods. A GenericDAO interface was created with basic support for the CRUD methods; it is recommended that every DAO extend this interface. The current version of this interface is displayed below.
public interface GenericDAO<T> {

    public T create(Context context, T t) throws SQLException;

    public void save(Context context, T t) throws SQLException;

    public void delete(Context context, T t) throws SQLException;

    public List<T> findAll(Context context, Class<T> clazz) throws SQLException;

    public T findUnique(Context context, String query) throws SQLException;

    public T findByID(Context context, Class clazz, int id) throws SQLException;

    public T findByID(Context context, Class clazz, UUID id) throws SQLException;

    public List<T> findMany(Context context, String query) throws SQLException;
}
The generics ensure that the DAO classes extending this interface cannot use these methods for other classes. Below is an example of the interface for the metadatafieldregistry table; it extends GenericDAO and adds its own table-specific methods.
public interface MetadataFieldDAO extends GenericDAO<MetadataField> {

    public MetadataField find(Context context, int metadataFieldId, MetadataSchema metadataSchema,
                              String element, String qualifier) throws SQLException;

    public MetadataField findByElement(Context context, MetadataSchema metadataSchema,
                                       String element, String qualifier) throws SQLException;

    public MetadataField findByElement(Context context, String metadataSchema,
                                       String element, String qualifier) throws SQLException;

    public List<MetadataField> findAllInSchema(Context context, MetadataSchema metadataSchema)
        throws SQLException;
}
The GenericDAO is extended using the MetadataField type; this means that the create, save, and delete methods from the GenericDAO can only be used with an instance of MetadataField.
It could very well be that a certain entity doesn't require any specialised methods, but we still require an implementation class for each DAO in order to get the generic methods.
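The generic-DAO pattern can be demonstrated with a self-contained, in-memory sketch. All names below are made up for illustration; the real DSpace DAOs delegate to Hibernate rather than a map:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory sketch of the GenericDAO idea.
interface SimpleGenericDAO<T> {
    T create(int id, T t);
    T findByID(int id);
    List<T> findAll();
}

// One shared implementation of the "duplicate" CRUD methods.
abstract class AbstractInMemoryDAO<T> implements SimpleGenericDAO<T> {
    private final Map<Integer, T> store = new HashMap<>();

    @Override
    public T create(int id, T t) {
        store.put(id, t);
        return t;
    }

    @Override
    public T findByID(int id) {
        return store.get(id);
    }

    @Override
    public List<T> findAll() {
        return new ArrayList<>(store.values());
    }
}

// An entity with no specialised queries still gets a (trivial) DAO class,
// purely so the generic methods are available for that type.
class SiteDAOImpl extends AbstractInMemoryDAO<String> {
}

public class DaoDemo {
    public static void main(String[] args) {
        SiteDAOImpl dao = new SiteDAOImpl();
        dao.create(1, "main-site");
        System.out.println(dao.findByID(1));
    }
}
```

The empty SiteDAOImpl body is the whole point: the abstract base supplies everything, and the concrete class exists only to bind the generic type.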
Since a developer doesn't want to duplicate code to implement the "generic" methods, a helper class was created so that the implementation of the generic methods resides in one place. For the Hibernate DAO implementation this class is named AbstractHibernateDAO. It implements GenericDAO, provides all the generic methods, and also comes with a few additional helper methods. These helpers are shortcuts so you don't have to write the same couple of lines of code in each method; some examples: returning a type-cast list from a query, returning an iterator from a query, .... Below is the current implementation of AbstractHibernateDAO (for reference only).
public abstract class AbstractHibernateDAO<T> implements GenericDAO<T> {

    @Override
    public T create(Context context, T t) throws SQLException {
        getHibernateSession(context).save(t);
        return t;
    }

    @Override
    public void save(Context context, T t) throws SQLException {
        getHibernateSession(context).save(t);
    }

    protected Session getHibernateSession(Context context) throws SQLException {
        return ((Session) context.getDBConnection().getSession());
    }

    @Override
    public void delete(Context context, T t) throws SQLException {
        getHibernateSession(context).delete(t);
    }

    @Override
    public List<T> findAll(Context context, Class<T> clazz) throws SQLException {
        return list(createCriteria(context, clazz));
    }

    @Override
    public T findUnique(Context context, String query) throws SQLException {
        @SuppressWarnings("unchecked")
        T result = (T) createQuery(context, query).uniqueResult();
        return result;
    }

    @Override
    public T findByID(Context context, Class clazz, UUID id) throws SQLException {
        @SuppressWarnings("unchecked")
        T result = (T) getHibernateSession(context).get(clazz, id);
        return result;
    }

    @Override
    public T findByID(Context context, Class clazz, int id) throws SQLException {
        @SuppressWarnings("unchecked")
        T result = (T) getHibernateSession(context).get(clazz, id);
        return result;
    }

    @Override
    public List<T> findMany(Context context, String query) throws SQLException {
        @SuppressWarnings("unchecked")
        List<T> result = (List<T>) createQuery(context, query).list();
        return result;
    }

    public Criteria createCriteria(Context context, Class<T> persistentClass) throws SQLException {
        return getHibernateSession(context).createCriteria(persistentClass);
    }

    public Criteria createCriteria(Context context, Class<T> persistentClass, String alias) throws SQLException {
        return getHibernateSession(context).createCriteria(persistentClass, alias);
    }

    public Query createQuery(Context context, String query) throws SQLException {
        return getHibernateSession(context).createQuery(query);
    }

    public List<T> list(Criteria criteria) {
        @SuppressWarnings("unchecked")
        List<T> result = (List<T>) criteria.list();
        return result;
    }

    public List<T> list(Query query) {
        @SuppressWarnings("unchecked")
        List<T> result = (List<T>) query.list();
        return result;
    }

    public T uniqueResult(Criteria criteria) {
        @SuppressWarnings("unchecked")
        T result = (T) criteria.uniqueResult();
        return result;
    }

    public T uniqueResult(Query query) {
        @SuppressWarnings("unchecked")
        T result = (T) query.uniqueResult();
        return result;
    }

    public Iterator<T> iterate(Query query) {
        @SuppressWarnings("unchecked")
        Iterator<T> result = (Iterator<T>) query.iterate();
        return result;
    }

    public int count(Criteria criteria) {
        return ((Long) criteria.setProjection(Projections.rowCount()).uniqueResult()).intValue();
    }

    public int count(Query query) {
        return ((Long) query.uniqueResult()).intValue();
    }

    public long countLong(Criteria criteria) {
        return (Long) criteria.setProjection(Projections.rowCount()).uniqueResult();
    }
}
With our helper class in place, creating & implementing the MetadataFieldDAO class just becomes a matter of implementing the MetadataField-specific methods. The MetadataFieldDAOImpl class was created; it extends AbstractHibernateDAO and implements the MetadataFieldDAO interface. Below is the current implementation:
public class MetadataFieldDAOImpl extends AbstractHibernateDAO<MetadataField> implements MetadataFieldDAO {

    @Override
    public MetadataField find(Context context, int metadataFieldId, MetadataSchema metadataSchema,
                              String element, String qualifier) throws SQLException {
        Criteria criteria = createCriteria(context, MetadataField.class);
        criteria.add(
            Restrictions.and(
                Restrictions.not(Restrictions.eq("id", metadataFieldId)),
                Restrictions.eq("metadataSchema", metadataSchema),
                Restrictions.eq("element", element),
                Restrictions.eqOrIsNull("qualifier", qualifier)
            )
        );
        return uniqueResult(criteria);
    }

    @Override
    public MetadataField findByElement(Context context, MetadataSchema metadataSchema,
                                       String element, String qualifier) throws SQLException {
        Criteria criteria = createCriteria(context, MetadataField.class);
        criteria.add(
            Restrictions.and(
                Restrictions.eq("metadataSchema", metadataSchema),
                Restrictions.eq("element", element),
                Restrictions.eqOrIsNull("qualifier", qualifier)
            )
        );
        return uniqueResult(criteria);
    }

    @Override
    public MetadataField findByElement(Context context, String metadataSchema,
                                       String element, String qualifier) throws SQLException {
        Criteria criteria = createCriteria(context, MetadataField.class);
        criteria.createAlias("metadataSchema", "s");
        criteria.add(
            Restrictions.and(
                Restrictions.eq("s.name", metadataSchema),
                Restrictions.eq("element", element),
                Restrictions.eqOrIsNull("qualifier", qualifier)
            )
        );
        return uniqueResult(criteria);
    }

    @Override
    public List<MetadataField> findAllInSchema(Context context, MetadataSchema metadataSchema)
        throws SQLException {
        // Get all the metadatafieldregistry rows in the given schema
        Criteria criteria = createCriteria(context, MetadataField.class);
        criteria.add(Restrictions.eq("metadataSchema", metadataSchema));
        return list(criteria);
    }
}
The DAO implementations in DSpace use the Hibernate Criteria object to construct their queries. This makes for easily readable code: even with only a basic understanding of SQL you can easily write queries. Read more about Criteria in the Hibernate documentation.
Now that we have a DAO implementation, we also need to configure it in Spring; this is done in the [dspace.dir]/config/spring/api/core-dao-services.xml file. It is mandatory to keep the DAO a singleton, so the scope attribute of a bean must be absent (it defaults to singleton) or set to singleton. Below is the configuration of the MetadataFieldDAO implementation shown above.
<bean class="org.dspace.content.dao.impl.MetadataFieldDAOImpl"/>
The service layer is where all our business logic resides; each service consists of an interface and an implementation class. Every stateless class in the DSpace API should be a service instead of a class consisting of static methods.
The service layer encompasses all business logic for our database objects. For example, we used to have Item.findAll(context); now we have itemService.findAll(context) (where itemService is an interface and findAll is not a static method). Each database object has exactly one service attached to it, which should contain all the business logic and call upon a DAO for its database access. Each of these services must have an implementation configured in a Spring configuration file.
Below is an example of the MetadataFieldService; it clearly specifies the "business logic" methods one would expect, including create, find, update and delete.
public interface MetadataFieldService {

    /**
     * Creates a new metadata field.
     */
    public MetadataField create(Context context, MetadataSchema metadataSchema, String element,
                                String qualifier, String scopeNote)
        throws IOException, AuthorizeException, SQLException, NonUniqueMetadataException;

    /**
     * Find the field corresponding to the given numeric ID.
     * The ID is a database key internal to DSpace.
     */
    public MetadataField find(Context context, int id) throws SQLException;

    /**
     * Retrieves the metadata field from the database.
     * The qualifier may be ANY or null.
     */
    public MetadataField findByElement(Context context, MetadataSchema metadataSchema,
                                       String element, String qualifier) throws SQLException;

    /**
     * Retrieve all metadata field types from the registry.
     */
    public List<MetadataField> findAll(Context context) throws SQLException;

    /**
     * Return all metadata fields that are found in a given schema.
     */
    public List<MetadataField> findAllInSchema(Context context, MetadataSchema metadataSchema)
        throws SQLException;

    /**
     * Update the metadata field in the database.
     */
    public void update(Context context, MetadataField metadataField)
        throws SQLException, AuthorizeException, NonUniqueMetadataException, IOException;

    /**
     * Delete the metadata field.
     */
    public void delete(Context context, MetadataField metadataField)
        throws SQLException, AuthorizeException;
}
The DAO layer discussed above should be a completely internal layer; it should never be exposed outside of the service layer. If a certain service requires a DAO, the recommended way to make it available to the service implementation is to autowire it. Below is an excerpt from the implementation class of the MetadataFieldService which shows how a service would be implemented.
public class MetadataFieldServiceImpl implements MetadataFieldService {

    /** log4j logger */
    private static Logger log = Logger.getLogger(MetadataFieldServiceImpl.class);

    @Autowired(required = true)
    protected MetadataFieldDAO metadataFieldDAO;

    @Autowired(required = true)
    protected AuthorizeService authorizeService;

    @Override
    public MetadataField create(Context context, MetadataSchema metadataSchema, String element,
                                String qualifier, String scopeNote)
        throws IOException, AuthorizeException, SQLException, NonUniqueMetadataException {
        // Check authorisation: Only admins may create DC types
        if (!authorizeService.isAdmin(context)) {
            throw new AuthorizeException(
                "Only administrators may modify the metadata registry");
        }

        // Ensure the element and qualifier are unique within a given schema.
        if (hasElement(context, -1, metadataSchema, element, qualifier)) {
            throw new NonUniqueMetadataException("Please make " + element + "." + qualifier
                + " unique within schema #" + metadataSchema.getSchemaID());
        }

        // Create a table row and update it with the values
        MetadataField metadataField = new MetadataField();
        metadataField.setElement(element);
        metadataField.setQualifier(qualifier);
        metadataField.setScopeNote(scopeNote);
        metadataField.setMetadataSchema(metadataSchema);
        metadataField = metadataFieldDAO.create(context, metadataField);
        metadataFieldDAO.save(context, metadataField);

        log.info(LogManager.getHeader(context, "create_metadata_field",
            "metadata_field_id=" + metadataField.getFieldID()));
        return metadataField;
    }

    @Override
    public MetadataField find(Context context, int id) throws SQLException {
        return metadataFieldDAO.findByID(context, MetadataField.class, id);
    }
}
This service class has an autowired MetadataFieldDAO available; this autowired field should always reference an interface and never an implementation class. This way we can easily swap the DAO classes without having to touch the business logic.

When taking a closer look at the code, it also becomes clear why we now have this 3-tier API. For example, the "create" & "delete" methods check authorizations (with an autowired authorization service) but leave the actual creation/deletion to the DAO implementation.
What is also important to note is that these services are all "singletons": only one instance of each service exists in memory. Changes to the objects are handled by the "data objects".
When working with interfaces and their implementations it is also important to make all internal service methods protected instead of private. This makes it easier to extend an existing implementation into a local implementation, since extending classes cannot use a private method.
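As a minimal, self-contained sketch (the GreetingService classes below are hypothetical, not part of DSpace), an extending class can refine a protected helper method, which would be impossible if that method were private:

```java
// Hypothetical example (not DSpace classes): why internal service
// methods should be protected rather than private.
public class VisibilityDemo {

    interface GreetingService {
        String greet(String name);
    }

    static class GreetingServiceImpl implements GreetingService {
        @Override
        public String greet(String name) {
            return decorate(name);
        }

        // protected, so a local implementation can refine it;
        // had this been private, the subclass below could not override it
        protected String decorate(String name) {
            return "Hello, " + name;
        }
    }

    // A "local implementation" extending the stock one
    static class PoliteGreetingServiceImpl extends GreetingServiceImpl {
        @Override
        protected String decorate(String name) {
            return super.decorate(name) + ", welcome back";
        }
    }

    public static void main(String[] args) {
        GreetingService service = new PoliteGreetingServiceImpl();
        System.out.println(service.greet("DSpace"));
    }
}
```

Only the protected helper is overridden; the public greet() contract and the rest of the original implementation are reused untouched.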
On top of this, all of the "static method Managers" are also replaced by services, so the AuthorizeManager is now AuthorizeService, BitstreamStorageManager is now BitstreamStorageService, and so on.
These services do not make use of a DAO. If such a service is required to make changes to a certain database object (for example, the BitstreamStorageService will want access to the Bitstream), we autowire the service for that object into the BitstreamStorageService and use the available methods of the BitstreamService.
A great example of an issue that has greatly benefited from this change is the workflow. Before the services, the following piece of code was commonly used to determine which workflow to use:
if (ConfigurationManager.getProperty("workflow", "workflow.framework").equals("xmlworkflow")) {
    XmlWorkflowManager.start(c, wi);
} else {
    WorkflowManager.start(c, wi);
}
This way of working can be greatly simplified now; see below for the new code.
The old WorkflowManager code has now been moved to BasicWorkflowManager, which makes it easier to identify the workflows. It allows us to create a WorkflowService interface from which both the BasicWorkflowService & XmlWorkflowService inherit.
WorkflowServiceFactory.getInstance().getWorkflowService().start(context, workspaceItem);
This is a much cleaner way of working and really shows off the benefits of using services instead of static managers: replacing a service comes down to changing a single class reference in a Spring file!
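The same idea can be sketched without Spring (all class names below are illustrative, not actual DSpace code): a simple name-to-instance registry stands in for the Spring service manager, and swapping the workflow implementation is a single registration change while the calling code stays untouched:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not DSpace code): callers depend only on the
// WorkflowService interface; which implementation they get is decided
// in one place, much like a bean definition in a Spring file.
public class ServiceSwapDemo {

    interface WorkflowService {
        String start(String item);
    }

    static class BasicWorkflowService implements WorkflowService {
        public String start(String item) { return "basic:" + item; }
    }

    static class XmlWorkflowService implements WorkflowService {
        public String start(String item) { return "xml:" + item; }
    }

    // Stand-in for the Spring service manager: a name -> instance registry
    static final Map<String, Object> REGISTRY = new HashMap<>();

    static WorkflowService getWorkflowService() {
        return (WorkflowService) REGISTRY.get("workflowService");
    }

    public static void main(String[] args) {
        // Swapping the workflow system is a single registration change...
        REGISTRY.put("workflowService", new BasicWorkflowService());
        System.out.println(getWorkflowService().start("item-1"));

        REGISTRY.put("workflowService", new XmlWorkflowService());
        // ...while the calling code stays identical
        System.out.println(getWorkflowService().start("item-1"));
    }
}
```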
When using a service you need to request the bean for it. As a consequence, when you require a service inside another bean you can easily autowire it in. However, using services in non-bean classes would potentially require a lot of code duplication, as demonstrated in the example below:
new DSpace().getServiceManager().getServiceByName("???????", MetadataFieldService.class)
This forces the user to remember and then look up a service by name, which isn't that easy to use. To make the services easier to use, each package in DSpace that contains services comes with its own factory. As an example, you can retrieve the MetadataFieldService in the content package by calling:
ContentServiceFactory.getInstance().getMetadataFieldService()
This way, all you need to remember to get a certain service is that the factory class has the format {package}ServiceFactory. Each of these factories comes with a static getInstance() method. The factory classes are split into an abstract class, which has the getInstance() method and abstract methods for all the service getters, and an implementation. Below is an example of the AuthorizeServiceFactory class:
public abstract class AuthorizeServiceFactory {

    public abstract AuthorizeService getAuthorizeService();

    public abstract ResourcePolicyService getResourcePolicyService();

    public static AuthorizeServiceFactory getInstance() {
        return new DSpace().getServiceManager()
                .getServiceByName("authorizeServiceFactory", AuthorizeServiceFactory.class);
    }
}
public class AuthorizeServiceFactoryImpl extends AuthorizeServiceFactory {

    @Autowired(required = true)
    private AuthorizeService authorizeService;
    @Autowired(required = true)
    private ResourcePolicyService resourcePolicyService;

    @Override
    public AuthorizeService getAuthorizeService() {
        return authorizeService;
    }

    @Override
    public ResourcePolicyService getResourcePolicyService() {
        return resourcePolicyService;
    }
}
The introduction of Hibernate led to the following changes to the DSpace domain model.
Hibernate does not support the way DSpace currently relies on "type" and "id" as a compound identifier used to link tables together. Hibernate requires a single identifier to link objects like metadata, handles, resource policies, ... to a single DSpaceObject implementation. To support this behavior a new table "DSpaceObject" was created with only a single column: a UUID. All objects inheriting from DSpaceObject, such as Community, Collection, Item, ..., no longer have their own identifier column but link to the one used in DSpaceObject. This has several advantages:
The old integer-based identifiers will still be available as a read-only property, but these are not updated for new rows and should never be used for linking.
The old identifiers can easily be retrieved by using the getLegacyId() getter. Additionally, all current DSpace object services have a method named findByIdOrLegacyId(Context context, String id), so if a certain part of the code doesn't know which type of identifier is used, a developer can still retrieve the object it belongs to (without having to duplicate code that checks whether an identifier is a UUID or an Integer). These methods are used (among others) by the AIP export to ensure backwards compatibility for exported items. Below are some code examples of how you can still use the legacy identifier.
// Find an eperson by using the legacy identifier
ePersonService.findByLegacyId(context, oldEPersonId);

// Find by old legacy identifier OR new identifier (can be used for backwards compatibility)
ePersonService.findByIdOrLegacyId(context, id);
Since the DSpaceObject class now represents a database class, each class extending from this object must also be a database object. Therefore, the "Site" object, representing your instance of DSpace, also becomes a table in the database. The site will automatically be created when one is absent, and the handle with suffix "0" will be automatically assigned to it. An additional benefit of having the site as an object is that we can assign metadata to the site object. This is not yet supported in the code, but the possibility is there.
Below is a code example of how you can retrieve the site object:
// Find the site
siteService.findSite(context);
Since it is no longer possible to use the old static DSpaceObject methods like find(context, type, id), getAdminObject(action), ..., a generic DSpaceObjectService can be retrieved by passing along a type. This service can then be used to perform the old static methods; below are some code examples:
// Retrieve a DSpace object service by using a DSpaceObject
DSpaceObjectService dspaceObjectService = ContentServiceFactory.getInstance().getDSpaceObjectService(dso);

// Retrieve a DSpace object service by using a type
DSpaceObjectService dspaceObjectService = ContentServiceFactory.getInstance().getDSpaceObjectService(type);

// Retrieve an object by identifier
dspaceObjectService.find(context, uuid);

// Retrieve the parent object
dspaceObjectService.getParentObject(context, dso);

// One line replacement for DSpaceObject.find(context, type, id);
ContentServiceFactory.getInstance().getDSpaceObjectService(type).find(context, uuid);
In the current DSpace API, a collection can only be created by calling createCollection() on a community object, because the actual create method in the Collection class could only be accessed from inside the package. The reasoning behind this implementation was to prevent a developer from creating a collection without a community. This behaviour was no longer possible with the service-based API, since an interface can only have public methods. As a result, the "creation" of a certain DSpaceObject has been moved to the service of that DSpaceObject. Therefore, creating a collection no longer requires you to call on a community; instead you use the collectionService.create(context, community) method. The parameters of this method ensure that a community is still required to create a collection. Below are some more code examples:
// Create a collection
collectionService.create(context, community);

// Create a bundle
bundleService.create(context, item, name);

// Create an item; a check is made inside the service to ensure that a workspace item can only be linked to a single item.
itemService.create(context, workspaceItem);
Metadatum value class removed
The Metadatum class has been removed and is replaced by the MetadataValue class. The Metadatum class used to be a value representation detached from its original MetadataValue object; it was the only class in DSpace that worked in this manner, and it has been removed. The MetadataValue class is linked to the metadatavalue table and works like all other linked database objects. This change was important to support lazy loading of metadata values: these will only be loaded once we request them.
Developers should keep in mind that when adding, clearing, or modifying metadata, MetadataValue rows will be created. Since these values are linked to a certain metadata field, that metadata field needs to be present before its value is added to a DSpace object.
Although the ItemIterator class has been removed, the functionality to iterate over a list of items remains; instead of a custom iterator class the service now returns an Iterator<Item>. The querying is done similarly to the old way: an initial query retrieves only the identifiers, and each next() call queries the item table to retrieve the next item from the iterator.
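The two-step querying described above can be sketched in plain Java (this is an illustration, not the actual DSpace implementation; a String stands in for Item, and a simple function stands in for the item-table query):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.IntFunction;

// Sketch of the Iterator<Item> behaviour described above: an initial
// query yields only identifiers, and each next() call loads one row.
public class LazyItemIterator implements Iterator<String> {

    private final Iterator<Integer> ids;       // result of the initial id-only query
    private final IntFunction<String> loader;  // stand-in for "query the item table"

    public LazyItemIterator(List<Integer> ids, IntFunction<String> loader) {
        this.ids = ids.iterator();
        this.loader = loader;
    }

    @Override
    public boolean hasNext() {
        return ids.hasNext();
    }

    @Override
    public String next() {
        // the actual row is only fetched here, one item at a time
        return loader.apply(ids.next());
    }

    public static void main(String[] args) {
        Iterator<String> items =
                new LazyItemIterator(Arrays.asList(1, 2, 3), id -> "item-" + id);
        while (items.hasNext()) {
            System.out.println(items.next());
        }
    }
}
```

The memory benefit is the same as in the real API: only the identifiers are held up front, and each row is materialised on demand.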
In the old DSpace API, each time a new Context object was created using its constructor, a new database connection was initialized. Hibernate uses a different principle: it shares a single database connection over an entire thread, so no matter how many new Context() calls you have in a single thread, only one database connection will remain open.
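A rough, self-contained illustration of that per-thread sharing (a simplification; none of these classes are actual DSpace or Hibernate code) could look like this:

```java
// Simplified illustration of "one connection per thread": every
// "context" created on the same thread sees the same underlying
// connection object, while another thread gets its own.
public class ThreadSharedConnectionDemo {

    static class Connection { } // stand-in for a real database connection

    static final ThreadLocal<Connection> CONNECTION =
            ThreadLocal.withInitial(Connection::new);

    static class Context {
        final Connection connection = CONNECTION.get(); // reuse the thread's connection
    }

    public static void main(String[] args) throws InterruptedException {
        Context a = new Context();
        Context b = new Context();
        System.out.println(a.connection == b.connection); // same thread, same connection

        Thread other = new Thread(() -> {
            Context c = new Context();
            System.out.println(c.connection == a.connection); // different thread, different connection
        });
        other.start();
        other.join();
    }
}
```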
Some of the older DSpace features could no longer be supported in Hibernate; for example, the database-based browse system has been completely removed from the codebase. Another feature that will be removed is the database-based OAI approach, since it is not supported in Hibernate.
DSpace supports two workflow systems: the "original" workflow system, known as workflow, and the configurable XML workflow. To avoid confusion, the old WorkflowManager & WorkflowItem classes have been renamed to BasicWorkflowService & BasicWorkflowItem. Below is a schematic overview of the workflow structure in the new service layout.
When a developer wants to use the workflow from a section of the code that isn't workflow specific, there is no longer a need to add code for both workflows. The general WorkflowService can be used alongside the generic WorkflowItem; the actual implementation of these classes depends purely on configuration. Below are some code excerpts that demonstrate this behaviour.
// Starting a workflow using a workspace item
WorkflowItem workflowItem;
if (useWorkflowSendEmail) {
    workflowItem = workflowService.start(context, workspaceItem);
} else {
    workflowItem = workflowService.startWithoutNotify(context, workspaceItem);
}
Not displayed in the graph above is the fact that BasicWorkflowItem & XmlWorkflowItem each have a service that contains the business logic methods for these items; a schematic representation of the WorkflowItemService view is displayed below.
By using the additional WorkflowItemService interface on top of our 2 workflow systems, developers can interact with workflow items without even knowing the actual implementation; some code examples can be found below.
// Find all workflow items
List<WorkflowItem> workflowItems = workflowItemService.findAll(context);

// Find a workflowItem by item & delete the workflowItem
WorkflowItem workflowItem = workflowItemService.findByItem(context, item);
workflowItemService.delete(context, workflowItem);
As displayed above, a developer can delete workflow items without ever knowing which workflow service is in use. If the workflow implementation were to change, it wouldn't matter to the code that is using the service.
By splitting our API into 3 layers we now have a clear separation of responsibilities. The service layer contains the business logic, the database access layer handles all database queries, and the data objects represent the database tables in a clear way. Making a change becomes as easy as creating a new class and extending an existing one, instead of overwriting a single class containing thousands of lines of code to make a small adjustment.
As discussed above, by having a single database access object linked to a single service there is only one way to access a certain table, and that is through a service; below is a schematic representation of the before and after.
Hibernate abstracts the database queries away from the developer. For simple queries you can use criteria queries (see for example http://www.tutorialspoint.com/hibernate/hibernate_criteria_queries.htm for a quick tutorial). For more complex queries a developer can use the Hibernate Query Language (HQL) (quick tutorial: http://www.tutorialspoint.com/hibernate/hibernate_query_language.htm). Although it might take some time to get used to, there are no longer 2 databases to test against (or dirty hacks to perform to get the same query to work for both PostgreSQL & Oracle).
The alternate workflow example has been discussed at length above; see the "Workflow system refactoring" chapter.
By creating a new class in the additions module and then extending an existing class, methods can easily be overridden or adjusted without ever having to overwrite or alter the original class. Examples of how to tackle this can be found in the tutorial sections below.
The entire DSpace context caching mechanism has been removed. This means the Context class is no longer responsible for caching certain objects; this responsibility has been entirely delegated to the Hibernate framework. Hibernate allows caching to work at the class level, so each frequently used object can easily be configured to be cached, which is much more flexible than the old DSpace way. For more information about Hibernate caching please consult the Hibernate documentation: http://www.tutorialspoint.com/hibernate/hibernate_caching.htm.
All DSpace modules have been refactored. They all compile, and all existing unit/integration tests pass.
Additional manual testing of individual interfaces (XMLUI, JSPUI, REST, SWORDv1 and v2, RDF) is necessary to ensure that all features still function properly.
As mentioned, all unit/integration tests pass. However, DSpace does not have full test coverage (in fact most tests reside in the API itself). Some basic testing of interfaces has already been performed, but more will be necessary prior to 6.0.
The initial refactoring of the dspace-api is just a first step in a longer process; some areas that I believe could still use some improvement:
TODO: Create tutorial
TODO: Create tutorial
TODO: Create tutorial
In this tutorial we will be adding the possibility to attach a bookmark for any given item to an existing eperson. This provides the functionality to return to important items without having to remember where to find them, and without having to bookmark them in the browser itself (browser bookmarks differ from person to person, so if a computer switch takes place for some reason, those bookmarks would be lost).
Bookmarks linked to a specific EPerson also open up a whole range of possibilities, such as adding a bookmark for all users in a certain group, identifying "more important" items based on the number of bookmarks that point to them, etc.
To be able to save bookmarks, a new database table is required. For this example, we added the following table and its fields (in a Flyway file; more on that after the actual creation of the table).
CREATE TABLE bookmark
(
  bookmark_id INTEGER PRIMARY KEY NOT NULL,
  title VARCHAR(50),
  date_created DATE,
  creator UUID REFERENCES EPERSON(uuid) NOT NULL,
  item UUID REFERENCES ITEM(uuid) NOT NULL
);
To make sure the database creation is always applied (for example, if the table is simply created through the command line, not all developers on the project can be certain they have the "latest" database), we can create a file with our database creation statements.
By creating this type of file in the correct place in the project, we can make certain that upon building the code, there is a check (and possible update) of the database to make sure all the required changes have taken place.
Creation of a flyway step can be done in the following way.
In the directory [dspace.src]/dspace/modules/additions/src/main/java/resources/org/dspace/storage/rdbms/sqlmigration we need to create (depending on what type of database we use) another directory with files. For common usage and exchangeability it is recommended that this file is also created for the other database types; this way the code remains usable in the same way with different backend databases.
Because of the database type, another directory (postgres) needs to be created so that Flyway knows which database it is dealing with and where to check. If you were to do this for another database, the directory name would differ (oracle, for example).
In this directory we will create the file that contains our previously mentioned SQL command to create the table (the file can contain as many SQL commands as you like, separated by semicolons).
This file will need to conform to Flyway's naming rules: a versioned migration file starts with a "V" prefix and a version, followed by a double underscore and a description, and ends in ".sql" (for example, V1.1__create_bookmark_table.sql).
During ant update or fresh_install, Flyway will check whether the database has to be created or updated and will do so accordingly; this way the correct database instance is always used.
A database object is an object that represents the database table as a Java class. The annotations in this class should all link to database columns of the table created above.
Below is the implementation of the current Bookmark class.
package org.dspace.content;

import org.dspace.eperson.EPerson;

import javax.persistence.*;
import java.util.Date;

@Entity
@Table(name = "bookmark")
public class Bookmark {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "bookmark_seq")
    @SequenceGenerator(name = "bookmark_seq", sequenceName = "bookmark_seq", allocationSize = 1)
    @Column(name = "bookmark_id", unique = true, nullable = false, insertable = true, updatable = false)
    private int id;

    @Column(name = "title")
    private String title;

    @Column(name = "date_created")
    @Temporal(TemporalType.DATE)
    private Date dateCreated;

    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "creator")
    private EPerson creator;

    @OneToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "item")
    private Item item;

    protected Bookmark() {
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public Date getDateCreated() {
        return dateCreated;
    }

    public void setDateCreated(Date dateCreated) {
        this.dateCreated = dateCreated;
    }

    public EPerson getCreator() {
        return creator;
    }

    public void setCreator(EPerson creator) {
        this.creator = creator;
    }

    public Item getItem() {
        return item;
    }

    public void setItem(Item item) {
        this.item = item;
    }
}
As we see here, apart from the Hibernate annotations, the class consists entirely of a constructor (which is package protected because only the service implementation should be allowed to create this object) and the various setters and getters.
If we take the annotations into account, we can see the database table w= e created in the previous step.
@Entity
@Table(name = "bookmark")
The @Entity annotation marks this class as an entity bean, so it must have a no-argument constructor that is visible with at least protected scope. The @Table annotation maps this class to the table with the name "bookmark".
The id and its annotations can be a bit harder to understand, as there are some more configuration options available for it:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "bookmark_seq")
@SequenceGenerator(name = "bookmark_seq", sequenceName = "bookmark_seq", allocationSize = 1)
@Column(name = "bookmark_id", unique = true, nullable = false, insertable = true, updatable = false)
private int id;
@SequenceGenerator further specifies the generator (here a database sequence will be used to generate the id).
The last, and probably most important part of the hibernate annotations = are the following
@ManyToOne(fetch = FetchType.EAGER)
@OneToOne(fetch = FetchType.EAGER)
These annotations describe the cardinality of the relation behind the column: a single field can be referenced multiple times, or perhaps only once, etc. Depending on the requirements, different relation types are advised.
For more information on these annotations check the Hibernate & the javax.persistence documentation.
The service part of the structure acts as a gateway for all business logic actions that can be performed on this object. All functionality should go through here (such as create, read, update, delete and other relevant methods).
One of the key features of the service API is the usage of interfaces and their implementations, making it possible to simply add another implementation without having to alter the original one. The only thing to note here is that the BookmarkService is in the service package, while the BookmarkServiceImpl as well as the Bookmark class itself are in the content package.
We start off by creating an interface which will receive its implementation later on in this document.
package org.dspace.content.bookmark.service;

import org.dspace.content.bookmark.Bookmark;
import org.dspace.content.Item;
import org.dspace.core.Context;
import org.dspace.eperson.EPerson;

import java.sql.SQLException;
import java.util.List;

public interface BookmarkService {

    public Bookmark create(Context context) throws SQLException;

    public Bookmark read(Context context, int id) throws SQLException;

    public void update(Context context, Bookmark bookmark) throws SQLException;

    public void delete(Context context, Bookmark bookmark) throws SQLException;

    public List<Bookmark> findAll(Context context) throws SQLException;

    public List<Bookmark> findByEperson(Context context, EPerson ePerson) throws SQLException;

    public List<Bookmark> findByItem(Context context, Item item) throws SQLException;
}
Since the service interface implementation cannot be responsible for the database queries themselves, we need to create an additional interface that will be a gateway to our database access. An interface for such a DAO could look something like this:
package org.dspace.content.dao;

import org.dspace.content.Bookmark;
import org.dspace.content.Item;
import org.dspace.core.Context;
import org.dspace.core.GenericDAO;
import org.dspace.eperson.EPerson;

import java.sql.SQLException;
import java.util.List;

public interface BookmarkDAO extends GenericDAO<Bookmark> {

    public List<Bookmark> findBookmarksByEPerson(Context context, EPerson ep) throws SQLException;

    public List<Bookmark> findBookmarksByItem(Context context, Item item) throws SQLException;
}
Notice that this interface extends GenericDAO with Bookmark as its type parameter. This GenericDAO already has most of the standard functionality a class might need for database access, such as create, save, delete, findByID, findAll, etc. This allows us to use these implementations and focus on our additional methods. The service doesn't need to use all the actions of the GenericDAO, but it ensures that this DAO doesn't need to declare them.
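The division of labour can be sketched with an in-memory stand-in (illustrative names only; the real GenericDAO and AbstractHibernateDAO differ): the generic base supplies the standard CRUD operations, so the concrete DAO only declares its extra finders:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of the GenericDAO idea: the generic base supplies standard
// CRUD operations, so a concrete DAO only adds its specific queries.
public class GenericDaoDemo {

    interface GenericDAO<T> {
        T create(T entity);
        void delete(T entity);
        List<T> findAll();
    }

    // In-memory stand-in for a Hibernate-backed abstract DAO
    static abstract class InMemoryDAO<T> implements GenericDAO<T> {
        protected final List<T> rows = new ArrayList<>();

        public T create(T entity) { rows.add(entity); return entity; }
        public void delete(T entity) { rows.remove(entity); }
        public List<T> findAll() { return new ArrayList<>(rows); }

        // shared helper, analogous to the query helpers mentioned above
        protected List<T> where(Predicate<T> p) {
            List<T> result = new ArrayList<>();
            for (T row : rows) if (p.test(row)) result.add(row);
            return result;
        }
    }

    static class Bookmark {
        final String creator;
        final String item;
        Bookmark(String creator, String item) { this.creator = creator; this.item = item; }
    }

    // The concrete DAO inherits CRUD and only declares its extra finder
    static class BookmarkDAO extends InMemoryDAO<Bookmark> {
        List<Bookmark> findByCreator(String creator) {
            return where(b -> b.creator.equals(creator));
        }
    }

    public static void main(String[] args) {
        BookmarkDAO dao = new BookmarkDAO();
        dao.create(new Bookmark("alice", "item-1"));
        dao.create(new Bookmark("bob", "item-2"));
        System.out.println(dao.findByCreator("alice").size());
        System.out.println(dao.findAll().size());
    }
}
```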
Factories ensure that we don't have to keep track of our service names and remove the need to always use the new DSpace().getServiceManager().getServiceByName() method. Creating a new factory is done in 3 steps, all of which are briefly explained below.
package org.dspace.content.bookmark.factory;

import org.dspace.content.bookmark.service.BookmarkService;
import org.dspace.utils.DSpace;

/**
 * Abstract factory to get services for the content.bookmark package,
 * use BookmarkServiceFactory.getInstance() to retrieve an implementation.
 */
public abstract class BookmarkServiceFactory {

    public abstract BookmarkService getBookmarkService();

    public static BookmarkServiceFactory getInstance() {
        return new DSpace().getServiceManager()
                .getServiceByName("bookmarkServiceFactory", BookmarkServiceFactory.class);
    }
}
A service factory only has one mandatory method, getInstance(), which returns an instantiated factory that can then be used. The other methods in the factory should be abstract; our factory implementation will provide the necessary details.
package org.dspace.content.bookmark.factory;

import org.dspace.content.bookmark.service.BookmarkService;
import org.springframework.beans.factory.annotation.Autowired;

public class BookmarkServiceFactoryImpl extends BookmarkServiceFactory {

    @Autowired(required = true)
    private BookmarkService bookmarkService;

    @Override
    public BookmarkService getBookmarkService() {
        return bookmarkService;
    }
}
A factory implementation comes down to autowiring the service & implementing the getter. Should additional business logic be required to determine which service should be used when, this class is the place to do it.
In step 1 we used getInstance() to retrieve a service factory; in order to use it we still need to configure it. It is recommended to place the factory configuration in the [dspace.dir]/config/spring/api/core-factory-services.xml file. This would result in the following configuration:
<bean id="bookmarkServiceFactory" class="org.dspace.content.bookmark.factory.BookmarkServiceFactoryImpl"/>
Once this is done the following code can be used to retrieve an instance of our service:
BookmarkServiceFactory.getInstance().getBookmarkService();
The service implementation lets us fill in the required behavior and business logic we want to have. Below is an example implementation of our BookmarkService.
package org.dspace.content.bookmark;

import org.dspace.content.Item;
import org.dspace.content.bookmark.dao.BookmarkDAO;
import org.dspace.content.bookmark.service.BookmarkService;
import org.dspace.core.Context;
import org.dspace.eperson.EPerson;
import org.springframework.beans.factory.annotation.Autowired;

import java.sql.SQLException;
import java.util.List;

public class BookmarkServiceImpl implements BookmarkService {

    @Autowired(required = true)
    protected BookmarkDAO bookmarkDAO;

    protected BookmarkServiceImpl() {
    }

    @Override
    public Bookmark create(Context context) throws SQLException {
        return bookmarkDAO.create(context, new Bookmark());
    }

    @Override
    public Bookmark read(Context context, int id) throws SQLException {
        return bookmarkDAO.findByID(context, Bookmark.class, id);
    }

    @Override
    public void update(Context context, Bookmark bookmark) throws SQLException {
        if (context.getCurrentUser().equals(bookmark.getCreator())) {
            bookmarkDAO.save(context, bookmark);
        }
    }

    @Override
    public void delete(Context context, Bookmark bookmark) throws SQLException {
        if (context.getCurrentUser().equals(bookmark.getCreator())) {
            bookmarkDAO.delete(context, bookmark);
        }
    }

    @Override
    public List<Bookmark> findAll(Context context) throws SQLException {
        return bookmarkDAO.findAll(context, Bookmark.class);
    }

    @Override
    public List<Bookmark> findByEperson(Context context, EPerson ePerson) throws SQLException {
        return bookmarkDAO.findBookmarksByEPerson(context, ePerson);
    }

    @Override
    public List<Bookmark> findByItem(Context context, Item item) throws SQLException {
        return bookmarkDAO.findBookmarksByItem(context, item);
    }
}
As seen in the example above, the implementation of the BookmarkService simply delegates the functionality "deeper" in the structure. In this particular class, we can see that the BookmarkDAO class is used to create, read, update, delete, etc. the Bookmark object. Also note that this is the ONLY place where a new Bookmark object will be created, since the Bookmark class constructor is package protected and in the same package as this class.
One thing that has to be extremely clear is that the DAO is ONLY the link between the database and the code; NO business logic should be present there. So, for example, if only the creator of a bookmark is allowed to delete/update it, this should be handled in the service, as it is possible that another service doesn't require this restriction.
<bean class="org.dspace.content.bookmark.BookmarkServiceImpl"/>
The DAO implementation lets us fill in the database queries that we require. Below is an example implementation of our BookmarkDAO.
package org.dspace.content.bookmark.dao.impl;

import org.dspace.content.bookmark.Bookmark;
import org.dspace.content.Item;
import org.dspace.content.bookmark.dao.BookmarkDAO;
import org.dspace.core.AbstractHibernateDAO;
import org.dspace.core.Context;
import org.dspace.eperson.EPerson;
import org.hibernate.Criteria;
import org.hibernate.Query;
import org.hibernate.criterion.Restrictions;

import java.sql.SQLException;
import java.util.List;

public class BookmarkDAOImpl extends AbstractHibernateDAO<Bookmark> implements BookmarkDAO {

    @Override
    public List<Bookmark> findBookmarksByEPerson(Context context, EPerson ep) throws SQLException {
        Criteria criteria = createCriteria(context, Bookmark.class);
        criteria.add(Restrictions.and(Restrictions.eq("creator", ep)));
        return list(criteria);
    }

    @Override
    public List<Bookmark> findBookmarksByItem(Context context, Item item) throws SQLException {
        Query query = createQuery(context, "from Bookmark where item = :item order by date_created");
        query.setParameter("item", item);
        return list(query);
    }
}
In the example above 2 different types of queries are used: the Criteria query & the HQL query. This documentation doesn't go into the details of how these work; for more information check the Hibernate docs. What differs from the Hibernate defaults are the list, createQuery & createCriteria methods; these are helper methods provided by the AbstractHibernateDAO class. They are present to prevent code duplication, so check out that class for a list of all available helper methods.
<bean class="org.dspace.content.bookmark.dao.impl.BookmarkDAOImpl"/>
Imagine the possibility to create bookmarks based on a person's email, item handles and given titles. Below is an example class that can be run from the command line and will create & display bookmarks using the service we created above.
package org.dspace.util;

import org.apache.commons.cli.*;
import org.dspace.content.Bookmark;
import org.dspace.content.Item;
import org.dspace.content.factory.ContentServiceFactory;
import org.dspace.content.service.BookmarkService;
import org.dspace.content.service.ItemService;
import org.dspace.core.Context;
import org.dspace.eperson.EPerson;
import org.dspace.eperson.factory.EPersonServiceFactory;
import org.dspace.eperson.service.EPersonService;
import org.dspace.handle.factory.HandleServiceFactory;
import org.dspace.handle.service.HandleService;

import java.sql.SQLException;
import java.util.*;

public class BookmarkUpdater {

    protected BookmarkService bookmarkService;
    protected EPersonService ePersonService;
    protected ItemService itemService;
    protected HandleService handleService;

    protected BookmarkUpdater() {
        bookmarkService = ContentServiceFactory.getInstance().getBookmarkService();
        ePersonService = EPersonServiceFactory.getInstance().getEPersonService();
        itemService = ContentServiceFactory.getInstance().getItemService();
        handleService = HandleServiceFactory.getInstance().getHandleService();
    }

    public void doUpdate(String epersonMail, Map<String, String> handlesAndTitles) throws SQLException {
        Context ctx = new Context();
        EPerson ePerson = ePersonService.findByEmail(ctx, epersonMail);
        for (String key : handlesAndTitles.keySet()) {
            // Don't add invalid objects
            if (handleService.resolveToObject(ctx, key) != null) {
                createBookmark(ctx, ePerson, key, handlesAndTitles.get(key));
            }
        }
        ctx.complete();
    }

    private void createBookmark(Context ctx, EPerson ePerson, String handle, String title) throws SQLException {
        Bookmark bookmark = bookmarkService.create(ctx);
        bookmark.setDateCreated(new Date());
        bookmark.setCreator(ePerson);
        bookmark.setItem((Item) handleService.resolveToObject(ctx, handle));
        bookmark.setTitle(title);
    }

    public static void main(String... args) throws ParseException {
        BookmarkUpdater bmu = new BookmarkUpdater();
        CommandLineParser parser = new PosixParser();
        Map<String, String> handlesAndTitles = new HashMap<>();
        Options options = createOptions();
        CommandLine line = parser.parse(options, args);
        printOptionsHelp(options, line);
        try {
            if (line.hasOption("f")) {
                if (line.hasOption("i")) {
                    bmu.printBookmarksBasedOnItem(line.getOptionValue("i"));
                }
            }
            String epersonMail = line.getOptionValue("e");
            addHandlesAndTitles(handlesAndTitles, line);
            bmu.doUpdate(epersonMail, handlesAndTitles);
            if (line.hasOption("p")) {
                bmu.printEpersonsBookmarks(epersonMail);
            }
        } catch (SQLException sqle) {
            System.err.println(sqle.getLocalizedMessage());
            sqle.printStackTrace();
        }
    }

    private static void printOptionsHelp(Options options, CommandLine line) {
        if (line.hasOption('h')) {
            HelpFormatter myhelp = new HelpFormatter();
            myhelp.printHelp("Usages : \n", options);
            System.out.println("\nAdd a single bookmark based on eperson, handle and possibly a title: org.dspace.util.BookmarkUpdater -e atmirenv@gmail.com -h 123456789/4 -t 'A title'");
            System.out.println("\nAdd multiple bookmarks to a provided eperson: org.dspace.util.BookmarkUpdater -e atmirenv@gmail.com -m");
            System.out.println("\nAdding the -p option will show all the bookmarks currently associated with the given eperson");
            System.out.println("\nIf the f option has been provided (as well as a handle (i)), all bookmarks with this given item will be shown");
            System.out.println("\nIf no options are provided (apart from the required eperson), a fallback to the addition of multiple bookmarks will be used");
            System.exit(0);
        }
    }

    private static void addHandlesAndTitles(Map<String, String> handlesAndTitles, CommandLine line) {
        String title;
        // If the "multiple" option has been enabled, keep asking the user for input until they type stop
        // When no handle is supplied, default to this behaviour as well
        if (line.hasOption("m") || !line.hasOption("i")) {
            System.out.println("Enter valid handles (invalid handles will be skipped)\nYou can stop adding bookmarks by typing stop");
            Scanner scanner = new Scanner(System.in);
            String handle = scanner.nextLine();
            while (!handle.equals("stop")) {
                if (handlesAndTitles.containsKey(handle)) {
                    System.out.println("This item has already been bookmarked by this user");
                } else {
                    Scanner titleScanner = new Scanner(System.in);
                    System.out.println("Enter a title for this bookmark");
                    title = titleScanner.nextLine();
                    handlesAndTitles.put(handle, title);
                }
                System.out.println("Enter another handle");
                handle = scanner.nextLine();
            }
        } else {
            // The user has provided a single handle to bookmark
            if (line.hasOption("i")) {
                if (line.hasOption("t")) {
                    title = line.getOptionValue("t");
                } else {
                    title = "No title provided for the bookmark";
                }
                handlesAndTitles.put(line.getOptionValue("i"), title);
            }
        }
    }

    private static Options createOptions() {
        Options options = new Options();
        Option epers = new Option("e", "eperson", true, "The eperson's email address");
        epers.setRequired(true);
        options.addOption(epers);
        options.addOption("i", "itemhandle", true, "The handle of an item to add");
        options.addOption("m", "multiple", false, "Create multiple bookmarks");
        options.addOption("t", "title", true, "Enter a title");
        options.addOption("h", "help", false, "help");
        options.addOption("p", "print", false, "Print the bookmarks currently connected to a given eperson");
        options.addOption("f", "findbyitem", false, "Print the bookmarks currently connected to a given itemhandle");
        return options;
    }

    public void printEpersonsBookmarks(String epersonMail) throws SQLException {
        Context ctx = new Context();
        EPerson ePerson = ePersonService.findByEmail(ctx, epersonMail);
        List<Bookmark> bookMarksByEperson = bookmarkService.findByEperson(ctx, ePerson);
        printBookmarks(bookMarksByEperson);
        ctx.complete();
    }

    public void printBookmarksBasedOnItem(String handle) throws SQLException {
        Context ctx
=3D new Context(); Item item =3D (Item) handleService.resolveToObject(ctx, handle); if (item !=3D null) { List<Bookmark> bookMarksByEperson =3D bookmarkService.fin= dByItem(ctx, item); printBookmarks(bookMarksByEperson); } ctx.complete(); } private void printBookmarks(List<Bookmark> bookMarksByEperson) { for (Bookmark b : bookMarksByEperson) { System.out.println("Generated UUI :" + b.getId()); System.out.println("Title :" + b.getTitle()); System.out.println("Date of creation :" + b.getDateCreated()); System.out.println("Creator : " + b.getCreator().getFullName())= ; System.out.println("Item : " + ((b.getItem() !=3D null) ? b.get= Item().getName() : "No item provided for this bookmark")); } } }
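Once the class above is compiled into the dspace-api module, it is typically launched through the DSpace command-line launcher's `dsrun` command. The invocations below are a sketch: the email address and handle are placeholder values, and `[dspace]` stands for your DSpace installation directory.

```shell
# Add a single bookmark for an eperson, with an explicit title
[dspace]/bin/dspace dsrun org.dspace.util.BookmarkUpdater \
    -e someone@example.com -i 123456789/4 -t 'A title'

# Interactively add multiple bookmarks, then print everything
# currently bookmarked by that eperson
[dspace]/bin/dspace dsrun org.dspace.util.BookmarkUpdater \
    -e someone@example.com -m -p
```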