
The Fedora HTTP API is partitioned into a core and optional modules. Optional modules are grouped in logical packages by use.

The core module comprises LDP with the Fedora 4 core ontology.

  • LDP defines the HTTP behavior of RDF resources and non-RDF resources; the syntax of the core API.
  • The ontology gives the meaning of the RDF that may be transacted via LDP; the semantics of the core API.
    • Other ontologies may be in play in a given repository, but that is module- or instance-specific behavior, not part of the Fedora core API specification.
  • The same division between syntax and semantics will be observed throughout the API module specifications, not just in the core.
  • It is an open question whether the API for non-RDF resources defined by LDP is sufficient to specify the behavior of a Fedora repository, or whether we will need to provide additional specification that is compatible with LDP but extends it. Currently, we do extend the non-RDF resource behavior of LDP in ways described below.
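
As a sketch of what the LDP-defined syntax looks like in practice, the following Java snippet builds (but does not send) the two basic create requests: one for an RDF source and one for a non-RDF (binary) resource. The base URL and request bodies are illustrative assumptions, not anything prescribed by the specification.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class LdpRequests {

    // Hypothetical base URL; a real repository exposes its own REST endpoint.
    static final String BASE = "http://localhost:8080/rest/";

    // Creating an RDF resource: POST a Turtle body to an LDP container.
    static HttpRequest createRdfResource() {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE))
                .header("Content-Type", "text/turtle")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "<> <http://purl.org/dc/terms/title> \"An example object\" ."))
                .build();
    }

    // Creating a non-RDF resource: POST raw bytes under their own media type;
    // LDP treats this as an LDP-NR (non-RDF source) rather than an RDF source.
    static HttpRequest createBinaryResource() {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE))
                .header("Content-Type", "image/tiff")
                .POST(HttpRequest.BodyPublishers.ofByteArray(
                        new byte[] {0x49, 0x49, 0x2A, 0x00}))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest rdf = createRdfResource();
        System.out.println(rdf.method() + " " + rdf.uri());
    }
}
```

The point of the sketch is that the only difference between the two creates, at the syntax level, is the media type of the body; the semantics of what those bodies mean come from the ontology, not from LDP.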

Optional suites might include:

Some optional suites will feature their own ontologies, which will describe the RDF that they make available to transact over LDP as extensions to the upper ontology. Some optional suites may also define an accompanying Java SPI that will define types and semantics for a pluggable implementation of that suite's functionality. For example, the Backup/Restore API should be accompanied by an SPI that includes the types defining backup/export formats, along with extension mechanisms for adding new ones.
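
As an illustration of what such an SPI might look like, here is a minimal Java sketch. Every name in it (BackupProvider, formatMediaType, export, ZipBackupProvider) is hypothetical, assumed for illustration only; the actual types would be defined by the suite's specification.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.ServiceLoader;

// Hypothetical SPI: a provider contributes one backup/export format.
public interface BackupProvider {

    /** Media type of the export format this provider produces. */
    String formatMediaType();

    /** Serialize the subtree rooted at the given repository path. */
    void export(String repositoryPath, OutputStream out) throws IOException;

    /** Implementations would be discovered at runtime via ServiceLoader. */
    static ServiceLoader<BackupProvider> providers() {
        return ServiceLoader.load(BackupProvider.class);
    }
}

/** A trivial provider showing how one export format would plug in. */
final class ZipBackupProvider implements BackupProvider {
    @Override
    public String formatMediaType() {
        return "application/zip";
    }

    @Override
    public void export(String repositoryPath, OutputStream out) throws IOException {
        // A real implementation would walk the repository tree and
        // write an archive to the stream; elided here.
    }
}
```

The ServiceLoader pattern is the conventional Java way to make such a suite pluggable: an implementation jar declares its providers, and the repository discovers them without compile-time coupling.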

An obvious pair of questions: should any of these APIs be folded into the baseline? Are there others not yet listed here?


5 Comments

  1. I really don't think we need all of these... some are not orthogonal, and others would be implementation-dependent or very complex to standardise.

    Transactions and locking could probably be combined.

    Sitemap is just an indexing variant - but what is exposed could depend strongly on possible access controls.

    Identifier minting, fixity generation and versioning are all CU-triggered object updates. These can be interdependent - some policies may specify a per-version identifier (for certain definitions of version). Fixity may or may not checksum all datastreams, and may include digital signing.

    Backup and fixity checking are just iterated functions over defined sets.

    1. Neil Jefferies: I'm not sure you understand the intention. There is no desire to combine any two APIs that can be separated here-- we're going for maximum independence, with the expectation that most people will not implement most of these APIs. So yes, transactions and locking could be combined, but that would miss the point. Additionally, I'm trying to partition the API as it exists, not as we might like it to evolve. It's not clear to me that we have the time to wait to do that.

      Sitemap is definitely not an indexing variation. There is a significant amount of code in the current HTTP API for just this purpose. If you want to argue that it shouldn't be there, that's fine, but that's a different argument. I would be happy to see it go, but it is there now.

      I'm not at all clear how you get to the idea that identifier minting, fixity management and versioning are merely updates. To my mind they are clearly not. They each possess distinct APIs now. For example: you can retrieve a specific version of a resource. You can mint an identifier without using it to create an object. How are these things based on triggered updates?

      Backup is definitely not currently a function iterated over defined sets (not to mention that we have no way to define sets). Again, if you want to argue that it should be, that's something else. If you'd like to change the API before partitioning, that's fine by me, but that's a topic to bring up at the next TWG call, I think.


      1. Fair enough - it does make sense to work with the APIs as they are now. I don't want to change them at this stage! I do see a risk with too many independent APIs in that we end up with a multiplicity of different implementation sets, so maybe we group them (slightly artificially) to limit this.

        From my point of view - minting an identifier without relating it to a Fedora object isn't a Fedora function, and I'm not sure that there is a generic way of invoking an identifier service. A unique identifier may be derived from the object contents in some cases, much as a fixity check does - ModeShape blurs this distinction itself.

        If backup can't operate over subsets then we do have a potential scalability limitation.

        1. Good points.

          • API groups make a lot of sense. I'll get a strawman up before the next TWG meeting.
          • I think you're making an argument that ID minting itself really isn't part of the repository API except "wrapped" into other API actions, but maybe I'm misunderstanding you. Would you be willing to bring this up at the next TWG meeting? I think it's a good point to discuss.
          • I didn't mean that backup can't operate over subsets (I assume it can, for the same reason you did). I wrote unclearly-- my point was that it does not (to my knowledge) operate on defined sets (your phrase). In other words, there is no first-class notion of a set in Fedora 4 right now. At most, there is the hierarchy.


            • Grouping makes sense from the testing point of view as well as it gives fewer points to get good coverage.
            • Yes, that's my point - can do.
            • I don't think this is insurmountable, and thus not a worry right now.