OSF Revisioning Design

Introduction
A record revisioning system for OSF is a set of new mechanisms that will enable users and developers to create revisions (versions) of individual OSF records. All versions of a record's description will be saved in a separate revisions dataset. Users will be able to inspect older revisions of a particular record, list all revisions of a record, check the differences between two revisions, and review future revisions that are awaiting publication. This document outlines the design of the OSF Revisioning system.

Required Behaviors
A record revisioning system should at least meet these requirements by being able to:
 * Get a complete list of all revisions for a record
 * Specify whether or not we want to create a new revision for a record that is being updated (this is only possible if the record being updated is already published)
 * Delete all revisions for a record
 * Delete a single revision for a record
 * Revert the record currently published in the dataset to a previous revision
 * Update a specific revision status of the record
 * 'Diff' two revisions of a record
 * Create unpublished revisions that wait to be moderated. These unpublished revisions are more recent than the current version of a record available in the dataset, and will eventually replace it after moderation.
 * Revision the reified statements of records.

Revisioning Method
The revisioning method designed in this document saves the complete description of a record every time a new revision is created. This means that every time a new revision is created, all of the record's triples, including reified triples, are saved in the revision. An alternative method would be to save only the triples that changed (that is, the difference between two revisions of the record's description), using an approach such as the one based on the ChangeSet vocabulary.

Now, let's outline the advantages and disadvantages of the revisioning method outlined in this document.

Advantages

 * 1) Less space consumed for smaller records with a lot of changes per revision
 * 2) Reverting to a previous revision is fast since the complete state of a record exists in its revision record; a single read query is required
 * 3) Comparing two non-consecutive revisions is faster than with the ChangeSet method, since comparing two non-consecutive revisions takes the same time as comparing two consecutive ones.
 * 4) Can easily revision reification statements.

Disadvantages

 * 1) More space consumed for big records with a small number of changes per revision
 * 2) Comparing two consecutive revisions needs to be done at runtime with the RDF Diff API.

Revisions Scenarios & Structures
In this section we outline different revisioning scenarios that can happen, and for each of these scenarios, we show what the revision structure looks like.

Basic Revision
This is the most basic scenario of the revisioning system. We have three revisions for a single record. The last revision is the one that is published on the different portals.

Revisioning adding a new unpublished revision
This second revisioning scenario is one where the revision that is currently published on the portals is not the last revision of the record. This scenario means that there exists a more recent revision for this record that is not yet published on the portals. It is probably waiting for approval in a governance workflow.

Revisioning reverting to a previous revision
This other revisioning scenario shows what happens when a user chooses to re-publish an older version of a record. This means that the  revision still exists in the revisioning system, but that it is not published on the portals anymore. It is the  revision that is now exposed on the portals.

Revisioning deleting an existing revision
This scenario shows the impact of deleting a revision in the middle of a sequence of revisions. If the revision to be deleted were the currently published revision, an error would be returned telling the requester that another revision has to be published before that one can be deleted.
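The guard described above can be sketched as follows. This is a minimal illustration, not the OSF implementation: the function name, the record structure, and the status labels are assumptions.

```python
# Hypothetical sketch of the delete-revision guard described above.
# The revision structure and status names are assumptions for illustration.

def delete_revision(revisions, revision_uri):
    """Delete a revision unless it is the currently published one."""
    revision = revisions[revision_uri]
    if revision["status"] == "published":
        # Only unpublished revisions may be deleted: the requester must
        # first publish another revision of the record.
        raise ValueError(
            "Cannot delete the published revision; publish another "
            "revision of this record first.")
    del revisions[revision_uri]

revisions = {
    "rev-1": {"status": "archive"},
    "rev-2": {"status": "published"},
    "rev-3": {"status": "unreviewed"},
}
delete_revision(revisions, "rev-3")  # succeeds: rev-3 is not published
```

Attempting to delete "rev-2" in this sketch would raise the error, mirroring the behavior described above.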

Revisioning Graph
Every time a new dataset is created in OSF, a new "revisions" dataset is created at the same time. This dataset is where all the revisions of the records will be saved. The rules for creating the revisions datasets, and for creating the revision records, are simple:
 * For each dataset in OSF Web Services we have a dataset where all the records' revisions are instantiated
 * The convention for creating a revision dataset URI is to add  at the end of the dataset's URI.
 * Ex:  will have a revision dataset with this URI
 * The convention for creating a revision's URI is to append the MD5 value of the concatenation of the  +   +   to the URI of the revisions dataset
 * Ex:
Here is an example of what these two datasets look like, and what the relations between the two are. What this schema shows is that the record currently available in the dataset is the  revision. As we saw above, the published record (the one available in the dataset) is not necessarily the last revision. If the  revision is eventually published, then the current record in the dataset will be deleted and replaced by the  record, and the  pointer will then target the  revision record.
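The URI convention above can be sketched as follows. The exact components of the concatenation and the example URIs are illustrative assumptions (here: the record URI, the dataset URI, and the revision time stamp), not the definitive OSF convention.

```python
import hashlib

def revision_uri(revisions_dataset_uri, record_uri, dataset_uri, timestamp):
    """Build a revision URI by appending the MD5 of the concatenated
    components to the revisions dataset URI.

    The three concatenated components are assumptions for illustration.
    """
    digest = hashlib.md5(
        (record_uri + dataset_uri + str(timestamp)).encode("utf-8")
    ).hexdigest()
    return revisions_dataset_uri + digest

# Hypothetical dataset and record URIs:
uri = revision_uri(
    "http://example.org/datasets/people/revisions/",
    "http://example.org/datasets/people/bob",
    "http://example.org/datasets/people/",
    1353710000.123456,
)
```

Because the time stamp participates in the hash, each new revision of the same record gets a distinct URI inside the revisions dataset.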

Revisioning Vocabulary
These additions require adding new vocabulary to the WSF Ontology (Web Service Framework Ontology), the ontology for describing instances of the OSF Web services framework. This new revisioning vocabulary is:
 * Classes
 * Properties
 * This is the URI of the record being revisioned
 * This is the URI of the dataset where this record is published
 * This is the Unix time stamp (which includes microseconds) at which the revision was created
 * As shown in the revision structures above, the revisions are ordered in a linear time series. This means that the sequence of revisions is determined by the time at which they were created, and the sequence can be re-created by ordering them by this time stamp
 * The value of this property is filled at the creation time of the revision.
 * Refers to the user that made the change
 * Specifies the current status of the revision
 * Named Individuals
 * Of type
 * Specifies that the revision is the one, within the revision sequence, that is currently published in the dataset
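A minimal sketch of how the microsecond-precision Unix time stamps described above order a record's revisions. The field names and status labels are assumptions standing in for the actual vocabulary terms.

```python
import time

def new_revision(record_uri, status):
    """Create a revision stamped with a microsecond-precision Unix time.

    The "revision_time" field plays the role of the time stamp property
    described above; the names are assumptions for illustration.
    """
    return {
        "record": record_uri,
        "status": status,
        "revision_time": time.time(),
    }

revisions = [
    new_revision("http://example.org/people/bob", "archive"),
    new_revision("http://example.org/people/bob", "published"),
    new_revision("http://example.org/people/bob", "unreviewed"),
]

# The linear revision sequence is re-created by sorting on the time stamp.
sequence = sorted(revisions, key=lambda r: r["revision_time"])
```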

Revision Example
Here is an example of a published record. Below you have one of the revisions that exist for that same record. This shows how the revisioning vocabulary is used to describe revisions saved into the revisions graph.

As you can see below with the revision record:
 * All of the triples of the published record are part of the revision record's description
 * This enables us to analyze all the revision records using SPARQL queries
 * The URI of the revision record is different
 * All the additional triples required by the revisioning system are constrained to the  ontology namespace
 * This means that if we want to recreate the initial state of the record that led to a particular revision, we can easily do so by:
 * Replacing the URI of the revision with the URI value of the  property
 * Removing all the revisioning properties and the  class assertion
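The two reconstruction steps above can be sketched as follows, using plain tuples for triples. The `rev:` predicate and class names stand in for the actual revisioning vocabulary terms; they are assumptions for illustration.

```python
# A revision record as a set of (subject, predicate, object) triples.
# The "rev:" names stand in for the revisioning vocabulary (assumptions).
revision = {
    ("rev:abc123", "rdf:type", "rev:Revision"),
    ("rev:abc123", "rev:recordUri", "ex:bob"),
    ("rev:abc123", "rev:status", "rev:published"),
    ("rev:abc123", "foaf:name", "Bob"),
}

def reconstruct(revision, record_uri_prop="rev:recordUri", revision_ns="rev:"):
    """Recreate the initial state of the record from one of its revisions."""
    # Step 1: find the original record URI stored on the revision.
    record_uri = next(o for (_, p, o) in revision if p == record_uri_prop)
    # Step 2: drop the revisioning properties and the revision class
    # assertion, and rewrite the subject back to the record's own URI.
    return {
        (record_uri, p, o)
        for (_, p, o) in revision
        if not p.startswith(revision_ns) and o != "rev:Revision"
    }
```

Applied to the revision above, only the record's own triples survive, re-attached to the original record URI.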

Diff Algorithm
One of the requirements is to be able to differentiate two given revisions of the same record. This functionality will be exposed as a new Web service, as outlined below. This Web service will compare two revisions of the same record, and will outline all the changes between the two revisions as a ChangeSet.

The basic RDF Diff algorithm that will be implemented is:
 * Get as input the two complete descriptions of the same record, for two different revisions: an older one and a newer one
 * Parse both descriptions into two sets of triples
 * Iterate over the triples of the older revision
 * For each of these triples, look for an identical triple in the newer revision's set. If no match is found, reify that triple as an rdf:Statement, and add that rdf:Statement to the ChangeSet as a removal.
 * Iterate over the triples of the newer revision
 * For each of these triples, look for an identical triple in the older revision's set. If no match is found, reify that triple as an rdf:Statement, and add that rdf:Statement to the ChangeSet as an addition.
Then the new Web service endpoint will return that ChangeSet in its resultset.
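The algorithm above amounts to two set differences. A minimal sketch, using plain tuples for triples and skipping the reification step; the `removals`/`additions` naming follows the usual ChangeSet semantics and is an assumption here.

```python
def rdf_diff(older, newer):
    """Compare two revisions of the same record, each given as a set of
    (subject, predicate, object) triples, and return the changes."""
    return {
        # Triples present only in the older revision were removed...
        "removals": older - newer,
        # ...and triples present only in the newer one were added.
        "additions": newer - older,
    }

# Two hypothetical revisions of the same record:
older = {("ex:bob", "foaf:name", "Bob"),
         ("ex:bob", "foaf:age", "42")}
newer = {("ex:bob", "foaf:name", "Bob"),
         ("ex:bob", "foaf:age", "43")}
changes = rdf_diff(older, newer)
```

Since Python sets test tuple equality element-wise, "look for an identical triple in the other set" reduces to set subtraction; in the full algorithm, each resulting triple would then be reified as an rdf:Statement before being placed into the ChangeSet.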

Revisioning and CRUD Web Service Endpoints Overview
This is a summary and overview of the different Revisioning and existing CRUD Web service endpoints, outlining the roles and goals of each endpoint in this new revisioning and publication environment:
 * It is used to create the first version of a record. The first version of a record is indexed in the core dataset, and not (initially) into the revisions dataset.
 * It is used to reload the Solr index with the description of the published version of the record(s)
 * – Index in both the triple store (Virtuoso) and search index
 * – Index in the triple store (Virtuoso) only. This mode cannot be used if the record already exists.
 * – Re-index the records in the search index (Solr) using the triples currently indexed into the triple store. This mode can only be used on published records. The payload of this query can be composed of records that only have a single type triple since the other information won't be used by the endpoint to populate the search index.
 * Note about this mode: if a record gets unpublished, but a revision still exists for that record, then a " " error will be returned by the endpoint. The reason for this behavior is that only published records can be reloaded into Solr using this mode. If this were not the case, and we could reload the Solr index with unpublished records, then unpublished records could become visible via the  endpoint while remaining invisible to other endpoints such as  . This is why this mode can only be used on published records; otherwise, inconsistencies between published and unpublished records would arise.
 * It is used to delete a published record from the core dataset. The endpoint then exposes two options: delete all the revisions for that record at the same time, or delete only the published record in the core dataset while keeping the revisions in the revisions graph (this means that a record could be restored by marking one of its revisions as published, which would re-create it in the core dataset)
 * 
 * It is used to read the published revision of a record in the core dataset
 * It is used to update the published version of a record
 * It is also used to create new (unpublished) revisions of a record. These revisions would be potential future published revisions of the record
 * It is used to read (get all of the triples of) a specific revision record
 * It is used to delete a specific revision record
 * 
 * It is used to update the lifecycle stage of a revision
 * It is used to get the full listing of revisions for a given record
 * 
 * It is used to compare two revisions of the same record