If two synchronizing clients operate simultaneously, or begin recording data prior to their first synchronization, two divergent timelines exist.
While we might sometimes reconcile these two timelines by destructive rebasing (that is, by pretending that the operations on one timeline occurred after those on the other, and perhaps less granularly), doing so loses historical data (intermediate states are flattened), revises the supposedly immutable log, and either leaves confusing timestamps or breaks the rough correspondence between transaction IDs and `txInstant`.
The alternative is to somehow represent a merge… but to represent a merge we need a way to represent the two timelines that we are merging. Mentat is not currently capable of representing more than one timeline.
This document proposes one way in which it could. This would make Mentat a database that can natively represent, model, and accommodate divergence and conflict over time, particularly during synchronization.
## Labeling timelines
It is convenient for us to have a name for our primary timeline (the one that we publish and on which all synchronized clients agree) and for each secondary timeline (the ones built by each client between synchronization points, which will either be promoted to primary, rebased onto primary, or merged into primary at the next sync).
Let’s call the primary timeline α. We’ll use other Greek letters for subsequent divergent timelines, though we should only need β.
## Representing timelines
A Mentat datom is a five-tuple: a transaction ID, an operation (add or remove), and an entity, attribute, and value. These don’t directly offer a way to denote a timeline.
Let’s conceptually associate each transaction with a timeline identifier. A timeline identifier is simply another integer. We will use `0` for α. For every other timeline we will use the transaction ID of the merge. Between syncs we will use `0` for the in-progress timeline: in the routine fast-forward case this is simplest (no rewriting is necessary), and it also reflects the “temporary primality” of the timeline, as you’ll see later when we discuss querying and materialization.
We could do this by defining another table that associates `txid` and `timeline_id`. For convenience we will instead add an additional column to `transactions`. We don’t need to add a column to `datoms`, because non-primary datoms won’t be materialized.
We will now define `datoms` (which is a materialized view) as `rollup(filter(transactions, |tx| tx.timeline_id == 0))`.
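To make this concrete, here is a minimal SQLite sketch. The column names (`e`, `a`, `v`, `tx`, `added`) and the view-based materialization are simplifying assumptions for illustration, not Mentat’s actual schema:

```sql
-- Hypothetical simplified log; Mentat's real transactions table differs.
CREATE TABLE transactions (
    e           INTEGER NOT NULL,            -- entity
    a           INTEGER NOT NULL,            -- attribute
    v           BLOB    NOT NULL,            -- value
    tx          INTEGER NOT NULL,            -- transaction ID
    added       TINYINT NOT NULL,            -- 1 = add, 0 = remove
    timeline_id INTEGER NOT NULL DEFAULT 0   -- 0 = α, else the merge's tx ID
);

-- datoms as rollup(filter(transactions, |tx| tx.timeline_id == 0)):
-- a datom is live if its most recent primary-timeline operation was an add.
CREATE VIEW datoms AS
SELECT t.e, t.a, t.v, t.tx
FROM transactions t
WHERE t.timeline_id = 0
  AND t.added = 1
  AND NOT EXISTS (
    SELECT 1
    FROM transactions later
    WHERE later.timeline_id = 0
      AND later.e = t.e
      AND later.a = t.a
      AND later.v = t.v
      AND later.tx > t.tx
  );
```

In practice `datoms` would be maintained incrementally rather than recomputed through a view, but the `NOT EXISTS` clause captures what `rollup` means here: a datom survives only if no later primary-timeline operation supersedes it.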
A consumer that looks only at `timeline_id = 0` will see a straightforward, linear, published history that corresponds to the current state of the shared world.
An offline consumer (one that has never synced) will have a single primary timeline and no divergent timelines. Every datom will occur on α.
## Diverging and merging
When datoms are transacted, Mentat allocates a transaction ID. The new datoms are written to the log along with the `timeline_id` `0`.
Because we use `0` as the `timeline_id`, these divergent local datoms are (for now) primary: they’re visible in the log and in the `datoms` table. Mentat behaves as usual. We can call this the “local primary”.
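In terms of the sketch schema above, transacting between syncs simply appends rows labeled with timeline `0`; every value below is made up for illustration:

```sql
-- Hypothetical local transaction, written optimistically onto timeline 0.
INSERT INTO transactions (e, a, v, tx, added, timeline_id)
VALUES
    (65536, 39, 'Alice', 268435457, 1, 0),   -- add a datom
    (65536, 40, x'01',   268435457, 1, 0);   -- another datom in the same tx
```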
When a sync occurs and there are no remote changes (a “fast-forward merge”), we directly upload the local transactions. Our prediction of `0` as the `timeline_id` was correct.
(When there are only remote changes, we similarly fast-forward the local database.)
When a sync occurs in which both remote and local changes are seen, the following steps are taken (conceptually, if not necessarily literally; there are various shortcuts that will make common merges easier).
We will term the shared parent α, the new remote state α’, and the new local state β.
- The datoms of each divergent local transaction (`(α, β]`) are reverted in the `datoms` table.
- The remote changes (`(α, α’]`) are applied to the transaction log and `datoms`.
- The local transactions are not renumbered: it’s OK for them to share a transaction ID with transactions on a different timeline. However, we might choose to have them occupy a unique space of transaction IDs in order to make later querying more straightforward.
- The ‘merge commit’ (terminology that’s hard to resist!) is computed. This is the set of datoms that need to be transacted on top of α to reach the desired merged state `M(into: α’, from: β, against: α)`. (N.B., a merge necessarily operates on states but produces transactions.)
- The timeline ID of each local transaction (`(α, β]`) is rewritten to the transaction ID of the merge commit (see the sketch after this list).
- The merge commit is atomically transacted on α (with `timeline_id = 0`) and published. It can be explicitly annotated with source timelines, merge metadata, etc., if we wish, again to support querying and forensics. The same approach can be used to record automatically resolved conflicts, allowing forward progress without permanently losing conflicting data; this is the approach taken by CouchDB.
- Those local transactions are published. Other clients can apply them and have the same primary (‘history’) and secondary (‘alternative history’) states.
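Continuing the hedged sketch: the re-labeling and publication steps might look like the following, where `:merge_tx`, `:alpha_tx`, and the `merge_commit_datoms` staging table are illustrative inventions, and we assume for simplicity that the re-labeling runs before the remote transactions are appended to the log (so every timeline-0 transaction above `:alpha_tx` is local):

```sql
-- Move each divergent local transaction (α, β] off the primary timeline
-- by labeling it with the merge commit's transaction ID.
UPDATE transactions
SET timeline_id = :merge_tx
WHERE timeline_id = 0
  AND tx > :alpha_tx;          -- :alpha_tx is the shared parent's tx ID

-- The merge commit itself lands on the primary timeline.
INSERT INTO transactions (e, a, v, tx, added, timeline_id)
SELECT e, a, v, :merge_tx, added, 0
FROM merge_commit_datoms;      -- hypothetical staging of M's computed datoms
```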
Postconditions:

- The `datoms` table includes the merge commit, derived from the local divergence, because it is present on α.
- `SELECT * FROM transactions WHERE timeline_id = 0` includes all published datoms, including those from the merge, plus any unsynced datoms.
- `SELECT * FROM transactions WHERE timeline_id = 0 AND tx_id < $remote_head` limits the log to only published datoms, excluding unsynced datoms.
- The transaction log includes sufficient raw data to represent the divergence for later exploration or broad search.
- Every merge commit is a merge between `0` and the timeline labeled with its `tx_id`.
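That last postcondition means divergent history stays queryable. A sketch of the forensics, again against the assumed simplified schema:

```sql
-- Reconstruct the alternative history that was folded into merge commit :merge_tx.
SELECT e, a, v, tx, added
FROM transactions
WHERE timeline_id = :merge_tx
ORDER BY tx, e, a;
```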
Advantages:
- We only need one transaction log, and the changes to Mentat are minimal.
- No additional space of timeline IDs is needed, and the representation of timeline IDs is compact (one or two bytes per datom, depending on how clever we are; e.g., by subtracting the base transaction ID from the timeline ID).
- All data is preserved for debugging, querying, etc.
- The primary timeline is linear and obvious — it’s every transaction on timeline 0.
Disadvantages:
- Queries over the transaction log must filter by timeline; it includes data that is not part of the primary timeline.
- Merges involve re-labeling each local source datom. (Though this is likely to be a minority of the work involved!)
- Space is not routinely recovered from local history, though pruning of non-primary timelines is a good place to discard data.
- Non-primary history is most definitely a second-class citizen: without further work it isn’t materialized and is not trivial to query.