This allows code to run before and after a schema fragment is
added for the first time.
The anticipated use for this is twofold:
1. To do initial setup, e.g., defining global entities.
2. To 'adopt' unmanaged attributes already defined in the store.
Here the 'pre' function would manually alter or retract attributes
so that transacting the new schema datoms can complete.
For example, if attributes :foo/bar and :foo/baz will be unchanged,
but :noo/zob needs to change from a string to an integer, the :none
pre-function can alter the ident, and the :none post-function can
migrate the data and clean up.
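A hedged sketch of how such a fragment and its :none hooks might look;
the fragment shape and the helpers `<transact!` and
`<migrate-zob-values!` are illustrative assumptions, not the actual
Datomish API:

```clojure
;; Hypothetical helpers; Datomish's real transaction functions are
;; channel-returning, but these signatures are assumptions.
(declare <transact! <migrate-zob-values!)

(def foo-schema-fragment
  {:name "com.example.foo"
   :version 2
   :attributes {:foo/bar {:db/valueType :db.type/string}
                :foo/baz {:db/valueType :db.type/string}
                :noo/zob {:db/valueType :db.type/long}}
   ;; :none = this fragment has never been managed before.
   :pre  {:none (fn [db]
                  ;; Rename the old string-typed attribute out of the
                  ;; way so the new :noo/zob datoms can be transacted.
                  (<transact! db [[:db/add :noo/zob :db/ident :noo/zob-old]]))}
   :post {:none (fn [db]
                  ;; Copy :noo/zob-old values into the new integer
                  ;; attribute, then retract the old datoms.
                  (<migrate-zob-values! db))}})
```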
This generalizes the transactor loop to allow callers to run
an arbitrary function within an `in-transaction!` body.
Combined with exposing `<report-transact-tx-data!`, this allows
an admittedly sophisticated consumer to conditionally query and
transact in a consistent way -- for example, cleaning up inconsistent
data and then transacting a new schema version.
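A sketch of the intended pattern, in Datomish's channel-returning
style; the arities of `in-transaction!` and
`<report-transact-tx-data!`, and the helpers `<find-broken-rows`,
`cleanup-datoms`, and `new-schema-datoms`, are assumptions for
illustration:

```clojure
;; Assumes something like
;; (require '[datomish.pair-chan :refer [go-pair <?]]);
;; the other declared names are hypothetical.
(declare in-transaction! <report-transact-tx-data!
         <find-broken-rows cleanup-datoms new-schema-datoms conn)

(in-transaction! conn
  (fn [db]
    (go-pair
      ;; Query and transact inside one transaction body, so the
      ;; cleanup and the new schema land consistently or not at all.
      (let [broken (<? (<find-broken-rows db))]
        (when (seq broken)
          (<? (<report-transact-tx-data! db (cleanup-datoms broken))))
        (<? (<report-transact-tx-data! db new-schema-datoms))))))
```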
Altering uniqueness and cardinality attributes works, with the exception
of adding uniqueness to an attribute that previously had none.
:db/noHistory and :db/isComponent changes are implemented but untested,
and aren't really supported by Datomish anyway.
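Concretely, the supported and unsupported alterations look something
like the following; the exact assertion shapes are assumed, and
:foo/quux is a hypothetical attribute:

```clojure
;; Works: relaxing cardinality on an existing attribute.
[[:db/add :foo/bar :db/cardinality :db.cardinality/many]]

;; Works: changing one uniqueness constraint to another.
[[:db/add :foo/baz :db/unique :db.unique/value]]

;; Not supported: adding :db/unique to an attribute that never had it.
[[:db/add :foo/quux :db/unique :db.unique/identity]]
```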
The metaphor we use is that of "evolution", where each "evolutionary
step" contains a number of different "generations". Entities in the
process of being resolved are increasingly "evolved" into simpler
generations, until no further evolution is possible.
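Stripped of domain detail, the shape of the loop is a fixed-point
iteration; `evolve-one-step` here is a placeholder for one
evolutionary step over all generations:

```clojure
(defn evolve
  "Repeatedly apply one evolutionary step until the entities stop
   changing, i.e., no further evolution is possible."
  [evolve-one-step entities]
  (let [evolved (evolve-one-step entities)]
    (if (= evolved entities)
      entities
      (recur evolve-one-step evolved))))
```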
The test would fail because we had an [a v] pair with a string
value, but <avs was looking for the fulltext rowid. Using
all_datoms correctly looks up the string value, at the cost of
crippling the speed of <avs.
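The schema detail behind this, roughly: datoms store the rowid of the
fulltext string, so a string-valued [a v] lookup must join back
through fulltext_values (which the all_datoms view does). The column
names below are assumptions from the description above:

```clojure
;; Illustrative SQL only; exact column names are assumptions.
(def lookup-by-string-value
  "SELECT d.e FROM datoms d
   JOIN fulltext_values ft ON d.v = ft.rowid
   WHERE d.a = ? AND ft.text = ?")
```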
This sorts fulltext values inserted in a single transaction, not across
transactions. Doing so makes the rowids assigned in the fulltext_values
table internally consistent, even as the order of entities and datoms
changes (as the transaction-applying algorithm evolves over time). The
test changes simply make the fulltext values sort easily.
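A minimal sketch of the ordering, assuming the transactor has already
collected the string values to be inserted in this transaction:

```clojure
(defn fulltext-insertion-order
  "Sort the distinct fulltext values added in one transaction, so
   fulltext_values rowids are assigned in a stable order regardless
   of how entities and datoms happened to be processed."
  [string-values]
  (sort (distinct string-values)))
```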
In theory, these fulltext values could be very large, and sorting might
be very expensive. In practice, we expect values to differ in their
first few characters, so each comparison is cheap and the sort costs
time roughly proportional to the number of fulltext values inserted,
not to their total size.