Sync needs to operate over a "mentat transaction", not just a "db transaction".
This shuffle allows internal mentat crates to consume InProgress, which models
the concept of a "mentat transaction".
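A minimal sketch of what this enables, assuming the `Store::begin_transaction` entry point of the time (the `sync_step` function and the exact import paths are illustrative, not the actual sync API):

```rust
use mentat::{InProgress, Store};
use mentat::errors::Result;

// Hypothetical sync entry point, now able to consume an InProgress.
fn sync_step(ip: &mut InProgress) -> Result<()> {
    let _ = ip;
    Ok(())
}

// Sync work and local transacts share one "mentat transaction" and
// commit (or roll back) together.
fn sync_in_one_transaction(store: &mut Store) -> Result<()> {
    let mut in_progress = store.begin_transaction()?;
    sync_step(&mut in_progress)?;
    in_progress.transact("[{:foo/bar 1}]")?; // illustrative local write
    in_progress.commit()
}
```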
Being able to derive a partition map from partition definitions and the
current state of the world (transactions), segmented by timelines, is
useful because it removes the need to keep materialized partition maps
up-to-date; at that point there are no materialized partition maps at all.
This comes in very handy when we start moving chunks of transactions off of our mainline.
The alternative would be materializing partition maps per timeline,
growing support for incremental "backwards updates" of the materialized maps, etc.
Our core partitions are defined in the 'known_parts' table during bootstrap,
and what used to be the 'parts' table is now a generated view that operates
over transactions to figure out each partition's index.
'parts' is defined for the main timeline. Querying parts for other timelines
or for particular timeline+tx combinations will look similar.
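To sketch the idea (hedged: the column names below are illustrative, not the actual schema), such a view needs nothing beyond the partition definitions and the entids already allocated by transactions:

```rust
// A hedged sketch of a generated view of this shape: the next index for
// each partition falls out of its definition plus the transactions that
// have already allocated entids within its range.
const CREATE_PARTS_VIEW: &str = r#"
    CREATE VIEW parts AS
    SELECT known_parts.part,
           known_parts.start,
           known_parts.end,
           coalesce(max(transactions.e) + 1, known_parts.start) AS idx
    FROM known_parts
    LEFT JOIN transactions
        ON transactions.e BETWEEN known_parts.start AND known_parts.end
    GROUP BY known_parts.part
"#;
```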
Normally we want to both materialize our changes (into 'datoms')
and commit source transactions into the 'transactions' table.
However, when moving transactions from timeline to timeline
we don't want to persist artifacts (rewind assertions), just their
materializations.
This patch expands the 'db' interface to allow for this split,
and changes the transactor's functions to take a crate-private 'action'
that defines the desired behaviour.
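Roughly, the action looks like this (a sketch; the variant names are assumptions):

```rust
// Crate-private knob for the transactor.
pub(crate) enum TransactorAction {
    // Materialize changes into 'datoms' without serializing the source
    // transaction into 'transactions'; used when moving transactions
    // between timelines.
    Materialize,
    // Materialize changes and serialize the source transaction as usual.
    MaterializeAndCommit,
}
```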
This is necessary for the timelines work ahead. When schema is being
moved off of the main timeline, we need to be able to retract it cleanly.
Retractions are only processed if the whole defining attribute set
is being retracted at once (:db/ident, :db/valueType, :db/cardinality).
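For example (hedged: the entid and ident are made up), a clean retraction must cover the whole defining set in one transaction:

```rust
// All three defining attributes retracted together. Retracting only
// :db/ident for the same entid would not be processed as a clean
// schema retraction.
const RETRACT_ATTRIBUTE: &str = r#"[
    [:db/retract 65538 :db/ident :person/name]
    [:db/retract 65538 :db/valueType :db.type/string]
    [:db/retract 65538 :db/cardinality :db.cardinality/one]
]"#;
```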
The timelines work starts to perform modifications on partitions
that go beyond simple allocations. This change pre-emptively protects
partition integrity by asserting that index modifications are legal.
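The shape of the assertion, as a hedged sketch (field names are illustrative, not the actual Partition type):

```rust
// A partition owns a fixed entid range; any modification of its index
// must stay inside that range.
struct Partition {
    start: i64, // first entid in the partition's range
    end: i64,   // exclusive upper bound of the range
    index: i64, // next entid to allocate
}

impl Partition {
    fn set_index(&mut self, new_index: i64) {
        assert!(self.start <= new_index && new_index < self.end,
                "illegal partition index: out of range");
        self.index = new_index;
    }
}
```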
Generally, I think that Mentat is using too many small traits rather
than wrapping types into newtypes. Wrapping into newtypes is cheap in
Rust, and it makes it easier to reason about the code.
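For instance, a newtype like this costs nothing at runtime but gives the type checker something to hold on to:

```rust
// An entid that is known to exist in the store; the compiler now keeps
// us from confusing it with an arbitrary i64.
pub struct KnownEntid(pub i64);
```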
* Part 1: Extract low-level test framework into mentat_db::debug for re-use.
* Part 2: Improve assert_matches!.
This corrects an incorrectly named conversion: a method taking &self
but returning an owned value should be named like `to_FOO(&self) -> FOO`. (A
reference-to-reference conversion should be named like `as_FOO(&self)
-> &FOO`, and a consuming conversion like `into_FOO(self)
-> FOO`.)
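A self-contained illustration of the convention (`Foo` is a stand-in type):

```rust
struct Foo(String);

impl Foo {
    // &self -> owned value: to_FOO.
    fn to_text(&self) -> String { self.0.clone() }
    // &self -> reference: as_FOO.
    fn as_text(&self) -> &str { &self.0 }
    // self -> owned value, consuming: into_FOO.
    fn into_text(self) -> String { self.0 }
}
```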
In addition, this pushes the conversion via `to_edn` into the
`assert_matches!` macro, which lets consumers get a real data
structure (say, `Datoms`) and use it directly before or after
`assert_matches!`. (Currently, consumers get back `edn::Value`
instances, which aren't nearly as pleasant to use as real data
structures.)
Co-authored-by: Grisha Kruglov <gkruglov@mozilla.com>
* Part 3: Use mentat_db::debug framework in Tolstoy crate.
The advantage of this approach is that compiling Tolstoy (or anything
that's not db, really) can be quite a bit faster than compiling db.
* Add a top-level "syncable" feature.
Tested with:
cargo test --all
cargo test --all --no-default-features
cargo build --manifest-path tools/cli/Cargo.toml --no-default-features
cargo run --manifest-path tools/cli/Cargo.toml --no-default-features debugcli
Co-authored-by: Nick Alexander <nalexander@mozilla.com>
* Add 'syncable' feature to 'db' crate to conditionally derive serialization for Partition*
This is leading up to syncing with partition support.
This is all part of moving the entity builder away from building term
instances and toward building entity instances. One of the nice
things that the existing term interface does is allow consumers to use
lightweight reference counted tempid handles; I don't want to lose
that, so we'll build it into the entity data structures directly.
We haven't observed performance issues using `Arc` instead of `Rc`,
and we want to be able to include things that are interned (including,
soon, `TempId` instances) in errors coming out of the
transactor. (And `Rc` isn't `Sync`, so it can't be included in errors
directly.)
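The constraint, demonstrated (the commented line is the one that fails to compile):

```rust
use std::rc::Rc;
use std::sync::Arc;

// Error types that cross thread boundaries need to be Send + Sync.
fn assert_sync<T: Sync>() {}

fn main() {
    assert_sync::<Arc<String>>();    // fine: Arc<String> is Sync
    // assert_sync::<Rc<String>>();  // compile error: Rc is not Sync
}
```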
It's not great to keep lifting functionality higher and higher up the
crate hierarchy, but we really do want to intern while we parse.
Eventually, I expect that we will split the `edn` crate into `types`
and `parsing`, and the `types` crate can depend on a more efficient
interning dependency.
* Delete the (apparently unused) EntId
* Rename edn's Entid to EntidOrIdent to avoid confusion with the Entid that's actually an i64
* Fix travis beta bustage (This is actually unrelated to entids, but is a trivial fix nonetheless)
* Part 3: Parameterize Entity by value type.
This isn't quite right, because after parsing, we shouldn't care
about `edn::ValueAndSpan`; we should care only about `edn::Value`.
However, I think we can drop `ValueAndSpan` entirely if we just use
`rust-peg` (and its simpler error messages) rather than a mix of
`rust-peg` and `combine`.
In any case, this paves the way to transacting `Entity<TypedValue>`,
which is a nice step towards building general entities.
* Part 1: Add AttributePlace.
* Part 2: Name other places EntityPlace and ValuePlace.
Now we're consistent and closer to self-documenting. Both matter more
as we expose `Entity` as the thing to build for programmatic usage.
* Part 4: Allow Ident and TempId in ValuePlace.
The parser will never produce these, since determining whether an
integer/keyword or string is an ident or a tempid, respectively, in
the value place requires the schema.
But a builder that produces `Entity` instances directly will want to
produce these.
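A simplified, hedged sketch of the shape (the real enum has more variants, e.g. lookup refs and nested structures):

```rust
// The value place, parameterized by the underlying value type. Ident and
// TempId can only be produced by builders: resolving them requires the
// schema, which the parser doesn't have.
pub enum ValuePlace<V> {
    Entid(i64),     // a concrete entity id
    Ident(String),  // e.g. :person/name; builder-produced
    TempId(String), // e.g. "t1"; builder-produced
    Atom(V),        // a plain value
}
```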
This should address #663, by re-inserting type checking in the
transactor stack after the entry point used by the term builder.
Before this commit, we were using an SQLite UNIQUE index to assert
that no `[e a]` pair, with `a` a cardinality one attribute, was
asserted more than once. However, that's not in line with Datomic,
which treats transaction inputs as a set and allows a single datom
like `[e a v]` to appear multiple times. It's both awkward and not
particularly efficient to look for _distinct_ repetitions in SQL, so
we accept some runtime cost in order to check for repetitions in the
transactor. This will allow us to address #532, which is really about
whether we treat inputs as sets. A side benefit is that we can
provide more helpful error messages when the transactor does detect
that the input truly violates the cardinality constraints of the
schema.
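A hedged sketch of the flavor of this check (the actual implementation builds a trie, as described below):

```rust
use std::collections::BTreeMap;

// Under input-as-set semantics, exact duplicates collapse; two distinct
// values for a cardinality-one attribute on one entity is a conflict we
// can report precisely.
fn check_cardinality_one(terms: &[(i64, i64, String)]) -> Result<(), String> {
    let mut seen: BTreeMap<(i64, i64), &String> = BTreeMap::new();
    for &(e, a, ref v) in terms {
        if let Some(prev) = seen.insert((e, a), v) {
            if prev != v {
                return Err(format!(
                    "cardinality conflict: [{} {}] asserted with {:?} and {:?}",
                    e, a, prev, v));
            }
        }
    }
    Ok(())
}
```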
This commit builds a trie while error checking and collecting final
terms, which should be fairly efficient. It also allows a simpler
expression of input-provided :db/txInstant datoms, which in turn
uncovered a small issue with the transaction watcher, whereby the
watcher would not see non-input-provided :db/txInstant datoms.
This transition to Datomic-like input-as-set semantics allows us to
address #532. Previously, two tempids that upserted to the same entid
would produce duplicate datoms, and that would have been rejected by
the transactor -- correctly, since we did not allow duplicate datoms
under the input-as-list semantics. With input-as-set semantics,
duplicate datoms are allowed; and that means that we must allow
tempids to be equivalent, i.e., to resolve to the same entid.
To achieve this, we:
- index the set of tempids
- identify tempid indices that share an upsert
- map tempids to a dense set of contiguous integer labels
We use the well-known union-find algorithm, as implemented by
petgraph, to efficiently manage the set of equivalent tempids.
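The core of it, hedged (index construction and the final densifying pass are elided):

```rust
use petgraph::unionfind::UnionFind;

// Tempids are indexed 0..n; each shared upsert unions two indices, and
// find() then gives every tempid a representative. A further pass maps
// representatives onto dense, contiguous labels.
fn equivalence_classes(n: usize, shared_upserts: &[(usize, usize)]) -> Vec<usize> {
    let mut uf: UnionFind<usize> = UnionFind::new(n);
    for &(a, b) in shared_upserts {
        uf.union(a, b);
    }
    (0..n).map(|i| uf.find(i)).collect()
}
```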
Along the way, I've fixed and added tests for two small errors in the
transactor. First, don't drop datoms resolved by upsert (#679).
Second, ensure that complex upserts are allocated.
I don't know quite what happened here. The Clojure implementation
correctly kept complex upserts that hadn't resolved as complex
upserts (see
9a9dfb502a/src/common/datomish/transact.cljc (L436))
and then allocated complex upserts if they didn't resolve (see
9a9dfb502a/src/common/datomish/transact.cljc (L509)).
Based on the code comments, I think the Rust implementation must have
incorrectly tried to optimize by handling all complex upserts in at
most a single generation of evolution, and that's just not correct.
We're effectively implementing a topological sort, using very specific
domain knowledge, and it's not true that a node in a topological sort
can be considered only once!
* Make properties on NamespacedKeyword/NamespacedSymbol private
* Use only a single String for NamespacedKeyword/NamespacedSymbol
* Review comments.
* Remove unsafe code in namespaced_name.
Benchmarking shows approximately zero change.
* Allow the types of ns and name to differ when constructing a NamespacedName.
* Make symbol namespaces optional.
* Normalize names of keyword/symbol constructors.
This will make the subsequent refactor much less painful.
* Use expect not unwrap.
* Merge Keyword and NamespacedKeyword.
There are a few reasons to do this:
- it's difficult to add symbol interning to combine-based parsers like
tx-parser -- literally every type changes to reflect the interner,
and that means every convenience macro we've built needs to change.
It's trivial to add interning to rust-peg-based parsers.
- combine has rolled forward to 3.2, and I spent a similar amount of
time investigating how to upgrade tx-parser (to take advantage of
the new parser! macros in combine that I think are necessary for
adapting to changing types) as I did just converting to rust-peg.
- it's easy to improve the error messages in rust-peg, whereas I have
tried twice to improve the nested error messages in combine and am
stumped.
- it's roughly 4x faster to parse strings directly as opposed to
edn::ValueAndSpan, and it'll be even better when we intern directly.
This is a stepping stone to transacting entities that are not based on
`edn::ValueAndSpan`. We need to turn some value places (general) into
entity places (restricted), and those restrictions are captured in
tx-parser right now. But for `TypedValue` value places, those
restrictions are encoded in the type itself. This lays the track to
accept other value types in value places, which is good for
programmatic builder interfaces.
* Refactor AttributeCache populator code for use from pull.
* Pre: add to_value_rc to Cloned.
* Pre: add From<StructuredMap> for Binding.
* Pre: clarify Store::open_empty.
* Pre: StructuredMap cleanup.
* Pre: clean up a doc test.
* Split projector crate. Pass schema to projector.
* CLI support for printing bindings.
* Add and use ConjoiningClauses::derive_types_from_find_spec.
* Define pull types.
* Implement pull on top of the attribute cache layer.
* Add pull support to the projector.
* Parse pull expressions.
* Add simple pull support to connection objects.
* Tests for pull.
* Compile with Rust 1.25.
The only choice involved in this commit is that of replacing the
anonymous lifetime '_ with a named lifetime for the cache; since we're
accepting a Known, which includes the cache in question, I think it's
clear that we expect the function to apply to any given cache
lifetime.
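The flavor of the change, as a tiny hedged sketch (`Known` here is a stand-in, not the real type):

```rust
struct Cache;
struct Known<'s, 'c> {
    schema: &'s str,
    cache: Option<&'c Cache>,
}

// Rust 1.25 has no anonymous lifetime '_, so the cache lifetime gets a
// name instead.
fn pull<'s, 'c>(known: Known<'s, 'c>) -> Option<&'c Cache> {
    known.cache
}
```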
* Review comments.
* Bail on unnamed attribute.
* Make assert_parse_failure_contains safe to use.
* Rework query parser to report better errors for pull.
* Test for mixed wildcard and simple attribute.
This innocuous-looking change (upserts_ev -> upserts_e -> resolved in
all situations, rather than upserts_ev -> resolved in some situations)
is a significant change in semantics and assumptions in the
transactor. Witness the large comment being removed about the same
tempid resolving in different generations!
To support this change, we provide more holistic errors for
conflicting upserts, which entails collecting some (relatively
expensive) diagnostic data.
I left in some debug logging, simply since it shouldn't hurt in
general, and will likely be useful for the next bug we see in the
transactor.
We don't yet have a logging system for production use, but I'd like to
start experimenting with log, which seems to be (close to) a Rust
standard. We're already using it in mentat_cli.
:db/tx (and Datomic's version, :datomic/tx) suffer from the same
ambiguities that [a v] lookup references do -- determining the type of
the result is context sensitive. (In this case, is :db/tx a reference
to the current transaction ID, or is it a valid keyword?) This commit
addresses the ambiguity by introducing a notion of transaction
functions, and provides a little scaffolding for adding more (should
the need arise). I left the scaffolding in place rather than handling
just (transaction-tx) because I started trying to
implement (transaction-instant) as well, which is more difficult --
see the comments.
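Hedged usage example (the attribute is made up): in the entity place, (transaction-tx) resolves eagerly to the current transaction's entid, so metadata can be asserted on the transaction itself:

```rust
// No ambiguity here about keyword vs. reference: the transaction
// function is syntactically a function call.
const TX_METADATA: &str = r#"[
    [:db/add (transaction-tx) :session/device-name "laptop"]
]"#;
```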
It's worth noting that this approach generalizes more or less directly
to ?input variables, since those can be eagerly bound like the
implemented transaction function (transaction-tx).
* Pre: eliminate some occurrences of Rc, largely through the magic of Into.
* Pre: introduce FromRc to convert between refcounted types.
* Introduce ValueRc as an abstraction over the Rc/Arc choice (see the sketch after this list).
* Move Cloned to core.
* Move CString-creation methods to TypedValue.
* Finish transition.
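The abstraction itself is tiny; a hedged sketch:

```rust
use std::sync::Arc;

// One alias, one place to make the Rc/Arc decision. It's Arc today so
// that interned values can appear in Send + Sync error types.
pub type ValueRc<T> = Arc<T>;
```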