* Pre: remove remnants of 'open_empty'
* Pre: Cleanup 'datoms' table after a timeline move
Since timeline move operations use a transactor, they generate a
"phantom" 'tx' and a 'txInstant' assertion. It is "phantom" in a sense
that it was never present in the 'transactions' table, and is entirely
synthetic as far as our database is concerned.
It's an implementational artifact, and we were not cleaning it up.
It becomes a problem when we start inserting transactions after a move.
Once the transactor clashes with the phantom 'tx', it will retract the
phantom 'txInstant' value, leaving the transactions log in an incorrect state.
This patch adds a test for this scenario and elects the easy way out: simply
remove the offending 'txInstant' datom.
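To make the fix concrete, here is a minimal sketch of that clean-up, assuming a recent rusqlite and Mentat's 'datoms' layout (e, a, v, tx); the :db/txInstant attribute's entid is passed in rather than hard-coded, and none of this is the patch's literal code.

```rust
// Hedged sketch: delete the synthetic :db/txInstant assertion left behind
// for the phantom 'tx'. Table and column names follow Mentat's datoms
// layout; the attribute entid is supplied by the caller.
fn remove_phantom_tx_instant(
    conn: &rusqlite::Connection,
    phantom_tx: i64,
    tx_instant_attr: i64,
) -> rusqlite::Result<usize> {
    conn.execute(
        "DELETE FROM datoms WHERE e = ?1 AND a = ?2",
        rusqlite::params![phantom_tx, tx_instant_attr],
    )
}
```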
* Part 1: Sync without support for side-effects
A "side-effect" is defined here as a mutation of a remote state as part
of the sync.
If, during a sync we determine that a remote state needs to be changed, bail out.
This generally supports different variations of "baton-passing" syncing, where clients
will succeed syncing if each change is non-conflicting.
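Roughly, the policy looks like the sketch below; SyncOutcome and remote_needs_change are illustrative names, not Tolstoy's actual API.

```rust
// Illustrative only: bail out instead of mutating remote state.
#[derive(Debug, PartialEq)]
enum SyncOutcome {
    /// Local and remote converged without touching the remote.
    Done,
    /// Converging would have required changing remote state; we bailed.
    BailedOnSideEffect,
}

fn sync_step(remote_needs_change: bool) -> SyncOutcome {
    if remote_needs_change {
        SyncOutcome::BailedOnSideEffect
    } else {
        SyncOutcome::Done
    }
}
```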
* Part 2: Support basic "side-effects" syncing
This patch introduces the concept of a follow-up sync. If a sync generated
a "merge transaction" (a regular transaction that contains assertions
necessary for local and remote transaction logs to converge), then
this transaction needs to be uploaded in a follow-up sync.
The generated SyncReport indicates whether a follow-up sync is required.
Follow-up sync itself is just a regular sync. If remote state did not change,
it will result in a simple RemoteFastForward. Otherwise, we'll continue
merging and requesting a follow-up.
Schema alterations are explicitly not supported.
As local transactions are rebased on top of the remote ones, the following changes happen:
- entids are changed into tempids, letting the transactor upsert :db/unique values
- entids for retractions are changed into lookup-refs if we're confident they'll succeed
-- otherwise, retractions are dropped on the floor
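The overall flow can be pictured as the loop below; SyncReport, sync_once, and the variant names are approximations for illustration, not Tolstoy's exact types.

```rust
// Rough sketch of the follow-up loop: keep syncing until a pass no longer
// produces a merge transaction that must be uploaded.
#[derive(Debug)]
enum SyncReport {
    /// Remote hadn't changed; nothing further to upload.
    RemoteFastForward,
    /// A merge transaction was created locally and needs a follow-up sync.
    FollowupRequired,
}

fn sync_once(attempt: u32) -> SyncReport {
    // Stand-in for a real sync: pretend the first pass produced a merge
    // transaction and the second pass fast-forwarded the remote.
    if attempt == 0 {
        SyncReport::FollowupRequired
    } else {
        SyncReport::RemoteFastForward
    }
}

fn sync_to_convergence(max_followups: u32) -> SyncReport {
    let mut attempt = 0;
    loop {
        match sync_once(attempt) {
            SyncReport::FollowupRequired if attempt < max_followups => attempt += 1,
            report => return report,
        }
    }
}

fn main() {
    // Two passes: merge, then a simple remote fast-forward.
    println!("{:?}", sync_to_convergence(5));
}
```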
* Post: use a macro for more readable tests
* Tolstoy README
This was a work-around for Tolstoy, which couldn't gracefully handle
syncing a store with a bootstrap transaction. Tolstoy now handles
that single transaction, so this is no longer necessary.
* Add a top-level "syncable" feature.
Tested with:
cargo test --all
cargo test --all --no-default-features
cargo build --manifest-path tools/cli/Cargo.toml --no-default-features
cargo run --manifest-path tools/cli/Cargo.toml --no-default-features debugcli
Co-authored-by: Nick Alexander <nalexander@mozilla.com>
* Add 'syncable' feature to 'db' crate to conditionally derive serialization for Partition*
This is leading up to syncing with partition support.
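The shape of the change is the usual cfg_attr pattern; the fields below are illustrative, not the db crate's exact Partition definition.

```rust
// Only derive serde traits when the "syncable" feature is enabled.
// (Assumes serde with its "derive" feature as an optional dependency.)
#[cfg(feature = "syncable")]
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug)]
#[cfg_attr(feature = "syncable", derive(Serialize, Deserialize))]
pub struct Partition {
    /// First entid in the partition.
    pub start: i64,
    /// Next entid to be allocated in the partition.
    pub index: i64,
}
```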
With the transition toward parsing with `rust-peg` and away from
`combine`, we're not using some of the many helpers we built to
support our unusual `combine` usage. They can just go!
I elected to keep Tolstoy using `failure::Error`, because Tolstoy
looks rather more like a high-level application (and will continue to
do so for a while) than a production-ready mid- or low-level API.
It's not possible to do meaningful clean-up (such as saving history)
if we use exit() to quit. Instead, each handled command returns a
boolean requesting exit. I elected not to allow ".exit" when
processing commands from the command line; it might be useful to
accept that. In general, though, REPLs that accept "-c
'commands'" on the command line exit after processing those commands,
so I'd rather think more deeply about that model than build in ".exit"
with our existing system.
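The pattern is roughly the one sketched here; Command and Repl are stand-ins, not the CLI's real types.

```rust
// Handlers signal "please exit" instead of calling std::process::exit(),
// so the caller can still do clean-up such as saving history.
enum Command {
    Help,
    Exit,
}

struct Repl {
    history: Vec<String>,
}

impl Repl {
    /// Returns true if the caller should leave the read-eval-print loop.
    fn handle_command(&mut self, cmd: Command) -> bool {
        match cmd {
            Command::Help => {
                println!(".help  show this message\n.exit  quit");
                false
            }
            Command::Exit => true,
        }
    }
}

fn main() {
    let mut repl = Repl { history: vec![".help".into()] };
    for cmd in vec![Command::Help, Command::Exit] {
        if repl.handle_command(cmd) {
            // Clean-up that a bare exit() would have skipped.
            println!("saving {} history entries", repl.history.len());
            break;
        }
    }
}
```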
I don't really understand why we were using `linefeed::Reader`
directly, but reading is not the full set of linefeed features we want
to access. I think the `linefeed::Interface` should be owned by the
`Repl`, not the `InputReader`, but it's a little awkward to share
access with that configuration, so I'm not going to lift the ownership
until I have a reason to.
I think the "--no-tty" argument might be useful for running inside
Emacs. Along the way, I made read_stdin() strip the trailing newline,
which agrees with InputReader::read_line().
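For reference, a minimal version of that behavior looks like this (not the CLI's exact code):

```rust
use std::io::Read;

// Read everything from stdin (e.g. under --no-tty) and strip a single
// trailing newline so the result matches what read_line() would hand back.
fn read_stdin() -> String {
    let mut s = String::new();
    std::io::stdin()
        .read_to_string(&mut s)
        .expect("failed to read stdin");
    if s.ends_with('\n') {
        s.pop();
        if s.ends_with('\r') {
            s.pop();
        }
    }
    s
}
```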
What was happening was that ["[;", "]"] would get glued into "[; ]",
which of course can never complete.
It would be good to add tests of this, but the existing multi-line
`InputReader` makes that challenging and I don't want to invest the
time to improve it: I expect it to be overhauled as part of a
transition away from parsing with `combine` and toward parsing with
`rust-peg`.
This is neat, because currently at the FFI boundary we're primarily concerned
with verbalizing our errors. It doesn't matter which 'error' the Result
wraps, as long as it can be displayed.
Once we're past the prototyping stage, it might be a good idea to formalize this.
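As a sketch of what "verbalizing" means in practice (this is not the actual FFI layer, just the shape of the idea):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// All we need from the wrapped error is its Display output; the caller is
// expected to reclaim the string later via CString::from_raw.
fn error_to_c_string<E: std::fmt::Display>(err: &E) -> *mut c_char {
    CString::new(err.to_string())
        .unwrap_or_else(|_| CString::new("error message contained NUL").expect("static string"))
        .into_raw()
}
```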
* Make properties on NamespacedKeyword/NamespacedSymbol private
* Use only a single String for NamespacedKeyword/NamespacedSymbol
* Review comments.
* Remove unsafe code in namespaced_name.
Benchmarking shows approximately zero change.
* Allow the types of ns and name to differ when constructing a NamespacedName.
* Make symbol namespaces optional.
* Normalize names of keyword/symbol constructors.
This will make the subsequent refactor much less painful.
* Use expect not unwrap.
* Merge Keyword and NamespacedKeyword.
There are a few reasons to do this:
- it's difficult to add symbol interning to combine-based parsers like
tx-parser -- literally every type changes to reflect the interner,
and that means every convenience macro we've built needs to change.
It's trivial to add interning to rust-peg-based parsers.
- combine has rolled forward to 3.2, and I spent a similar amount of
time investigating how to upgrade tx-parser (to take advantage of
the new parser! macros in combine that I think are necessary for
adapting to changing types) as I did just converting to rust-peg.
- it's easy to improve the error messages in rust-peg, whereas I have
tried twice to improve the nested error messages in combine and am
stumped.
- it's roughly 4x faster to parse strings directly as opposed to
edn::ValueAndSpan, and it'll be even better when we intern directly.
* Refactor AttributeCache populator code for use from pull.
* Pre: add to_value_rc to Cloned.
* Pre: add From<StructuredMap> for Binding.
* Pre: clarify Store::open_empty.
* Pre: StructuredMap cleanup.
* Pre: clean up a doc test.
* Split projector crate. Pass schema to projector.
* CLI support for printing bindings.
* Add and use ConjoiningClauses::derive_types_from_find_spec.
* Define pull types.
* Implement pull on top of the attribute cache layer.
* Add pull support to the projector.
* Parse pull expressions.
* Add simple pull support to connection objects.
* Tests for pull.
* Compile with Rust 1.25.
The only choice involved in this commit is that of replacing the
anonymous lifetime '_ with a named lifetime for the cache; since we're
accepting a Known, which includes the cache in question, I think it's
clear that we expect the function to apply to any given cache
lifetime.
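A self-contained illustration of that choice (Schema and Cache are stand-ins for Mentat's types):

```rust
struct Schema;
struct Cache;

// Known carries both the schema and, optionally, a cache.
struct Known<'s, 'c> {
    schema: &'s Schema,
    cache: Option<&'c Cache>,
}

// Naming 'c (rather than using an anonymous lifetime) states explicitly that
// the function applies to whichever cache lifetime the caller's Known carries.
fn cached<'s, 'c>(known: &Known<'s, 'c>) -> Option<&'c Cache> {
    known.cache
}

fn main() {
    let (schema, cache) = (Schema, Cache);
    let known = Known { schema: &schema, cache: Some(&cache) };
    assert!(cached(&known).is_some());
    let _ = known.schema;
}
```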
* Review comments.
* Bail on unnamed attribute.
* Make assert_parse_failure_contains safe to use.
* Rework query parser to report better errors for pull.
* Test for mixed wildcard and simple attribute.
We don't yet have a logging system for production use, but I'd like to
start experimenting with log, which seems to be (close to) a Rust
standard. We're already using it in mentat_cli.
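For instance (env_logger is just one common backend choice, not necessarily what mentat_cli wires up):

```rust
// Library code logs through the `log` facade; the binary picks the backend.
#[macro_use]
extern crate log;

fn main() {
    env_logger::init();
    info!("cache warmed in {} ns", 12_345);
    debug!("attribute {} marked for caching", ":foo/bar");
}
```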
* Use the cache to make constant queries super fast.
* Fix translate tests to match: we no longer generate SQL for many of them!
* Accumulate additions and removals into the cache.
* Make attribute cache clone-on-write; store it in Metadata.
* Allow caching of fulltext attributes, interning strings.
* Add a prepared query command to CLI.
* Print nanoseconds in the REPL. This is a good problem to have.
* Better CLI timing.
* Use release for 'cargo cli', debug for 'cargo debugcli'.
* Don't enable debug symbols in release builds.
* Clean up CLI code. Fixed order for help.
* Column-align help output.
This puts caching in mentat_db, adds a reverse lookup capability for
unique attributes, and populates bidirectional caches with a single
SQL cursor walk.
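The core idea can be sketched like this; the types are stand-ins, not mentat_db's real cache structures:

```rust
use std::collections::BTreeMap;

// One pass over (entid, value) rows for a :db/unique attribute fills both
// the forward (entid -> value) and reverse (value -> entid) maps.
#[derive(Default)]
struct UniqueAttributeCache {
    forward: BTreeMap<i64, String>,
    reverse: BTreeMap<String, i64>,
}

impl UniqueAttributeCache {
    fn populate<I: IntoIterator<Item = (i64, String)>>(rows: I) -> Self {
        let mut cache = UniqueAttributeCache::default();
        for (e, v) in rows {
            // A single cursor walk feeds both directions.
            cache.reverse.insert(v.clone(), e);
            cache.forward.insert(e, v);
        }
        cache
    }
}

fn main() {
    let cache = UniqueAttributeCache::populate(vec![(65536, "alice".to_string())]);
    assert_eq!(cache.reverse.get("alice"), Some(&65536));
    assert_eq!(cache.forward.get(&65536).map(String::as_str), Some("alice"));
}
```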
Differentiate between begin_read and begin_uncached_read.
Note that we still allow toggling within InProgress, because there might be
transient local state that makes starting a new transaction impossible.