Commit graph

21 commits

Author SHA1 Message Date
Nick Alexander
cbffe5e545 Use rust-peg for tx parsing.
There are a few reasons to do this:

- it's difficult to add symbol interning to combine-based parsers like
  tx-parser -- literally every type changes to reflect the interner,
  and that means every convenience macro we've built needs to change.
  It's trivial to add interning to rust-peg-based parsers.

- combine has rolled forward to 3.2, and I spent a similar amount of
  time investigating how to upgrade tx-parser (to take advantage of
  the new parser! macros in combine that I think are necessary for
  adapting to changing types) as I did just converting to rust-peg.

- it's easy to improve the error messages in rust-peg, whereas I have
  tried twice to improve the nested error messages in combine and am
  stumped.

- it's roughly 4x faster to parse strings directly than to parse
  edn::ValueAndSpan, and it'll be even better when we intern directly.
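
As a rough illustration of parsing straight from `&str`, here is a minimal rust-peg sketch; the grammar, rule names, and the modern `peg` macro syntax are assumptions for this example, not Mentat's actual tx grammar.

```
// Hypothetical rules, for illustration only -- not Mentat's actual grammar.
peg::parser! {
    grammar tx_grammar() for str {
        // An entid here is just a run of ASCII digits.
        pub rule entid() -> i64
            = n:$(['0'..='9']+) { n.parse().unwrap() }

        // A keyword such as :db/add, returned as a slice of the input.
        pub rule keyword() -> &'input str
            = k:$(":" ['a'..='z' | 'A'..='Z' | '0'..='9' | '.' | '/' | '-' | '_']+) { k }
    }
}

fn main() {
    assert_eq!(tx_grammar::entid("42").unwrap(), 42);
    assert_eq!(tx_grammar::keyword(":db/add").unwrap(), ":db/add");
}
```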
2018-05-10 10:24:05 -07:00
Nick Alexander
32ed56685e (tx) Replace :db/tx with (transaction-tx) transaction function and broaden support. (#664)
:db/tx (and Datomic's version, :datomic/tx) suffer from the same
ambiguities that [a v] lookup references do -- determining the type of
the result is context sensitive.  (In this case, is :db/tx a reference
to the current transaction ID, or is it a valid keyword?)  This commit
addresses the ambiguity by introducing a notion of transaction
functions, and provides a little scaffolding for adding more (should
the need arise).  I left the scaffolding in place rather than handling
just (transaction-tx) because I started trying to
implement (transaction-instant) as well, which is more difficult --
see the comments.

It's worth noting that this approach generalizes more or less directly
to ?input variables, since those can be eagerly bound like the
implemented transaction function (transaction-tx).
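
A minimal sketch of what "eagerly bound" means here, with hypothetical types that are not Mentat's actual representation:

```
// Hypothetical types for illustration only.
#[derive(Clone, Copy, Debug)]
enum TxFunction {
    /// (transaction-tx): the entid of the transaction currently being applied.
    TransactionTx,
}

fn resolve(f: TxFunction, current_tx: i64) -> i64 {
    match f {
        // Eagerly bound: the transaction entid is already known before any
        // individual entity in the transaction is processed.
        TxFunction::TransactionTx => current_tx,
    }
}

fn main() {
    assert_eq!(resolve(TxFunction::TransactionTx, 268435456), 268435456);
}
```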
2018-04-26 19:32:14 -07:00
Richard Newman
1817ce7c0b Performance and cleanup. r=emily
* Use fixed-size arrays for bootstrap datoms, not vecs.
* Wide-ranging cleanup.

    This commit:
    - Deletes some dead code.
    - Marks some functions only used by tests as cfg(test).
    - Adds pub(crate) to a bunch of functions.
    - Cleans up a few other nits.
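
For reference, a tiny illustration of the two idioms mentioned; the function names and values here are hypothetical:

```
// Compiled only for `cargo test`; invisible to normal builds.
#[cfg(test)]
fn sample_datom() -> (i64, &'static str, i64) {
    (100, ":db/ident", 200)
}

// Visible throughout this crate, but not exported from it.
pub(crate) fn is_bootstrap_entid(e: i64) -> bool {
    // Purely illustrative boundary value.
    e < 0x10000000
}

fn main() {
    assert!(is_bootstrap_entid(100));
}
```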
2018-03-06 09:03:00 -08:00
Edouard Oger
3bf7459315 Allow customers to assert facts about the current transaction. (#225) r=rnewman
Also move `now` into core, implement microsecond truncation.

This is so we don't return a more granular -- and thus subtly different --
timestamp in a `TxReport` than we put into the store.
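
A minimal sketch of the truncation, assuming chrono's `DateTime<Utc>`; this is not Mentat's actual `now` implementation:

```
use chrono::{DateTime, TimeZone, Utc};

// Drop sub-microsecond precision so the timestamp reported in a TxReport is
// exactly the one that was written to the store.
fn truncate_to_micros(t: DateTime<Utc>) -> DateTime<Utc> {
    let micros = t.timestamp_subsec_micros();          // 0..=999_999
    Utc.timestamp_opt(t.timestamp(), micros * 1_000)   // back to nanoseconds
        .single()
        .expect("in-range timestamp")
}

fn main() {
    let now = truncate_to_micros(Utc::now());
    assert_eq!(now.timestamp_subsec_nanos() % 1_000, 0);
}
```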
2018-01-29 16:46:04 -08:00
Richard Newman
ef9f2d9c51 Don't allow violation of cardinality-one restrictions within a single tx. (#531) r=nalexander 2018-01-23 15:11:38 -08:00
Richard Newman
6797a606b5 Preliminary work for vocabulary management. r=emily,nalexander
Pre: export AttributeBuilder from mentat_db.
Pre: fix module-level comment for tx/src/entities.rs.
Pre: rename some `to_` conversions to `into_`.
Pre: make AttributeBuilder::unique less verbose.
Pre: split out a HasSchema trait to abstract over Schema.
Pre: rename SchemaMap/schema_map to AttributeMap/attribute_map.
Pre: TypedValue/NamespacedKeyword conversions.
Pre: turn Unique and ValueType into TypedValue::Keyword.
Pre: export IntoResult.
Pre: export NamespacedKeyword from mentat_core.
Pre: use intern_set in tx.
Pre: add InternSet::len.
Pre: comment gardening.
Pre: remove inaccurate TODO from TxReport comment.
2018-01-23 08:25:32 -08:00
Thom
9740cafdbd Automatically remove trailing whitespace from text files. (#527) r=rnewman
This was done using the following shell script:

```
find . -type f -not -path "*target*" \
       '(' -name '*.rs' -o -name '*.md' -o -name '*.toml' ')' -print0 | \
    xargs -0 sed -i '' -E 's/[[:space:]]*$//'
```

Which is admittedly imperfect, but manages to hit everything that was a problem in this repo.
2018-01-19 21:21:04 -06:00
Nick Alexander
59a710f80f Review comments: another test, add unreversed(). 2017-06-08 10:30:31 -07:00
Nick Alexander
d88823e7c4 Handle :attribute/_reverse in transactor. Fixes #187
There are two broad approaches:

1) Handle reverse attribute notation dynamically, in the style that
   Datomic does.  This is the most flexible, but it's not a good fit
   given that we produce strongly typed output from the parser.
   Strongly typed input to the transactor has had many benefits, so I
   don't want to roll it back for a relatively unimportant feature
   like reverse notation -- especially not since Mentat does not
   require :db.install/_attribute to modify schema attributes.

2) Handle reverse attribute notation in the parser itself, so that we can
   produce strongly typed parser output while restricting the input.
   I implemented this first and discovered that it's very difficult to
   give sensible error messages in common cases.

In any case, the bulk of the code is the same between the two
approaches, and I wrote the tests for the dynamic version (with error
output), so that's what I'm rolling with.

This patch preserves the existing indentation, to highlight the
differences.  The next patch will indent.
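
A minimal, hypothetical sketch of the dynamic rewriting at the heart of approach 1, phrased over plain strings rather than keyword types:

```
// Given an attribute in reverse notation (:ns/_name), return the forward
// attribute; otherwise return None. Purely illustrative string version.
fn unreversed(attribute: &str) -> Option<String> {
    let (ns, name) = attribute.split_once('/')?;
    let forward = name.strip_prefix('_')?;
    Some(format!("{}/{}", ns, forward))
}

fn main() {
    // [:db/add v :foo/_bar e] is treated as [:db/add e :foo/bar v].
    assert_eq!(unreversed(":foo/_bar"), Some(":foo/bar".to_string()));
    assert_eq!(unreversed(":foo/bar"), None);
}
```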
2017-06-08 10:30:31 -07:00
Nick Alexander
5369f03464 Improve parsing of nested edn::ValueAndSpan streams. r=rnewman (#393)
* Pre: Expose more in edn.

* Pre: Make it easier to work with ValueAndSpan.

with_spans() is a temporary hack, needed only because I don't care to
parse the bootstrap assertions from text right now.

* Part 1a: Add `value_and_span` for parsing nested `edn::ValueAndSpan` instances.

I wasn't able to abstract over `edn::Value` and `edn::ValueAndSpan`;
there are multiple obstacles.  I chose to roll with
`edn::ValueAndSpan` since it exposes the additional span information
that we will want to form good error messages in the future.

* Part 1b: Add keyword_map() parsing an `edn::Value::Vector` into an `edn::Value::Map`.

* Part 1c: Add `Log`/`.log(...)` for logging parser progress.

This is a terrible hack, but it sure helps to debug complicated nested
parsers.  I don't even know what a principled approach would look
like; since our parser combinators are so frequently expressed in
code, it's hard to imagine a data-driven interpreter that can help
debug things.

* Part 2: Use `value_and_span` apparatus in tx-parser/.

I break an abstraction boundary by returning a value column
`edn::ValueAndSpan` rather than just an `edn::Value`.  That is, the
transaction processor shouldn't care where the `edn::Value` it is
processing arose -- even if we care to track that information, we should
bake it into the `Entity` type.  We do this because we need to
dynamically parse the value column to support nested maps, and parsing
requires a full `edn::ValueAndSpan`.  Alternatively, we could cheat and
fake the spans when parsing nested maps, but that's potentially
expensive.

* Part 3: Use `value_and_span` apparatus in query-parser/.

* Part 4: Use `value_and_span` apparatus in root crate.

* Review comment: Make Span and SpanPosition Copy.

* Review comment: nits.

* Review comment: Make `or` be `or_exactly`.

I baked the eof checking directly into the parser, rather than using
the skip and eof parsers.  I also took the time to restore some tests
that were mistakenly commented out.

* Review comment: Extract and use def_matches_* macros.

* Review comment: .map() as late as possible.
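
For orientation, the data being parsed is shaped roughly like this; the field and type names are assumptions, not the edn crate's exact definitions:

```
// Rough shape only; the real edn crate's definitions differ in detail.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Span(u32, u32); // start and end offsets into the source text

#[derive(Clone, Debug, PartialEq)]
enum SpannedValue {
    Integer(i64),
    Text(String),
    Vector(Vec<ValueAndSpan>),
    // ... keywords, maps, sets, and so on
}

// Every parsed value carries the span it came from, which is what lets a
// nested parser report errors against the original source text.
#[derive(Clone, Debug, PartialEq)]
struct ValueAndSpan {
    inner: SpannedValue,
    span: Span,
}

fn main() {
    let v = ValueAndSpan { inner: SpannedValue::Integer(1), span: Span(0, 1) };
    assert_eq!(v.span, Span(0, 1));
}
```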
2017-04-06 10:06:28 -07:00
Nick Alexander
4b874deae1 Lookup refs, nested vector values, map notation. Fixes #180, fixes #183, fixes #284. (#382) r=rnewman
* Pre: Fix error in parser macros.

* Pre: Make test unwrapping more verbose.

* Pre: Make lookup refs be (lookup-ref a v) in the entity position.

This has the advantage of being explicit in all situations and
unambiguous at parse-time.  This choice agrees with the Clojure
implementation but not with Datomic.  Datomic treats [a v] as a lookup
ref, which is ambiguous at parse-time and is disambiguated at
transaction time in ways I do not understand.  We mooted making lookup
refs [[a v]] and outlawing nested value vectors in transactions, but
after implementing that approach I decided it was better to handle
lookup refs at parse time, which means outlawing nested value vectors
is not necessary.

* Handle lookup refs in the entity and value columns. Fixes #183.

* Pre 0a: Use a stack instead of into_iter.

* Pre 0b: Dedent.

* Pre 0c: Handle `e` after `v`.

This allows us to use the original `e` while handling `v`.

* Explode value lists for :db.cardinality/many attributes. Fixes #284.

* Parse and accept map notation. Fixes #180.

* Pre: Modernize add() and retract() into one add_or_retract().

* Pre: Add is_collection and is_atom to edn::Value.

* Pre: Differentiate atoms from lookup-refs in value position.

Initially, I expected to accept arbitrary edn::Value instances in the
value position, and to differentiate in the transactor.  However, the
implementation quickly became a two-stage parser, since we always
wanted to parse the resulting value position into some other known
thing using the tx-parser.  To save calls into the parser and to allow
the parser to move forward with a smaller API surface, I push as much
of this parsing as possible into the initial parse.

* Pre: Modernize entities().

* Pre: Quote edn::Value::Text in Display.

* Review comment: Add and use edn::Value::into_atom.

* Review comment: Use skip(eof()) throughout.

* Review comment: VecDeque instead of Vec.

* Review comment: Part 0: Rename TempId to TempIdHandle.

* Review comment: Part 1: Differentiate internal and external tempids.

This breaks an abstraction boundary by pushing the Internal/External
split up to the Entity level in tx/ and tx-parser/.  This just makes
it easier to explode Entity map notation instances into Entity
instances, taking an existing External tempid :db/id or generating a
new Internal tempid as appropriate.  To do this without breaking the
abstraction boundary would require adding flexibility to the
transaction processor: we'd need to be able to turn Entity instances
into some internal enum and handle the two cases independently.  It
wouldn't be too hard, but this reduces the combinatorial type
explosion.
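
A rough sketch of the internal/external tempid split described in the last point (hypothetical shape):

```
// Hypothetical shape for illustration. External tempids are user-supplied
// strings (e.g. a :db/id of "person0"); internal tempids are generated while
// exploding map notation into individual Entity instances.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
enum TempId {
    External(String),
    Internal(i64),
}

struct InternalTempIds(i64);

impl InternalTempIds {
    fn next(&mut self) -> TempId {
        self.0 += 1;
        TempId::Internal(self.0)
    }
}

fn main() {
    let mut gen = InternalTempIds(0);
    let from_user = TempId::External("person0".to_string());
    let from_map_notation = gen.next();
    assert_ne!(from_user, from_map_notation);
}
```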
2017-03-27 16:30:04 -07:00
Nick Alexander
1801db2a77 Convert EDN transaction tests to Rust code. Fixes #271. (#364) r=rnewman
* Pre: Order datoms deterministically in debug output.

This makes comparison much easier, and avoids a whole class of
difficult problems when introducing pattern matching with placeholder
values.

* Pre: Don't rewrite ?txN and ?msN in debug module into_edn() methods.

* Convert EDN transaction tests to Rust code. Fixes #271.

This implements
https://github.com/mozilla/mentat/issues/271#issuecomment-283125963.
I'm using the EDN pattern matching functionality
internally (extensively!), but specifically working around the tricky
edges we encountered.  This should let us implement tests quickly (and
hopefully legibly) while not requiring us to encode as much behaviour
into non-standard EDN notations.
2017-03-20 11:29:17 -07:00
Nick Alexander
dcd9bcb1ce Extract partial storage abstraction; use error-chain throughout. Fixes #328. r=rnewman (#341)
* Pre: Drop unneeded tx0 from search results.

* Pre: Don't require a schema in some of the DB code.

The idea is to separate the transaction applying code, which is
schema-aware, from the concrete storage code, which is just concerned
with getting bits onto disk.

* Pre: Only reference Schema, not DB, in debug module.

This is part of a larger separation of the volatile PartitionMap,
which is modified every transaction, from the stable Schema, which is
infrequently modified.

* Pre: Fix indentation.

* Extract part of DB to new SchemaTypeChecking trait.

* Extract part of DB to new PartitionMapping trait.

* Pre: Don't expect :db.part/tx partition to advance when tx fails.

This fails right now, because we allocate tx IDs even when we shouldn't.

* Sketch a db interface without DB.

* Add ValueParseError; use error-chain in tx-parser.

This can be simplified when
https://github.com/Marwes/combine/issues/86 makes it to a published
release, but this unblocks us for now.  This converts the `combine`
error type `ParseError<&'a [edn::Value]>` to a type with owned
`Vec<edn::Value>` collections, re-using `edn::Value::Vector` to make
them `Display`.

* Pre: Accept Borrow<Schema> instead of just &Schema in debug module.

This makes it easy to use Rc<Schema> or Arc<Schema> without inserting
&* sigils throughout the code.

* Use error-chain in query-parser.

There are a few things to point out here:

- the fine-grained error types have been flattened into one crate-wide
  error type; it's pretty easy to regain the granularity as needed.

- edn::ParseError is automatically lifted to
  mentat_query_parser::errors::Error;

- we use mentat_parser_utils::ValueParser to maintain parsing error
  information from `combine`.

* Patch up top-level.

* Review comment: Only `borrow()` once.
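
A minimal sketch of the `Borrow<Schema>` pattern and the single `borrow()` call; `Schema` and `datoms_matching` are stand-ins, not the real mentat_db items:

```
use std::borrow::Borrow;
use std::rc::Rc;
use std::sync::Arc;

// Stand-in for mentat_db's Schema.
struct Schema;

// Accepting Borrow<Schema> lets callers pass &Schema, Rc<Schema>, or
// Arc<Schema> without &* sigils at every call site; borrow() once inside.
fn datoms_matching<S: Borrow<Schema>>(schema: S, pattern: &str) -> usize {
    let _schema: &Schema = schema.borrow();
    // ... real code would consult the schema to interpret `pattern` ...
    pattern.len()
}

fn main() {
    let schema = Schema;
    datoms_matching(&schema, "[?e :db/ident ?ident]");
    datoms_matching(Rc::new(Schema), "[?e :db/ident ?ident]");
    datoms_matching(Arc::new(Schema), "[?e :db/ident ?ident]");
}
```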
2017-02-24 15:33:48 -08:00
Richard Newman
a10f68fdb7 Mark every project as being part of the workspace. r=nalexander
This allows `cargo test --all` to work.
2017-02-20 11:04:08 -08:00
Nick Alexander
16e9740d8a Implement upsert resolution algorithm. (#186, #283). r=rnewman, f=jsantell
* Pre: Implement batch [a v] pair lookup.

* Pre: Add InternSet for sharing ref-counted handles to large values (see the sketch after this list).

* Pre: Derive more for Entity.

* Pre: Return DB from creating; return TxReport from transact.

I am explicitly not supporting opening existing databases yet, let
alone upgrading databases from earlier versions.  That can follow fast
once basic transactions are supported.

* Pre: Parse string temporary ID entities; remove ValueOrLookupRef.

This adds TempId entities, but we can't disambiguate String temporary
IDs from values without the use of the schema, so there's no new value
branch.  Similarly, we can't disambiguate lookup-ref values from
two-element list values without a schema, so we remove this entirely.
We'll handle the ambiguity later in the transactor.

* Persist partitions to SQL store; allocate transaction ID. (#186)

* Post: Test upserting with vectors.

This converts an existing test to EDN:
84a80f40f5/test/datomish/db_test.cljc (L193).

* Implement tempid upsert resolution algorithm. (#184)

* Post: Separate Tx out of DB.

This is very preliminary, since we don't have a real connection type
to manage transactions and their metadata yet.

* Post: Comment on implementation choices in the transactor.

* Review comment: Put long use lists on separate lines.

* Review comment: Accept String: Borrow<S> instead of just String.

* Review comment: Address nits.
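
A minimal sketch of the intern-set idea referenced above; this is not the real mentat_core::InternSet API:

```
use std::collections::HashSet;
use std::hash::Hash;
use std::rc::Rc;

// Intern large values once and hand out cheap ref-counted handles to them.
struct InternSet<T: Eq + Hash> {
    inner: HashSet<Rc<T>>,
}

impl<T: Eq + Hash> InternSet<T> {
    fn new() -> Self {
        InternSet { inner: HashSet::new() }
    }

    fn len(&self) -> usize {
        self.inner.len()
    }

    // Return a handle to an existing equal value if we have one; otherwise
    // store the new value and return a handle to it.
    fn intern(&mut self, value: T) -> Rc<T> {
        let candidate = Rc::new(value);
        if let Some(existing) = self.inner.get(&candidate) {
            return existing.clone();
        }
        self.inner.insert(candidate.clone());
        candidate
    }
}

fn main() {
    let mut strings = InternSet::new();
    let a = strings.intern("tempid-0".to_string());
    let b = strings.intern("tempid-0".to_string());
    assert!(Rc::ptr_eq(&a, &b)); // one allocation, two handles
    assert_eq!(strings.len(), 1);
}
```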
2017-02-14 16:50:40 -08:00
Jordan Santell
4e81733eed Change expecteddatoms and expectedtransaction to their kebab-case counterparts, for valid EDN style. Fixes #270. r=nalexander (#287) 2017-02-11 12:06:09 -08:00
Jordan Santell
21f7bdf493 Consolidate Entity::{Add, Retract} to Entity::AddOrRetract. Fixes #255. r=nalexander (#265) 2017-02-08 15:45:09 -08:00
Jordan Santell
9fcf9f3318 Remove Entity::{RetractEntity, RetractAttribute} for now. Fixes #257. r=nalexander (#266) 2017-02-08 15:19:47 -08:00
Nick Alexander
afafcd64a0 [tx] Start implementing bulk SQL insertion algorithms (#214). r=rnewman,jsantell
* Pre: Add some value conversion tests.

This is a follow-up to earlier work.  Turn TypedValue::Keyword into
edn::Value::NamespacedKeyword.  Don't take a reference to
value_type_tag.

* Pre: Add repeat_values.

Requires itertools, so this commit is not stand-alone.

* Pre: Expose the first transaction ID as bootstrap::TX0.

This is handy for testing.

* Pre: Improve debug module.

* Pre: Bump rusqlite version for https://github.com/jgallagher/rusqlite/issues/211.

* Pre: Use itertools.

* Start implementing bulk SQL insertion algorithms. (#214)

This is a slightly simpler re-expression of the existing Clojure
implementation.

* Post: Start generic data-driven transaction testing. (#188)

* Review comment: `use ::{SYMBOL}` instead of `use {SYMBOL}`.

* Review comment: Prefer bindings_per_statement to values_per_statement.
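
A minimal sketch of what a `repeat_values` helper is for, phrased in terms of bindings_per_statement; this is not the exact Mentat helper:

```
// Produce "(?, ?, ?), (?, ?, ?)" for bulk INSERT ... VALUES statements.
// Illustrative only; a real implementation would also need to respect
// SQLite's limit on the number of bound variables per statement.
fn repeat_values(bindings_per_statement: usize, statements: usize) -> String {
    let row = format!("({})", vec!["?"; bindings_per_statement].join(", "));
    vec![row; statements].join(", ")
}

fn main() {
    assert_eq!(repeat_values(3, 2), "(?, ?, ?), (?, ?, ?)");
}
```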
2017-02-08 14:04:32 -08:00
Nick Alexander
81af295948 Start installing SQL schema. (#171) r=rnewman
* Start installing the SQLite store and bootstrapping the datom store.

* Review comment: Decomplect V2_IDENTS.

* Review comment: Decomplect V2_PARTS.

* Review comment: Pre: Expose Clojure's merge on Value instances.

* Review comment: Decomplect V2_SYMBOLIC_SCHEMA.

* Review comment: Decomplect V1_STATEMENTS.

* Review comment: Prefer ? to try!.

* Review comment: Fix typos; format; add TODOs.

* Review comment: Assert that Mentat `Schema` is valid upon creation.

* Review comment: Improve conversion to and from SQL values.

This patch factors the fundamental SQL conversion maps
between (rusqlite::Value, value_type_tag) and (edn::Value, ValueType)
through a new Mentat TypedValue (sketched after this list).  (A future
patch might rename this fundamental type mentat::Value.)

To make certain conversion functions infallible, I removed
placeholders for :db.type/{instant,uuid,uri}.  (We could panic
instead, but there's no need to do that right now.)

* Review comment: Always use the bundled SQLite in rusqlite.

This avoids (runtime) failures in Travis CI due to old SQLite
versions.  See 432966ac77.

* Review comment: Move semantics in `from_sql_value_pair`.

* Review comment: DB_EXCISE_BEFORE_T instead of ...BEFORET (no underscore).

* Review comment: Move overview notes to the Wiki.
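
A heavily simplified, hypothetical sketch of that factoring; the variants and tag numbers are placeholders rather than Mentat's actual value_type_tag assignments, and a stand-in enum replaces rusqlite's value type:

```
// Placeholder types and tags, for illustration only.
#[derive(Debug, PartialEq)]
enum TypedValue {
    Ref(i64),
    Boolean(bool),
    Long(i64),
    Text(String),
}

// Stand-in for the raw SQL-side value (rusqlite has its own value type).
#[derive(Debug, PartialEq)]
enum SqlValue {
    Integer(i64),
    Text(String),
}

// Several Mentat value types share an SQL representation (refs, booleans, and
// longs are all SQL integers), so a value_type_tag travels alongside the raw
// value to make the reverse conversion unambiguous.
fn to_sql_value_pair(value: TypedValue) -> (SqlValue, i32) {
    match value {
        TypedValue::Ref(e) => (SqlValue::Integer(e), 0),
        TypedValue::Boolean(b) => (SqlValue::Integer(b as i64), 1),
        TypedValue::Long(l) => (SqlValue::Integer(l), 5),
        TypedValue::Text(s) => (SqlValue::Text(s), 10),
    }
}

fn main() {
    assert_eq!(to_sql_value_pair(TypedValue::Boolean(true)), (SqlValue::Integer(1), 1));
}
```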
2017-01-25 16:13:56 -08:00
Nick Alexander
b11b9b909c Add tx{-parser} crates; start parsing transactions. (#164) r=rnewman
This depends on edn and uses the combine parser combinator library.
2017-01-12 16:08:29 -08:00