This commit lifts some logic out of the scalar `ground` handler so that it
can be applied elsewhere.
When a new value binding is encountered for a variable for which column
bindings have already been established, we do two things (sketched in the code below):
- We apply a new constraint to the primary column. This ensures that the
behavior for ground-first and ground-second is equivalent.
- We eliminate any existing column type extraction: it won't be
necessary now that a constant value and constant type are known.
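Here's a rough sketch of the idea; the types and field names (`ConjoiningClauses`, `bind_value`, and so on) are simplified stand-ins, not the actual algebrizer API:

```rust
// Hypothetical, simplified types -- not the real algebrizer API.
#[derive(Clone, Debug)]
enum TypedValue { Long(i64) }

#[derive(Clone, Debug)]
struct QualifiedAlias(String, String); // (table alias, column)

#[derive(Debug)]
enum ColumnConstraint {
    Equals(QualifiedAlias, TypedValue),
}

#[derive(Default, Debug)]
struct ConjoiningClauses {
    column_bindings: Vec<QualifiedAlias>,  // columns already bound to the variable
    type_extractions: Vec<QualifiedAlias>, // pending type-tag extractions
    constraints: Vec<ColumnConstraint>,
}

impl ConjoiningClauses {
    /// A constant value arrives for a variable that already has column bindings:
    /// constrain the primary column, and drop any type extraction -- the value
    /// (and therefore its type) is now known.
    fn bind_value(&mut self, value: TypedValue) {
        if let Some(primary) = self.column_bindings.first().cloned() {
            self.constraints.push(ColumnConstraint::Equals(primary, value));
        }
        self.type_extractions.clear();
    }
}

fn main() {
    let mut cc = ConjoiningClauses::default();
    cc.column_bindings.push(QualifiedAlias("datoms00".into(), "v".into()));
    cc.type_extractions.push(QualifiedAlias("datoms00".into(), "value_type_tag".into()));
    cc.bind_value(TypedValue::Long(42));
    assert!(cc.type_extractions.is_empty());
    println!("{:?}", cc.constraints);
}
```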
This version removes nalexander's lovely matrix code. It turned out
that scalar and tuple bindings are sufficiently different from coll
and rel -- they can be applied directly as values in the query -- that
there was no point in jumping through hoops to turn those single
values into a matrix.
Furthermore, I've standardized us on a `Vec<TypedValue>`
representation for rectangular matrices, which should be much
more efficient, but would have required rewriting that code.
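To make the flat representation concrete, here's a minimal sketch; the names are illustrative, not necessarily the real Mentat types. A rectangular matrix is stored row-major in a single `Vec<TypedValue>`, with the row width carried alongside:

```rust
// Illustrative sketch only: a rectangular matrix of values as a flat Vec.
#[derive(Debug)]
enum TypedValue { Long(i64) }

struct RelResult {
    width: usize,
    values: Vec<TypedValue>, // row-major; values.len() is a multiple of width
}

impl RelResult {
    fn rows(&self) -> impl Iterator<Item = &[TypedValue]> {
        self.values.chunks(self.width)
    }
}

fn main() {
    // The rel binding [[?x ?y ?z]] grounded with [[1 2 3] [4 5 6]]:
    let rel = RelResult {
        width: 3,
        values: vec![1, 2, 3, 4, 5, 6].into_iter().map(TypedValue::Long).collect(),
    };
    for row in rel.rows() {
        println!("{:?}", row);
    }
}
```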
Finally, coll and rel are sufficiently different from each other
-- coll doesn't require processing nested collections -- that
my attempts to share code between them fell somewhat flat. I had
lots of nice ideas about zipping together cycles and such, but
ultimately I ended up with relatively straightforward, if a bit
repetitive, code.
The next commit will demonstrate the value of this work -- tests
that exercised scalar and tuple grounding now collapse down to
the simplest possible SQL.
Datomic accepts mostly-arbitrary EDN, and it is actually used: for
example, the following are all valid, and all mean different things:
* `(ground 1 ?x)`
* `(ground [1 2 3] [?x ?y ?z])`
* `(ground [[1 2 3] [4 5 6]] [[?x ?y ?z]])`
We could probably introduce new syntax that expresses these patterns
while avoiding collection arguments, but I don't see an obvious
candidate right now.
I've elected to support only vectors for simplicity; I'm hoping to
avoid parsing edn::Value in the query-algebrizer.
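For illustration, here's roughly what "only vectors" means on the argument side, using a hypothetical stand-in for the EDN value type (this is not the real parser code):

```rust
// Stand-in for edn::Value -- only the variants needed for the illustration.
#[derive(Debug)]
enum Value {
    Integer(i64),
    Vector(Vec<Value>),
    List(Vec<Value>),
}

/// Accept a scalar or a vector (of scalars or of vectors) as a `ground`
/// argument; reject lists and anything else, per the simplification above.
fn accept_ground_arg(v: &Value) -> Result<(), &'static str> {
    match v {
        Value::Integer(_) => Ok(()),
        Value::Vector(items) => {
            for item in items {
                match item {
                    Value::Integer(_) | Value::Vector(_) => {},
                    _ => return Err("expected a scalar or vector inside a ground vector"),
                }
            }
            Ok(())
        },
        _ => Err("ground accepts only scalars and vectors"),
    }
}

fn main() {
    assert!(accept_ground_arg(&Value::Integer(1)).is_ok());
    assert!(accept_ground_arg(&Value::Vector(vec![Value::Integer(1), Value::Integer(2)])).is_ok());
    assert!(accept_ground_arg(&Value::List(vec![Value::Integer(1)])).is_err());
}
```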
This commit adds a check to the partition map that a provided entity ID
has been mentioned (i.e., is present in the start:index range of one of
our partitions).
We introduce a newtype for known entity IDs, using this internally in
the tx expander to track user-provided entids that have passed the above
check (and IDs that we allocate as part of tempid processing). This
newtype is stripped prior to tx assertion.
So that DB tests can continue to write
[:db/add 111 :foo/bar 222]
we add an additional fake partition, ranging from 100 to 1000, to our
test connections.
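A minimal sketch of the shape of this change, with hypothetical types (`KnownEntid`, `PartitionMap::check`) standing in for the real ones:

```rust
// Hypothetical sketch of the idea; the real Mentat types differ in detail.
use std::collections::BTreeMap;
use std::ops::Range;

type Entid = i64;

/// Newtype marking an entid that has been checked against the partition map
/// (or allocated by us). Stripped back to a plain Entid before tx assertion.
#[derive(Clone, Copy, Debug, PartialEq)]
struct KnownEntid(Entid);

struct PartitionMap {
    // Partition name -> allocated range (start..index).
    partitions: BTreeMap<String, Range<Entid>>,
}

impl PartitionMap {
    /// Accept a user-provided entid only if some partition has mentioned it.
    fn check(&self, e: Entid) -> Option<KnownEntid> {
        if self.partitions.values().any(|r| r.contains(&e)) {
            Some(KnownEntid(e))
        } else {
            None
        }
    }
}

fn main() {
    let mut partitions: BTreeMap<String, Range<Entid>> = BTreeMap::new();
    // A fake partition for tests, so that [:db/add 111 :foo/bar 222] still works.
    partitions.insert(":db.part/fake".to_string(), 100..1000);
    let pm = PartitionMap { partitions };

    assert_eq!(pm.check(111), Some(KnownEntid(111)));
    assert_eq!(pm.check(5000), None);
}
```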
This is an optimization that rejects invalid inputs earlier, at the
cost of less expressive error messages. It should be possible to
recover those error messages, however.
This will reject input like `[:db/{add,retract} v :attribute/_reversed NOT-AN-ENTITY]`.
There are two broad approaches:
1) Handle reverse attribute notation dynamically, in the style that
Datomic does. This is the most flexible, but it's not a good fit
given that we produce strongly typed output from the parser.
Strongly typed input to the transactor has had many benefits, so I
don't want to roll it back for a relatively unimportant feature
like reverse notation -- especially not since Mentat does not
require :db.install/_attribute to modify schema attributes.
2) Handle reverse attribute notation in the parser itself, so that we can
produce strongly typed parser output while restricting the input.
I implemented this first and discovered that it's very difficult to
give sensible error messages in common cases.
In any case, the bulk of the code is the same between the two
approaches, and I wrote the tests for the dynamic version (with error
output), so that's what I'm rolling with.
This patch preserves the existing indentation, to highlight the
differences. The next patch will indent.
This isn't perfect -- we still need to clone in a couple of cases -- but it avoids
passing duplicate strings down into SQLite whenever the same value is mentioned more
than once in a query.
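As a sketch of the technique (a hypothetical helper, not the actual code): intern each distinct string value so it maps to a single SQL parameter, and bind it only once.

```rust
// Hypothetical helper: give each distinct string value a single SQL parameter
// name, so a value mentioned several times in a query is bound only once.
use std::collections::HashMap;

struct Args {
    names: HashMap<String, String>,  // value -> parameter name
    bindings: Vec<(String, String)>, // (parameter name, value) in bind order
}

impl Args {
    fn new() -> Args {
        Args { names: HashMap::new(), bindings: Vec::new() }
    }

    /// Return the parameter name for `value`, allocating one the first time.
    fn intern(&mut self, value: &str) -> String {
        if let Some(name) = self.names.get(value) {
            return name.clone();
        }
        let name = format!("$v{}", self.bindings.len());
        self.names.insert(value.to_string(), name.clone());
        self.bindings.push((name.clone(), value.to_string()));
        name
    }
}

fn main() {
    let mut args = Args::new();
    let a = args.intern("Alice");
    let b = args.intern("Alice"); // same value, same parameter, no duplicate binding
    assert_eq!(a, b);
    assert_eq!(args.bindings.len(), 1);
}
```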
This is a big commit, but it breaks into two conceptual pieces. The
first is to "parse without copying". We replace a stream over an owned
collection of edn::ValueAndSpan with a stream over a borrowed
collection of &edn::ValueAndSpan references. (Generally, this is
represented as an iterator over a slice, but it can be over other
things too.) Cloning such iterators is constant time, which improves
on cloning an owned collection of edn::ValueAndSpan: that takes time
linear in the length of the collection, plus additional time depending
on the complexity of the EDN values.
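A toy illustration of the cost difference, with a stand-in struct rather than the real edn::ValueAndSpan:

```rust
// Cloning a slice iterator copies two pointers; cloning an owned Vec
// copies (and re-allocates) every element.
#[derive(Clone, Debug)]
struct ValueAndSpan {
    value: String, // stand-in for the real edn::ValueAndSpan
    span: (usize, usize),
}

fn main() {
    let owned: Vec<ValueAndSpan> = (0..1000)
        .map(|i| ValueAndSpan { value: i.to_string(), span: (i, i + 1) })
        .collect();

    // Owned stream: cloning duplicates every element (linear time and memory).
    let _owned_clone = owned.clone();

    // Borrowed stream: an iterator over &ValueAndSpan; cloning it is O(1).
    let iter = owned.iter();
    let iter_clone = iter.clone();

    // The clone still yields the same items.
    assert_eq!(iter_clone.count(), 1000);
}
```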
The second conceptual piece is to parse keyword maps using a special
parser and a macro to build the parser implementations. Before, we
created a new edn::ValueAndSpan::Map to represent a keyword map in
vector form; since we're working with &edn::ValueAndSpan references
now, we can't create an &edn::ValueAndSpan reference with an
appropriate lifetime. Therefore we generalize the concept of
iteration slightly and turn keyword maps in map form into linear
iterators by flattening the value maps. This is a potentially
obscuring transformation, so we have to take care to protect against
some failure cases. (See the comments and the tests in the code.)
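Here's a rough sketch of the flattening, with hypothetical types (the real code works over edn::ValueAndSpan references and is driven by a macro):

```rust
// Rough illustration: a keyword map such as
//   {:find [?x] :where [[?x :foo/bar ?y]]}
// becomes the flat sequence :find ?x :where [?x :foo/bar ?y], which an
// ordinary sequential parser can consume.
use std::collections::BTreeMap;

#[derive(Clone, Debug)]
enum Value {
    Keyword(String),
    Symbol(String),
    Vector(Vec<Value>),
}

/// Flatten {keyword -> vector-of-values} into keyword, value, value, ...
fn flatten_keyword_map(map: &BTreeMap<String, Vec<Value>>) -> Vec<Value> {
    let mut flat = Vec::new();
    for (k, vs) in map {
        flat.push(Value::Keyword(k.clone()));
        flat.extend(vs.iter().cloned());
    }
    flat
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert(":find".to_string(), vec![Value::Symbol("?x".to_string())]);
    map.insert(":where".to_string(), vec![Value::Vector(vec![
        Value::Symbol("?x".to_string()),
        Value::Keyword(":foo/bar".to_string()),
        Value::Symbol("?y".to_string()),
    ])]);

    // The parser then iterates over `flat` by reference, so cloning the
    // iterator stays cheap and the map form is handled uniformly.
    let flat = flatten_keyword_map(&map);
    println!("{:?}", flat);
    assert_eq!(flat.len(), 4);
}
```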
After these changes, parsing using `combine` is linear time (and
reasonably fast).