This is a big commit, but it breaks into two conceptual pieces. The
first is to "parse without copying". We replace a stream over an owned
collection of edn::ValueAndSpan values with a stream over a borrowed
collection of &edn::ValueAndSpan references. (Generally this is
represented as an iterator over a slice, but it can be over other
things too.) Cloning such an iterator is constant time, which improves
on cloning an owned collection of edn::ValueAndSpan, which is linear in
the length of the collection and takes additional time depending on the
complexity of the EDN values.
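As a minimal sketch of the cost difference (using a stand-in type rather than the real `edn::ValueAndSpan`):

```rust
// Stand-in for edn::ValueAndSpan; the real type is richer.
#[derive(Clone, Debug)]
struct ValueAndSpan {
    value: String,
    span: (usize, usize),
}

fn main() {
    let owned: Vec<ValueAndSpan> = (0..1000)
        .map(|i| ValueAndSpan { value: format!("value-{}", i), span: (i, i + 1) })
        .collect();

    // Cloning the owned collection is linear, plus the cost of cloning each value.
    let _expensive = owned.clone();

    // Cloning an iterator over a slice just copies a couple of pointers: constant time.
    let stream = owned.iter();
    let _cheap = stream.clone();
}
```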
The second conceptual piece is to parse keyword maps using a special
parser and a macro to build the parser implementations. Before, we
created a new edn::ValueAndSpan::Map to represent a keyword map in
vector form; since we're working with &edn::ValueAndSpan references
now, we can't create an &edn::ValueAndSpan reference with an
appropriate lifetime. Therefore we generalize the concept of
iteration slightly and turn keyword maps in map form into linear
iterators by flattening the value maps. This is a potentially
obscuring transformation, so we have to take care to protect against
some failure cases. (See the comments and the tests in the code.)
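To illustrate the flattening idea with stand-in types (not the real `edn` enums): the map's keys and values are yielded as an interleaved stream of references, so the same reference-based parsers can consume keyword maps without constructing a new owned value.

```rust
use std::collections::BTreeMap;

// Stand-in for an EDN value; the real edn::ValueAndSpan also carries spans.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Value {
    Keyword(String),
    Integer(i64),
}

// Flatten a map into a linear key/value stream of borrowed references.
// No new owned map value is built, so no lifetime problem arises.
fn flatten(map: &BTreeMap<Value, Value>) -> impl Iterator<Item = &Value> {
    map.iter()
        .flat_map(|(k, v)| std::iter::once(k).chain(std::iter::once(v)))
}

fn main() {
    let mut m = BTreeMap::new();
    m.insert(Value::Keyword(":db/id".into()), Value::Integer(1));
    m.insert(Value::Keyword(":db/ident".into()), Value::Integer(2));

    let flat: Vec<&Value> = flatten(&m).collect();
    assert_eq!(flat.len(), 4); // two keys and two values, linearized
}
```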
After these changes, parsing using `combine` is linear time (and
reasonably fast).
This doesn't yet introduce a working Cargo.toml for 'mentatweb', but it
does allow RLS to build correctly without errors, and it reduces the
core library's dependency space, which is more important in the short
term.
* Pre: unused import in translate.rs.
* Part 2: take a dependency on rusqlite for query arguments.
* Part 1: flatten V2 schema into V1. Add UUID and URI.
Bump expected ident and bootstrap datom count in tests.
* Part 5: parse edn::Value::Uuid.
* Part 3: extend ValueType and TypedValue to include Uuid.
* Part 4: add Uuid to query arguments.
* Part 6: extend db to support Uuid.
* Part 8: add a tx-parser test for #f NaN and #uuid.
* Part 7: parse and algebrize UUIDs in queries.
* Part 1: parse #inst in EDN and throughout query engine.
* Part 3: handle instants in db.
* Part 2: instants never match integers in queries.
* Part 4: use DateTime for tx_instants.
* Add a test for adding and querying UUIDs and instants.
* Review comments.
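Taken together, the UUID and instant parts above amount to roughly the following extension of the value enums. This is a hedged sketch assuming the `uuid` and `chrono` crates as dependencies; the real definitions in the core crate differ in detail (additional variants, shared ownership, ordering wrappers).

```rust
use chrono::{DateTime, Utc};
use uuid::Uuid;

// Sketch only; not the crate's actual definitions.
pub enum ValueType {
    Ref,
    Boolean,
    Instant,
    Long,
    Double,
    String,
    Keyword,
    Uuid,
}

pub enum TypedValue {
    Ref(i64),
    Boolean(bool),
    Instant(DateTime<Utc>), // #inst values, including tx_instant
    Long(i64),
    Double(f64),
    String(String),
    Keyword(String),
    Uuid(Uuid),             // #uuid values
}
```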
* Part 1 - Parse `not` and `not-join`
* Part 2 - Validate `not` and `not-join` pre-algebrization
* Address review comments from rnewman.
* Remove `WhereNotClause` and populate `NotJoin` with `WhereClause`.
* Fix validation for `not` and `not-join`, removing tests that were invalid.
* Address rustification comments.
* Rebase against `rust` branch.
* Part 3 - Add required types for NotJoin.
* Implement `PartialEq` for `ConjoiningClauses` so `ComputedTable` can be included inside `ColumnConstraint::NotExists`.
* Part 4 - Implement `apply_not_join`
* Part 5 - Call `apply_not_join` from inside `apply_clause`
* Part 6 - Translate `not-join` into `NOT EXISTS` SQL
* Address review comments.
* Rename `projected` to `unified` to better describe the fact that we are not projecting any variables.
* Check for presence of each unified var in either `column_bindings` or `input_bindings` and bail if not there.
* Copy over `input_bindings` for each var in `unified`.
* Only copy over the first `column_binding` for each variable in `unified` rather than the whole list.
* Update tests.
* Address review comments.
* Make output from Debug for NotExists more useful
* Clear up misunderstanding: any single failing clause inside the `not` will cause the entire `not` to be considered empty.
* Address review comments.
* Remove Limit requirement from cc_to_exists.
* Use Entry.or_insert instead of matching on the entry to add to column_bindings.
* Move addition of value_bindings to before apply_clauses on template.
* Tidy up tests with some variable reuse.
* Addressed nits.
* Address review comments.
* Move addition of column_bindings to above apply_clause.
* Update tests.
* Add test to ensure that unbound vars fail
* Improve test for unbound variable to check for correct variable and error
* Address nits.
* Part 1: define ValueTypeSet.
We're going to use this instead of `HashSet<ValueType>` so that we can clearly express
the empty set and the set of all types, and also to encapsulate a switch to `EnumSet`.
(A rough sketch follows this list.)
* Part 2: use ValueTypeSet.
* Part 3: fix type expansion.
* Part 4: add a test for type extraction from nested `or`.
* Review comments.
* Review comments: simplify ValueTypeSet.
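A rough sketch of the ValueTypeSet idea: callers see only the set abstraction, with explicit constructors for the empty set and the set of all types, so switching the backing representation (for example to `EnumSet`) remains an internal detail. The backing store here is a plain `BTreeSet`, and the names are assumptions, not the crate's actual API.

```rust
use std::collections::BTreeSet;

#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum ValueType { Ref, Boolean, Instant, Long, Double, String, Keyword, Uuid }

#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct ValueTypeSet(BTreeSet<ValueType>);

impl ValueTypeSet {
    /// The empty set: no type is possible.
    pub fn none() -> ValueTypeSet {
        ValueTypeSet(BTreeSet::new())
    }

    /// The set of all types: nothing is yet known.
    pub fn any() -> ValueTypeSet {
        use self::ValueType::*;
        ValueTypeSet([Ref, Boolean, Instant, Long, Double, String, Keyword, Uuid]
            .iter().cloned().collect())
    }

    pub fn of_one(t: ValueType) -> ValueTypeSet {
        let mut s = BTreeSet::new();
        s.insert(t);
        ValueTypeSet(s)
    }

    pub fn is_empty(&self) -> bool { self.0.is_empty() }
    pub fn is_unit(&self) -> bool { self.0.len() == 1 }

    pub fn intersection(&self, other: &ValueTypeSet) -> ValueTypeSet {
        ValueTypeSet(self.0.intersection(&other.0).cloned().collect())
    }
}
```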
* Pre: put query parts in alphabetical order.
* Pre: rename 'input' to 'query' in translate tests.
* Part 1: parse :limit.
* Part 2: validate and escape variable parameters in SQL.
* Part 3: algebrize and translate limits.
This is for two reasons.
Firstly, we need to track the types of inputs, their values, and also
the input variables; adding a struct gives us a little more clarity.
Secondly, when we come to implement prepared statements, we'll be
algebrizing queries without having the values available. We'll be able
to do a better job of algebrizing, and also do more validating, if we
allow callers to specify the types of variables in advance, even if the
values aren't known.
We also at this point switch from using `Vec<Variable>` to
`BTreeSet<Variable>`. This allows us to guarantee no duplicates later;
we'll reject duplicates at parse time.
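A hedged sketch of what such an input struct might look like (names and fields are assumptions, not the crate's actual API): maps keyed by variable let callers declare types even when values aren't known yet, and the parsed query's bound variables live in a set so duplicates are impossible by construction.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Stand-ins for the real query types.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Variable(pub String);

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum ValueType { Long, Instant, String }

#[derive(Clone, Debug, PartialEq)]
pub enum TypedValue { Long(i64), String(String) }

/// Hypothetical bundle of query inputs: declared types plus any known values.
#[derive(Default)]
pub struct QueryInputs {
    pub types: BTreeMap<Variable, ValueType>,
    pub values: BTreeMap<Variable, TypedValue>,
}

/// Hypothetical parsed query fragment: `:in` variables as a set, not a Vec,
/// with duplicates rejected at parse time.
pub struct FindQuery {
    pub in_vars: BTreeSet<Variable>,
}
```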
This adds an `:order` keyword to `:find`.
If present, the results of the query will be an ordered set rather than
an unordered set; rows will appear in an order defined by the `:order`
entries.
Each entry can be one of three things:
- A var, `?x`, meaning "order by ?x ascending".
- A pair, `(asc ?x)`, meaning "order by ?x ascending".
- A pair, `(desc ?x)`, meaning "order by ?x descending".
Values will be ordered in this sequence for asc, and in reverse for desc:
1. Entity IDs, in ascending numerical order.
2. Booleans, false then true.
3. Timestamps, in ascending numerical order.
4. Longs and doubles, intermixed, in ascending numerical order.
5. Strings, in ascending lexicographic order.
6. Keywords, in ascending lexicographic order, considering the entire
ns/name pair as a single string separated by '/'.
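As a hedged illustration (type names here are assumptions, not the parser's actual definitions), an `:order` entry can be modeled as a direction plus a variable:

```rust
// Stand-in variable type.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Variable(pub String);

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Direction {
    Ascending,
    Descending,
}

// One :order entry: the direction and the variable to sort by.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Order(pub Direction, pub Variable);

// An example query using two of the accepted forms (attribute name invented):
//   [:find ?x ?date
//    :where [?x :example/date ?date]
//    :order (desc ?date) ?x]
// `(desc ?date)` sorts descending by ?date; the bare `?x` is shorthand
// for `(asc ?x)`.
```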
Subcommits:
Pre: make bound_value public.
Pre: generalize ErrorKind::UnboundVariable for use in order.
Part 1: parse (direction, var) pairs.
Part 2: parse :order clause into FindQuery.
Part 3: include order variables in algebrized query.
We add order variables to :with, so we can reuse its type tag projection
logic, and so that we can phrase ordering in terms of variables rather
than datoms columns.
Part 4: produce SQL for order clauses.
* Pre: refactor projector code.
* Part 1: maintain 'with' variables in AlgebrizedQuery.
* Part 2: include necessary 'with' variables in SQL projection list.
The test produces projection elements for `:with`, even though there are
no aggregates in the query. This test will need to be adjusted when we
optimize this away!
This commit turns complex `or` -- `or`s in which not all variables are
unified, or in which not all arms are the same shape -- into a
computed table.
We do this by building a template CC that shares some state with the
destination CC, applying each arm of the `or` to a copy of the template
as if it were a standalone query, then building a projection list and
creating a `ComputedTable::Union`. This is pushed into the destination
CC's `computed_tables` list.
Finally, the variables projected from the UNION are bound in the
destination CC, so that unification occurs, and projection of the
outermost query can use bindings established by the `or-join`.
This commit includes projection of type codes from heterogeneous `UNION`
arms: we compute a list of variables for which a definite type is
unknown in at least one arm, and force all arms to project either a type
tag column or a fixed type. It's important that each branch of a UNION
project the same columns in the same order, hence the projection of
fixed values.
The translator is similarly extended to project the type tag column name
or the known value_type_tag to support this.
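A hedged illustration of the resulting shape follows; this is not the translator's literal output, and the aliases, attribute ids, and tag numbers are invented. Each arm projects the same columns in the same order, using either its type tag column or a fixed tag where the arm's type is already known.

```rust
// Invented example only; real column names, aliases, and tag values differ.
const ILLUSTRATIVE_UNION: &str = r#"
SELECT c00.v AS `?v`, c00.value_type_tag AS `?v_value_type_tag`
  FROM datoms AS c00 WHERE c00.a = 65540   -- type of ?v varies in this arm
UNION
SELECT c01.v AS `?v`, 13 AS `?v_value_type_tag`
  FROM datoms AS c01 WHERE c01.a = 65541   -- type known here: project a fixed tag
"#;

fn main() {
    println!("{}", ILLUSTRATIVE_UNION);
}
```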
Review comment: clarify union type extraction.
This commit:
- Defines a new kind of column, distinct from the eavt columns in
`DatomsColumn`, to model the rows projected from subqueries. These
always name one of two things: a variable, or a variable's type tag.
Naturally the two cases are thus `Variable` and `VariableTypeTag`.
These are cheap to clone, given that `Variable` is an `Rc<String>`.
- Defines `Column` as a wrapper around `DatomsColumn` and
`VariableColumn`. Everywhere we used to use `DatomsColumn` we now
allow `Column`: particularly in constraints and projections.
- Broadens the definition of a table list in the intermediate
"query-sql" representation to include a SQL UNION. A UNION is
represented as a list of queries and an alias.
- Implements translation from a `ComputedTable` to the query-sql
representation. In this commit we only project vars, not type tags.
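Rough sketches of the column types just described (details are assumptions; as noted above, `Variable` wraps an `Rc<String>`, so cloning is cheap):

```rust
use std::rc::Rc;

// Cheap to clone: just bumps the Rc refcount.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Variable(pub Rc<String>);

// Columns projected from a subquery name a variable or a variable's type tag.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum VariableColumn {
    Variable(Variable),
    VariableTypeTag(Variable),
}

// The eavt columns of the datoms (and all_datoms) tables.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum DatomsColumn {
    Entity,
    Attribute,
    Value,
    Tx,
    ValueTypeTag,
}

// Wherever a DatomsColumn used to appear -- in constraints and projections --
// a Column is now accepted instead.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum Column {
    Fixed(DatomsColumn),
    Variable(VariableColumn),
}
```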
Review comment: discuss bind_column_to_var for ValueTypeTag.
Review comment: implement From<Vec<T>> for ConsumableVec<T>.
Complex `or`s are translated to SQL as a subquery -- in particular, a
subquery that's a UNION. Conceptually, that subquery is a computed
table: `all_datoms` and `datoms` yield rows of e/a/v/tx, and each
computed table yields rows of variable bindings.
The table itself is a type, `ComputedTable`. Its `Union` case contains
everything a subquery needs: a `ConjoiningClauses` and a projection
list, which together allow us to build a SQL subquery, and a list of
variables that need type code extraction. (This is discussed further in
a later commit.)
Naturally we also need a way to refer to columns in a computed table.
We model this by a new enum case in `DatomsTable`, `Computed`, which
maintains an integer value that uniquely identifies a computed table.
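A rough sketch of the shape this describes; the field names and the per-arm representation are assumptions, and stubs stand in for types defined elsewhere in the query crates.

```rust
use std::collections::BTreeSet;

// Stand-ins for the real query types.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Variable(pub String);

#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ConjoiningClauses; // stub

// A computed table: each arm of a complex `or` contributes a CC, and the
// union projects the listed variables, extracting type tags where needed.
pub enum ComputedTable {
    Union {
        projection: BTreeSet<Variable>,
        type_extraction: BTreeSet<Variable>,
        arms: Vec<ConjoiningClauses>,
    },
}

// The tables a query can draw rows from; Computed(n) uniquely identifies
// the n-th computed table pushed onto the enclosing CC.
pub enum DatomsTable {
    Datoms,
    AllDatoms,
    Computed(usize),
}
```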
When we started expanding and narrowing type sets, it became impossible
to conclusively know during pattern application whether a type was
known. We now figure that out at the end: if a variable has only a
single known type, we don't need to extract its type tag.
* Pre: Expose more in edn.
* Pre: Make it easier to work with ValueAndSpan.
with_spans() is a temporary hack, needed only because I don't care to
parse the bootstrap assertions from text right now.
* Part 1a: Add `value_and_span` for parsing nested `edn::ValueAndSpan` instances.
I wasn't able to abstract over `edn::Value` and `edn::ValueAndSpan`;
there are multiple obstacles. I chose to roll with
`edn::ValueAndSpan` since it exposes the additional span information
that we will want in order to form good error messages in the future.
* Part 1b: Add keyword_map() for parsing an `edn::Value::Vector` into an `edn::Value::Map`.
* Part 1c: Add `Log`/`.log(...)` for logging parser progress.
This is a terrible hack, but it sure helps to debug complicated nested
parsers. I don't even know what a principled approach would look
like; since our parser combinators are so frequently expressed in
code, it's hard to imagine a data-driven interpreter that can help
debug things.
* Part 2: Use `value_and_span` apparatus in tx-parser/.
I break an abstraction boundary by returning the value column as an
`edn::ValueAndSpan` rather than just an `edn::Value`. That is, the
transaction processor shouldn't care where the `edn::Value` it is
processing arose -- even if we care to track that information, we should
bake it into the `Entity` type. We do this because we need to
dynamically parse the value column to support nested maps, and parsing
requires a full `edn::ValueAndSpan`. Alternatively, we could cheat and
fake the spans when parsing nested maps, but that's potentially
expensive.
* Part 3: Use `value_and_span` apparatus in query-parser/.
* Part 4: Use `value_and_span` apparatus in root crate.
* Review comment: Make Span and SpanPosition Copy.
* Review comment: nits.
* Review comment: Make `or` be `or_exactly`.
I baked the eof checking directly into the parser, rather than using
the skip and eof parsers. I also took the time to restore some tests
that were mistakenly commented out.
* Review comment: Extract and use def_matches_* macros.
* Review comment: .map() as late as possible.