* Pre: put query parts in alphabetical order.
* Pre: rename 'input' to 'query' in translate tests.
* Part 1: parse :limit.
* Part 2: validate and escape variable parameters in SQL.
* Part 3: algebrize and translate limits.
We introduce a struct to hold query inputs, for two reasons.
Firstly, we need to track the types of inputs, their values, and also
the input variables; a struct gives us a little more clarity.
Secondly, when we come to implement prepared statements, we'll be
algebrizing queries without having the values available. We'll be able
to do a better job of algebrizing, and also do more validating, if we
allow callers to specify the types of variables in advance, even if the
values aren't known.
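A minimal sketch of the shape such a struct might take. The names here (`QueryInputs`, the stand-in `ValueType` and `TypedValue`) are placeholders for illustration, not the crate's exact definitions:

```rust
use std::collections::BTreeMap;

// Stand-ins so the sketch compiles; the real crate has richer types.
type Variable = String;
enum ValueType { Ref, Long, String }
enum TypedValue { Ref(i64), Long(i64), String(String) }

/// Bundles everything the algebrizer needs to know about inputs:
/// a variable's type can be declared in advance (prepared statements)
/// even when its value isn't yet known.
struct QueryInputs {
    types: BTreeMap<Variable, ValueType>,
    values: BTreeMap<Variable, TypedValue>,
}
```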
We also at this point switch from using `Vec<Variable>` to
`BTreeSet<Variable>`. This allows us to guarantee no duplicates later;
we'll reject duplicates at parse time.
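As a small illustration of why `BTreeSet` helps here (variable names invented): `insert` reports whether the element was new, which is exactly the signal needed to reject duplicates at parse time:

```rust
use std::collections::BTreeSet;

fn main() {
    let mut vars: BTreeSet<&str> = BTreeSet::new();
    assert!(vars.insert("?x"));   // first occurrence: accepted
    assert!(!vars.insert("?x"));  // duplicate: `insert` returns false
}
```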
This adds an `:order` keyword to `:find`.
If present, the results of the query will be an ordered set rather than
an unordered set; rows will appear in an order defined by each
`:order` entry.
Each entry can be one of three things (an example follows the list):
- A var, `?x`, meaning "order by ?x ascending".
- A pair, `(asc ?x)`, meaning "order by ?x ascending".
- A pair, `(desc ?x)`, meaning "order by ?x descending".
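For example, a query using all three forms might look like this (the attribute and variable names are invented), written as the string a parse test would consume:

```rust
fn main() {
    let query = r#"[:find ?x ?y ?z
                    :where [?x :foo/bar ?y]
                           [?x :foo/baz ?z]
                    :order (desc ?y) (asc ?z) ?x]"#;
    println!("{}", query);
}
```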
Values will be ordered in this sequence for `asc`, and in reverse for `desc` (sketched in code below):
1. Entity IDs, in ascending numerical order.
2. Booleans, false then true.
3. Timestamps, in ascending numerical order.
4. Longs and doubles, intermixed, in ascending numerical order.
5. Strings, in ascending lexicographic order.
6. Keywords, in ascending lexicographic order, considering the entire
ns/name pair as a single string separated by '/'.
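A sketch of that cross-type ordering as a rank function. The enum and rank values here are illustrative, not Mentat's actual representation:

```rust
enum Val {
    Entid(i64),
    Boolean(bool),
    Instant(i64),
    Long(i64),
    Double(f64),
    Text(String),
    Keyword(String), // ns/name joined with '/'
}

// Lower ranks sort first under `asc`; `desc` reverses the whole ordering.
// Values within a rank compare by their natural order.
fn type_rank(v: &Val) -> u8 {
    match *v {
        Val::Entid(_) => 0,
        Val::Boolean(_) => 1,
        Val::Instant(_) => 2,
        Val::Long(_) | Val::Double(_) => 3, // longs and doubles intermix
        Val::Text(_) => 4,
        Val::Keyword(_) => 5,
    }
}
```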
Subcommits:
Pre: make bound_value public.
Pre: generalize ErrorKind::UnboundVariable for use in order.
Part 1: parse (direction, var) pairs.
Part 2: parse :order clause into FindQuery.
Part 3: include order variables in algebrized query.
We add order variables to :with, so we can reuse its type tag projection
logic, and so that we can phrase ordering in terms of variables rather
than datoms columns.
Part 4: produce SQL for order clauses.
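Roughly (table and column aliases invented), for a query ordered by `(desc ?y) ?x` the translator now appends an `ORDER BY` over the projected columns:

```rust
fn main() {
    // A hedged illustration of the SQL shape, not verbatim output.
    let sql = "SELECT `datoms00`.v AS `?y`, `datoms00`.e AS `?x` \
               FROM `datoms` AS `datoms00` \
               ORDER BY `?y` DESC, `?x` ASC";
    println!("{}", sql);
}
```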
* Pre: refactor projector code.
* Part 1: maintain 'with' variables in AlgebrizedQuery.
* Part 2: include necessary 'with' variables in SQL projection list.
The test produces projection elements for `:with`, even though there are
no aggregates in the query. This test will need to be adjusted when we
optimize this away!
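To make the behavior concrete (names invented): a `:with` variable is projected even when nothing aggregates over it yet, so a query and its SQL might pair up like this:

```rust
fn main() {
    let query = "[:find ?x :with ?y :where [?x :foo/bar ?y]]";
    // Illustrative SQL shape only; `?y` appears in the projection
    // list despite the absence of aggregates.
    let sql = "SELECT `datoms00`.e AS `?x`, `datoms00`.v AS `?y` \
               FROM `datoms` AS `datoms00`";
    println!("{}\n=> {}", query, sql);
}
```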
This commit turns a complex `or` -- one in which not all variables are
unified, or in which not all arms have the same shape -- into a
computed table.
We do this by building a template CC that shares some state with the
destination CC, applying each arm of the `or` to a copy of the template
as if it were a standalone query, then building a projection list and
creating a `ComputedTable::Union`. This is pushed into the destination
CC's `computed_tables` list.
Finally, the variables projected from the UNION are bound in the
destination CC, so that unification occurs, and projection of the
outermost query can use bindings established by the `or-join`.
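The resulting SQL has roughly this shape (all aliases and attribute IDs invented): each arm of the `or` becomes one `SELECT` in a `UNION`, and the union is joined as a named computed table whose columns unify with the rest of the query:

```rust
fn main() {
    let sql = "SELECT `c00`.`?x` FROM \
               (SELECT `datoms01`.e AS `?x` FROM `datoms` AS `datoms01` WHERE `datoms01`.a = 65 \
                UNION \
                SELECT `datoms02`.e AS `?x` FROM `datoms` AS `datoms02` WHERE `datoms02`.a = 66) \
               AS `c00`";
    println!("{}", sql);
}
```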
This commit includes projection of type codes from heterogeneous `UNION`
arms: we compute a list of variables for which a definite type is
unknown in at least one arm, and force all arms to project either a type
tag column or a fixed type. It's important that each branch of a UNION
project the same columns in the same order, hence the projection of
fixed values.
The translator is similarly extended to project the type tag column name
or the known value_type_tag to support this.
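A sketch of the generated SQL (aliases and the tag value invented): the first arm knows `?v`'s type and projects a fixed tag, while the second reads the stored tag, so both arms project the same columns in the same order:

```rust
fn main() {
    let sql = "SELECT `datoms01`.v AS `?v`, 10 AS `?v_value_type_tag` \
               FROM `datoms` AS `datoms01` \
               UNION \
               SELECT `all_datoms02`.v AS `?v`, `all_datoms02`.value_type_tag AS `?v_value_type_tag` \
               FROM `all_datoms` AS `all_datoms02`";
    println!("{}", sql);
}
```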
Review comment: clarify union type extraction.
This commit:
- Defines a new kind of column, distinct from the eavt columns in
`DatomsColumn`, to model the rows projected from subqueries. These
always name one of two things: a variable, or a variable's type tag.
Naturally the two cases are thus `Variable` and `VariableTypeTag`.
These are cheap to clone, given that `Variable` is an `Rc<String>`.
- Defines `Column` as a wrapper around `DatomsColumn` and
`VariableColumn`. Everywhere we used to use `DatomsColumn` we now
allow `Column`: particularly in constraints and projections.
- Broadens the definition of a table list in the intermediate
"query-sql" representation to include a SQL UNION. A UNION is
represented as a list of queries and an alias.
- Implements translation from a `ComputedTable` to the query-sql
representation. In this commit we only project vars, not type tags.
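Putting those pieces together, the column model might be sketched like this (variant names beyond those quoted above are assumptions):

```rust
use std::rc::Rc;

type Variable = Rc<String>; // cheap to clone, per the text above

// The eavt columns of the datoms table.
enum DatomsColumn { Entity, Attribute, Value, Tx, ValueTypeTag }

// Columns projected from subqueries: a variable, or its type tag.
enum VariableColumn {
    Variable(Variable),
    VariableTypeTag(Variable),
}

// The wrapper now used in constraints and projections.
enum Column {
    Fixed(DatomsColumn),
    Variable(VariableColumn),
}
```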
Review comment: discuss bind_column_to_var for ValueTypeTag.
Review comment: implement From<Vec<T>> for ConsumableVec<T>.
Complex `or`s are translated to SQL as a subquery -- in particular, a
subquery that's a UNION. Conceptually, that subquery is a computed
table: `all_datoms` and `datoms` yield rows of e/a/v/tx, and each
computed table yields rows of variable bindings.
The table itself is a type, `ComputedTable`. Its `Union` case contains
everything a subquery needs: a `ConjoiningClauses` and a projection
list, which together allow us to build a SQL subquery, and a list of
variables that need type code extraction. (This is discussed further in
a later commit.)
Naturally we also need a way to refer to columns in a computed table.
We model this by a new enum case in `DatomsTable`, `Computed`, which
maintains an integer value that uniquely identifies a computed table.
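In sketch form (the field names are assumptions; the types themselves are named above):

```rust
use std::collections::BTreeSet;

// Stand-ins so the sketch is self-contained.
type Variable = String;
struct ConjoiningClauses;

enum ComputedTable {
    Union {
        arms: Vec<ConjoiningClauses>,        // one CC per `or` arm
        projection: BTreeSet<Variable>,      // columns every arm must project
        type_extraction: BTreeSet<Variable>, // vars needing type tag columns
    },
}

enum DatomsTable {
    Datoms,
    AllDatoms,
    Computed(usize), // uniquely identifies a computed table
}
```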
When we started expanding and narrowing type sets, it became impossible
to know conclusively during pattern application whether a variable's
type was fixed. We now figure that out at the end: if a variable has
only a single known type, we don't need to extract its type tag.
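That final check is simple; a sketch, with an illustrative `ValueType`:

```rust
use std::collections::BTreeSet;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum ValueType { Ref, Boolean, Instant, Long, Double, String, Keyword }

/// Decided only once algebrizing is complete: a variable's type tag
/// must be extracted unless exactly one type remains possible.
fn needs_type_extraction(possible: &BTreeSet<ValueType>) -> bool {
    possible.len() != 1
}
```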
* Pre: Expose more in edn.
* Pre: Make it easier to work with ValueAndSpan.
with_spans() is a temporary hack, needed only because I don't care to
parse the bootstrap assertions from text right now.
* Part 1a: Add `value_and_span` for parsing nested `edn::ValueAndSpan` instances.
I wasn't able to abstract over `edn::Value` and `edn::ValueAndSpan`;
there are multiple obstacles. I chose to roll with
`edn::ValueAndSpan`, since it exposes the additional span information
that we will want in order to form good error messages in the future.
* Part 1b: Add `keyword_map()`, parsing an `edn::Value::Vector` into an `edn::Value::Map`.
* Part 1c: Add `Log`/`.log(...)` for logging parser progress.
This is a terrible hack, but it sure helps to debug complicated nested
parsers. I don't even know what a principled approach would look
like; since our parser combinators are so frequently expressed in
code, it's hard to imagine a data-driven interpreter that can help
debug things.
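This is not the combinator library's API, but the idea can be sketched standalone: model a parser as a function from input to an optional (value, bytes consumed) pair, and wrap it to trace attempts and outcomes:

```rust
fn log<T>(
    name: &'static str,
    p: impl Fn(&str) -> Option<(T, usize)>,
) -> impl Fn(&str) -> Option<(T, usize)> {
    move |input| {
        eprintln!("{}: trying {:?}", name, input);
        let result = p(input);
        eprintln!("{}: {}", name, if result.is_some() { "matched" } else { "failed" });
        result
    }
}

fn main() {
    let digit = log("digit", |s: &str| {
        s.chars().next().filter(char::is_ascii_digit).map(|c| (c, c.len_utf8()))
    });
    let _ = digit("1x");
}
```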
* Part 2: Use `value_and_span` apparatus in tx-parser/.
I break an abstraction boundary by returning the value column as an
`edn::ValueAndSpan` rather than just an `edn::Value`. That is, the
transaction processor shouldn't care where the `edn::Value` it is
processing arose -- even if we care to track that information, we
should bake it into the `Entity` type. We do this because we need to
dynamically parse the value column to support nested maps, and parsing
requires a full `edn::ValueAndSpan`. Alternatively, we could cheat and
fake the spans when parsing nested maps, but that's potentially
expensive.
* Part 3: Use `value_and_span` apparatus in query-parser/.
* Part 4: Use `value_and_span` apparatus in root crate.
* Review comment: Make Span and SpanPosition Copy.
* Review comment: nits.
* Review comment: Make `or` be `or_exactly`.
I baked the eof checking directly into the parser, rather than using
the skip and eof parsers. I also took the time to restore some tests
that were mistakenly commented out.
* Review comment: Extract and use def_matches_* macros.
* Review comment: .map() as late as possible.
Part 1, core: use Rc for String and Keyword.
Part 2, query: use Rc for Variable.
Part 3, sql: use Rc for args in SQLiteQueryBuilder.
Part 4, query-algebrizer: use Rc.
Part 5, db: use Rc.
Part 6, query-parser: use Rc.
Part 7, query-projector: use Rc.
Part 8, query-translator: use Rc.
Part 9, top level: use Rc.
Part 10: intern Ident and IdentOrKeyword.
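The core of the change, sketched; the tuple-struct shape is an assumption, but the `Rc<String>` wrapping matches the description above:

```rust
use std::rc::Rc;

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Variable(Rc<String>);

fn main() {
    let x = Variable(Rc::new("?x".to_string()));
    let y = x.clone(); // bumps a refcount; no string is copied
    assert!(Rc::ptr_eq(&x.0, &y.0));
}
```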
`mod.rs` defines the module and `ConjoiningClauses` itself, complete with
methods to record facts and ask it questions.
`pattern.rs`, `predicate.rs`, `resolve.rs`, and `or.rs` include particular
functionality around accumulating certain kinds of patterns.
Only `or.rs` includes significant new code; the rest is just split.
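A sketch of the resulting layout as seen from `mod.rs` (stub bodies here; each `mod` lives in its own file in the real crate):

```rust
mod pattern {   /* accumulating pattern clauses */ }
mod predicate { /* accumulating predicate clauses */ }
mod resolve {   /* resolving arguments against known bindings */ }
mod or {        /* `or` handling: the significant new code */ }

pub struct ConjoiningClauses { /* facts recorded, questions answered */ }
```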