This is a big deck-chair rearrangement. It moves FindQuery into
query-algebrizer, and moves the validation from ParsedFindQuery ->
FindQuery there as well.
Some tests were re-homed for this.
In addition, the little-used maplit crate dependency was replaced with
inline expressions.
`tx-ids` allows transaction IDs to be enumerated efficiently.
`tx-data` allows transaction log data to be extracted efficiently.
We might eventually allow filtering by impacted attribute sets as well.
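A hedged sketch of how these might be used together in a query; the exact argument and binding shapes here are my assumption, not taken from this log:
```
;; Enumerate the transactions in a tx ID range, then expand each
;; transaction into its constituent datoms.
[:find ?e ?a ?v ?tx ?added
 :where
 [(tx-ids $ 1000 2000) [?tx ...]]
 [(tx-data $ ?tx) [[?e ?a ?v ?tx ?added]]]]
```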
* Pre: use debugcli in VSCode.
* Pre: wrap subqueries in parentheses in output SQL.
* Pre: add ExistingColumn.
This lets us refer to columns by name, rather than only
pointing to qualified aliases.
* Pre: add `Into<TypedValue>` for `&str`.
* Pre: add Store.transact.
* Pre: cleanup.
* Parse and algebrize simple aggregates. (#312)
* Follow-up: print aggregate columns more neatly in the CLI.
* Useful ValueTypeSet helpers.
* Allow for entity inequalities.
* Add `differ`, a ref-specialized not-equals (see the sketch after this list).
* Add `unpermute`, a function for getting unique, distinct pairs from bindings.
* Review comments.
* Add `the` pseudo-aggregation operator.
This allows a corresponding value to be returned when a query
includes a single `min` or `max` aggregate; see the sketch below.
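A hedged sketch of how `differ` and `the` read in a query, using hypothetical attribute names:
```
;; `differ` is a ref-specialized not-equals between two entities.
[:find ?a ?b
 :where
 [?a :node/parent ?p]
 [?b :node/parent ?p]
 [(differ ?a ?b)]]

;; `the` pairs a plain variable with a single min/max aggregate, so the
;; corresponding value comes back alongside the aggregated one.
[:find (the ?e) (max ?date)
 :where [?e :article/published ?date]]
```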
Trailing whitespace was stripped using the following shell script:
```
find . -type f -not -path "*target*" \
'(' -name '*.rs' -o -name '*.md' -o -name '*.toml' ')' -print0 | \
xargs -0 sed -i '' -E 's/[[:space:]]*$//'
```
This is admittedly imperfect, but it hits everything that was a problem in this repo.
This version removes nalexander's lovely matrix code. It turned out
that scalar and tuple bindings are sufficiently different from coll
and rel -- they can directly apply as values in the query -- that
there was no point in jumping through hoops to turn those single
values into a matrix.
Furthermore, I've standardized us on a `Vec<TypedValue>`
representation for rectangular matrices, which should be much
more efficient, but would have required rewriting that code.
Finally, coll and rel are sufficiently different from each other
-- coll doesn't require processing nested collections -- that
my attempts to share code between them fell somewhat flat. I had
lots of nice ideas about zipping together cycles and such, but
ultimately I ended up with relatively straightforward, if a bit
repetitive, code.
The next commit will demonstrate the value of this work -- tests
that exercised scalar and tuple grounding now collapse down to
the simplest possible SQL.
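For reference, a hedged sketch of the four `ground` binding shapes discussed above (the values are illustrative):
```
[(ground 17) ?x]                               ;; scalar: applies directly as a value
[(ground [17 "foo"]) [?x ?y]]                  ;; tuple: likewise applies directly
[(ground [17 21 34]) [?x ...]]                 ;; coll: no nested collections to process
[(ground [[17 "foo"] [21 "bar"]]) [[?x ?y]]]   ;; rel: a rectangular matrix of values
```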
* Part 1 - Parse `not` and `not-join`
* Part 2 - Validate `not` and `not-join` pre-algebrization
* Address review comments from rnewman.
* Remove `WhereNotClause` and populate `NotJoin` with `WhereClause`.
* Fix validation for `not` and `not-join`, removing tests that were invalid.
* Address rustification comments.
* Rebase against `rust` branch.
* Part 3 - Add required types for NotJoin.
* Implement `PartialEq` for `ConjoiningClauses` so `ComputedTable` can be included inside `ColumnConstraint::NotExists`.
* Part 4 - Implement `apply_not_join`
* Part 5 - Call `apply_not_join` from inside `apply_clause`
* Part 6 - Translate `not-join` into `NOT EXISTS` SQL (see the sketch after this list).
* Address review comments.
* Rename `projected` to `unified` to better describe the fact that we are not projecting any variables.
* Check for presence of each unified var in either `column_bindings` or `input_bindings` and bail if not there.
* Copy over `input_bindings` for each var in `unified`.
* Only copy over the first `column_binding` for each variable in `unified` rather than the whole list.
* Update tests.
* Address review comments.
* Make output from Debug for NotExists more useful
* Clear up a misunderstanding: any single failing clause inside the `not` will cause the entire `not` to be considered empty.
* Address review comments.
* Remove Limit requirement from cc_to_exists.
* Use Entry.or_insert instead of matching on the entry to add to column_bindings.
* Move addition of value_bindings to before apply_clauses on template.
* Tidy up tests with some variable reuse.
* Address nits.
* Address review comments.
* Move addition of column_bindings to above apply_clause.
* Update tests.
* Add test to ensure that unbound vars fail
* Improve test for unbound variable to check for correct variable and error
* Address nits.
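A hedged sketch of the whole pipeline, with hypothetical attribute names; the SQL is only indicative of the shape:
```
;; `not-join` unifies only the variables listed in its first argument.
[:find ?x
 :where
 [?x :person/name ?name]
 (not-join [?x]
   [?x :person/banned true])]

;; The inner clauses translate to a correlated subquery, roughly:
;;   ... WHERE ... AND NOT EXISTS
;;     (SELECT 1 FROM datoms AS datoms01 WHERE ...)
```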
* Pre: put query parts in alphabetical order.
* Pre: rename 'input' to 'query' in translate tests.
* Part 1: parse :limit.
* Part 2: validate and escape variable parameters in SQL.
* Part 3: algebrize and translate limits.
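A hedged sketch of both limit forms; spelling a variable limit as `:limit ?n` bound via `:in` is my assumption from the parts above:
```
;; A fixed limit.
[:find ?x :where [?x :foo/bar _] :limit 10]

;; A limit supplied as a query input; the variable parameter is
;; validated and escaped on its way into the SQL.
[:find ?x :in ?n :where [?x :foo/bar _] :limit ?n]
```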
This adds an `:order` keyword to `:find`.
If present, the results of the query will be an ordered set rather than
an unordered set; rows will appear in an order defined by each
`:order` entry.
Each entry can be one of three things:
- A var, `?x`, meaning "order by ?x ascending".
- A pair, `(asc ?x)`, meaning "order by ?x ascending".
- A pair, `(desc ?x)`, meaning "order by ?x descending".
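For example (attribute name hypothetical):
```
;; Newest first by date, breaking ties by entity ID ascending.
[:find ?post ?date
 :where [?post :post/published ?date]
 :order (desc ?date) ?post]
```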
Values will be ordered in this sequence for asc, and in reverse for desc:
1. Entity IDs, in ascending numerical order.
2. Booleans, false then true.
3. Timestamps, in ascending numerical order.
4. Longs and doubles, intermixed, in ascending numerical order.
5. Strings, in ascending lexicographic order.
6. Keywords, in ascending lexicographic order, considering the entire
ns/name pair as a single string separated by '/'.
Subcommits:
Pre: make bound_value public.
Pre: generalize ErrorKind::UnboundVariable for use in order.
Part 1: parse (direction, var) pairs.
Part 2: parse :order clause into FindQuery.
Part 3: include order variables in algebrized query.
We add order variables to :with, so we can reuse its type tag projection
logic, and so that we can phrase ordering in terms of variables rather
than datoms columns.
Part 4: produce SQL for order clauses.
Part 1, core: use Rc for String and Keyword.
Part 2, query: use Rc for Variable.
Part 3, sql: use Rc for args in SQLiteQueryBuilder.
Part 4, query-algebrizer: use Rc.
Part 5, db: use Rc.
Part 6, query-parser: use Rc.
Part 7, query-projector: use Rc.
Part 8, query-translator: use Rc.
Part 9, top level: use Rc.
Part 10: intern Ident and IdentOrKeyword.