* Add a failing test for EDN parsing '…'.
* Expose a SQLValueType trait to get value_type_tag values out of a ValueType.
* Add accessors to FindSpec.
* Implement querying.
* Implement rudimentary projection.
* Export mentat_db::new_connection.
* Export symbols from mentat.
* Add rudimentary end-to-end query tests.
* Add top-level `Conn`. Fixes #296.
This is a little different from the API rnewman and I originally
discussed in https://public.etherpad-mozilla.org/p/db-conn-thoughts.
A few notes:
- I was led to make a `Schema` instance the thing that is shared,
rather than a `db::DB`. It's possible that queries will want to
know the current transaction at some point (to prevent races, or to
query historical data), but that can be a future consideration.
- The generation number just allows for a cheap comparison. I don't
care to handle races to transact just yet; the long term plan might
be to make embedding applications responsible for avoiding races, or
we might handle queuing transactions and yielding report futures in
Mentat itself.
- The sharing of the partition maps is a little more subtle than
expected. Partition maps are volatile: a successful Mentat
transaction always advances the :db.part/tx partition, so it's not
worth passing references around. This means that consumers must
clone the partition map, keeping just a single clone per transaction.
Clean some cruft.
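A rough sketch of the shape described in the notes above, using stand-in types and illustrative field names rather than the actual mentat API:

```rust
// Sketch only: `Schema` and `PartitionMap` stand in for the real mentat_db
// types; the field layout is illustrative.
use std::sync::{Arc, Mutex};

struct Schema;                    // stable; shared rather than copied
#[derive(Clone)]
struct PartitionMap;              // volatile; advances on every successful transact

struct Metadata {
    generation: u64,              // bumped on each successful transact; cheap to compare
    partition_map: PartitionMap,  // consumers clone this, once per transaction
    schema: Arc<Schema>,          // shared with queriers
}

pub struct Conn {
    metadata: Mutex<Metadata>,
}

impl Conn {
    // After a successful transact, install the new partition map and the
    // (possibly unchanged) schema, and bump the generation.
    fn after_transact(&self, partition_map: PartitionMap, schema: Arc<Schema>) {
        let mut metadata = self.metadata.lock().unwrap();
        metadata.generation += 1;
        metadata.partition_map = partition_map;
        metadata.schema = schema;
    }
}
```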
* Review comments.
* Pre: Drop unneeded tx0 from search results.
* Pre: Don't require a schema in some of the DB code.
The idea is to separate the transaction applying code, which is
schema-aware, from the concrete storage code, which is just concerned
with getting bits onto disk.
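Roughly, the split looks like this (illustrative names and signatures, not the actual mentat_db functions):

```rust
// Sketch only: the schema-aware layer interprets datoms; the storage layer
// just writes rows. `Schema` and `Datom` are stand-ins.
struct Schema;
struct Datom;

// Schema-aware: resolves idents and type-checks values before handing off.
fn apply_transaction(schema: &Schema, datoms: &[Datom]) {
    let _ = (schema, datoms);
    // ... validate against the schema, then call into storage ...
}

// Storage-only: concerned with getting bits onto disk; no Schema required.
fn write_datoms(datoms: &[Datom]) {
    let _ = datoms;
    // ... plain SQL inserts ...
}
```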
* Pre: Only reference Schema, not DB, in debug module.
This is part of a larger separation of the volatile PartitionMap,
which is modified every transaction, from the stable Schema, which is
infrequently modified.
* Pre: Fix indentation.
* Extract part of DB to new SchemaTypeChecking trait.
* Extract part of DB to new PartitionMapping trait.
* Pre: Don't expect :db.part/tx partition to advance when tx fails.
This fails right now, because we allocate tx IDs even when we shouldn't.
* Sketch a db interface without DB.
* Add ValueParseError; use error-chain in tx-parser.
This can be simplified when
https://github.com/Marwes/combine/issues/86 makes it to a published
release, but this unblocks us for now. This converts the `combine`
error type `ParseError<&'a [edn::Value]>` to a type with owned
`Vec<edn::Value>` collections, re-using `edn::Value::Vector` for
making them `Display`.
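A sketch of that owned error shape (the field names are assumptions, and it presumes edn::Value is Clone and Display, as the note above suggests):

```rust
// Sketch only: an owned error type replacing combine's
// ParseError<&'a [edn::Value]>; the fields here are illustrative.
extern crate edn;

use std::fmt;

#[derive(Debug)]
pub struct ValueParseError {
    pub message: String,
    // Owned copies of the offending values instead of a borrowed slice.
    pub remaining: Vec<edn::Value>,
}

impl fmt::Display for ValueParseError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Re-use edn::Value::Vector's Display to show the remaining input.
        write!(f, "{}: {}", self.message, edn::Value::Vector(self.remaining.clone()))
    }
}
```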
* Pre: Accept Borrow<Schema> instead of just &Schema in debug module.
This makes it easy to use Rc<Schema> or Arc<Schema> without inserting
&* sigils throughout the code.
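For example (a toy function, not the actual debug-module helpers):

```rust
// Sketch of the Borrow<Schema> pattern; `Schema` and `dump_datoms` are
// stand-ins for the real type and debug helpers.
use std::borrow::Borrow;
use std::rc::Rc;

struct Schema;

fn dump_datoms<S: Borrow<Schema>>(schema: S) {
    let schema: &Schema = schema.borrow();
    let _ = schema;
    // ... walk the datoms table using `schema` ...
}

fn main() {
    let shared = Rc::new(Schema);
    dump_datoms(shared.clone()); // Rc<Schema> works directly, no `&*` needed
    dump_datoms(&Schema);        // and so does a plain reference
}
```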
* Use error-chain in query-parser.
There are a few things to point out here:
- the fine grained error types have been flattened into one crate-wide
error type; it's pretty easy to regain the granularity as needed.
- edn::ParseError is automatically lifted to
mentat_query_parser::errors::Error;
- we use mentat_parser_utils::ValueParser to maintain parsing error
information from `combine`.
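The wiring looks roughly like this (a sketch, assuming edn::ParseError meets error-chain's foreign_links requirements):

```rust
// Sketch of the crate-wide error type; module layout is illustrative.
#[macro_use]
extern crate error_chain;
extern crate edn;

error_chain! {
    types {
        Error, ErrorKind, ResultExt, Result;
    }
    foreign_links {
        // edn's parse error is lifted into this crate's Error automatically,
        // so `?` works on edn parsing results in Result-returning functions.
        EdnParseError(edn::ParseError);
    }
}
```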
* Patch up top-level.
* Review comment: Only `borrow()` once.
* Leave a pointer to issue 288.
* Re-export mentat_db::types::DB from mentat_db.
* Parse EDN strings in the query parser.
* Export 'public' API from mentat_query_parser's top level.
* Stub out mentat::q_once.
* Test the mentat_query directory on Travis.
* Export common types from edn.
This allows you to write
use edn::{PlainSymbol,Keyword};
instead of
use edn::symbols::{PlainSymbol,Keyword};
* Add an edn::Value::is_keyword predicate.
* Clean up query, preparing for query-parser.
* Make EDN keywords and symbols take Into<String> arguments.
* Implement parsing of simple :find lists.
* Rustfmt query-parser. Split find and query.
* Review comment: values_to_variables now returns a NotAVariableError on failure.
* Review comment: rename gimme to to_parsed_value.
* Review comment: add comments.
Starting to work out the project layout for sub-crates. The crate inside query-parser/ is "datomish-query-parser" and the core code in src/ depends on it.
This allows for code to run before and after a schema fragment is
added for the first time.
The anticipated use for this is twofold:
1. To do initial setup, e.g., defining global entities.
2. To 'adopt' unmanaged attributes already defined in the store.
This 'pre' would manually alter or retract attributes so that the
transact of the new schema datoms can complete.
For example, if properties :foo/bar and :foo/baz will be unchanged,
but :noo/zob needs to change from a string to an integer, the :none
pre-function can alter the ident, and the :none post-function can
migrate and clean up.
This generalizes the transactor loop to allow callers to run
an arbitrary function within an `in-transaction!` body.
Combined with exposing `<report-transact-tx-data!`, this allows
an admittedly sophisticated consumer to conditionally query and
transact in a consistent way -- for example, cleaning up inconsistent
data then transacting a new schema version.
Altering uniqueness and cardinality attributes works, with the exception
of adding uniqueness to an attribute that previously had none.
:db/noHistory and :db/isComponent changes are implemented but untested,
and aren't really supported by Datomish anyway.
The metaphor we use is that of "evolution", where each "evolutionary
step" contains a number of different "generations". Entities in the
process of being resolved are increasingly "evolved" into simpler
generations, until no further evolution is possible.
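In code, the loop is roughly a fixpoint iteration (an abstract sketch; the types and methods are illustrative, not the actual transactor internals):

```rust
// Sketch only: evolve a generation repeatedly until no further evolution
// is possible.
struct Generation {
    // ... populations of unresolved and resolved entities ...
}

impl Generation {
    fn can_evolve(&self) -> bool { false }          // stand-in
    fn evolve_one_step(self) -> Generation { self } // stand-in
}

fn evolve(mut generation: Generation) -> Generation {
    while generation.can_evolve() {
        generation = generation.evolve_one_step();
    }
    generation
}
```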