Simplify.
This has a watcher collect txid -> AttributeSet mappings each time a
transact occurs. On commit we retrieve those mappings and hand them over
to the observer service, which filters them and packages them up for
dispatch.
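A minimal sketch of that flow, using illustrative stand-in types rather than Mentat's real API:

```
use std::collections::{BTreeMap, BTreeSet};

// Illustrative stand-ins; not Mentat's real types.
type Entid = i64;
type AttributeSet = BTreeSet<Entid>;

#[derive(Default)]
struct TxWatcher {
    // txid -> attributes touched in that transact.
    collected: BTreeMap<Entid, AttributeSet>,
}

impl TxWatcher {
    /// Record an attribute seen while applying transaction `tx`.
    fn datom(&mut self, tx: Entid, attribute: Entid) {
        self.collected.entry(tx).or_default().insert(attribute);
    }

    /// On commit: drain the accumulated mappings so they can be
    /// handed to the observer service for filtering and dispatch.
    fn take_mappings(&mut self) -> BTreeMap<Entid, AttributeSet> {
        std::mem::take(&mut self.collected)
    }
}
```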
Tidy up
* Use fixed-size arrays for bootstrap datoms, not vecs.
* Wide-ranging cleanup.
This commit:
- Deletes some dead code.
- Marks some functions only used by tests as cfg(test).
- Adds pub(crate) to a bunch of functions.
- Cleans up a few other nits.
* Use the cache to make constant queries super fast.
* Fix translate tests to match: we no longer generate SQL for many of them!
* Accumulate additions and removals into the cache.
* Make attribute cache clone-on-write; store it in Metadata.
* Allow caching of fulltext attributes, interning strings.
This puts caching in mentat_db, adds a reverse lookup capability for
unique attributes, and populates bidirectional caches with a single
SQL cursor walk.
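A simplified sketch of the bidirectional cache idea, with toy types standing in for Mentat's (one pass over rows stands in for the SQL cursor walk):

```
use std::collections::BTreeMap;

// Illustrative stand-ins; not Mentat's real types.
type Entid = i64;
type Value = String;

#[derive(Default)]
struct BidirectionalCache {
    forward: BTreeMap<Entid, Value>, // entity -> value
    reverse: BTreeMap<Value, Entid>, // value -> entity (unique attributes)
}

impl BidirectionalCache {
    /// One pass over the rows -- standing in for a single SQL cursor
    /// walk -- populates both directions at once.
    fn populate<I: IntoIterator<Item = (Entid, Value)>>(&mut self, rows: I) {
        for (e, v) in rows {
            self.forward.insert(e, v.clone());
            self.reverse.insert(v, e);
        }
    }
}
```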
Differentiate between begin_read and begin_uncached_read.
Note that we still allow toggling within InProgress, because there might be
transient local state that makes starting a new transaction impossible.
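A minimal sketch of that toggle, with assumed names rather than Mentat's real signatures:

```
// Illustrative only; not Mentat's real signatures.
struct InProgressRead {
    use_cache: bool,
}

impl InProgressRead {
    fn begin_read() -> InProgressRead {
        InProgressRead { use_cache: true }
    }

    fn begin_uncached_read() -> InProgressRead {
        InProgressRead { use_cache: false }
    }

    /// Toggling remains possible mid-transaction: transient local
    /// state might make starting a new transaction impossible.
    fn set_use_cache(&mut self, use_cache: bool) {
        self.use_cache = use_cache;
    }
}
```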
You can use this in conjunction with setting SQLITE3_LIB_DIR to control which SQLite is used.
See https://github.com/jgallagher/rusqlite for more.
Also add recent contributors to the authors array.
* Nit: Alphabetical ordering of imports
* Create Cache and provide functions for calling it
* Get tests working. Move to using NamespacedKeyword over KnownEntid in function signatures
* Add is_cached check to caching tests
* Move lazy and add/remove boolean flags to enums
* Move function definitions into generic trait and implement trait for AttributeCache
* Remove lazy cache and generalize cache
* Update tests
* Eager cache becomes a simple key-value store. AttributeMap handles attribute-storage specifics
* Update tests to test presence of correct values in cache
* Move EagerCache, AttributeValueProvider and ValueProvider into mentat_db
* Add test for get_for_entid
* Add test for lookup attribute
* Make caches cloneable. Add value_for alongside values_for (see the sketch after this list)
* Use cache in attribute lookups
* Split test for values and value and add cardinality
* Address review feedback r=rnewman
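A hedged sketch of the cache surface this list describes, using simplified stand-in types; the trait and method names are taken from the bullets above, not verified against the code:

```
// Illustrative stand-ins; not Mentat's real types.
type Entid = i64;
type TypedValue = String;

/// The generic cache surface sketched from the bullets above.
trait AttributeCache {
    /// The is_cached check used by the caching tests.
    fn is_cached(&self, attribute: &Entid) -> bool;
    /// Single value: cardinality-one attributes.
    fn value_for(&self, attribute: &Entid, entity: &Entid) -> Option<&TypedValue>;
    /// All values: cardinality-many attributes.
    fn values_for(&self, attribute: &Entid, entity: &Entid) -> Option<&Vec<TypedValue>>;
}
```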
Also move `now` into core, implement microsecond truncation.
This is so we don't return a more granular -- and thus subtly different --
timestamp in a `TxReport` than we put into the store.
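A sketch of the truncation, assuming chrono's 0.4 API (the real `now` lives in mentat_core):

```
use chrono::{DateTime, TimeZone, Utc};

/// Truncate to whole microseconds so the timestamp in a TxReport
/// equals the value we round-trip through the store.
pub fn now() -> DateTime<Utc> {
    let n = Utc::now();
    Utc.timestamp(n.timestamp(), n.timestamp_subsec_micros() * 1000)
}
```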
This includes two other changes:
* Split transact to expose an interface for TermWithTempIds.
* Return TxReport from each InProgress operation, not from commit.
Improve naming of read-only transactions.
Implement entid_for_type.
Simplify get_attribute.
Name ignored var in algebrizer.
Comment attribute_for_ident.
Make KnownEntid a core concept.
Expose lookup_value_for_attribute.
Implement HasSchema and a new query encapsulation on Conn.
Pre: export Queryable.
Pre: export AttributeBuilder from mentat_db.
Pre: fix module-level comment for tx/src/entities.rs.
Pre: rename some `to_` conversions to `into_`.
Pre: make AttributeBuilder::unique less verbose.
Pre: split out a HasSchema trait to abstract over Schema.
Pre: rename SchemaMap/schema_map to AttributeMap/attribute_map.
Pre: TypedValue/NamespacedKeyword conversions.
Pre: turn Unique and ValueType into TypedValue::Keyword.
Pre: export IntoResult.
Pre: export NamespacedKeyword from mentat_core.
Pre: use intern_set in tx.
Pre: add InternSet::len.
Pre: comment gardening.
Pre: remove inaccurate TODO from TxReport comment.
This was done using the following shell script:
```
find . -type f -not -path "*target*" \
'(' -name '*.rs' -o -name '*.md' -o -name '*.toml' ')' -print0 | \
xargs -0 sed -i '' -E 's/[[:space:]]*$//'
```
This is admittedly imperfect, but it manages to hit everything that was a problem in this repo.
* Pre: rename begin_transaction to begin_tx_application.
* Take an EXCLUSIVE transaction when bootstrapping, and an IMMEDIATE transaction when writing.
This avoids the remote possibility of another write sneaking in the door
while we're preparing to write, avoids us needing to upgrade locks, etc.
After a BEGIN IMMEDIATE, no other database connection will be able to write
to the database or do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes
can continue to read from the database, however.
An exclusive transaction causes EXCLUSIVE locks to be acquired on all
databases. After a BEGIN EXCLUSIVE, no other database connection except for
read_uncommitted connections will be able to read the database and no other
connection without exception will be able to write the database until the
transaction is complete.
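A sketch of how the two behaviors might be selected with rusqlite's transaction_with_behavior:

```
use rusqlite::{Connection, Result, TransactionBehavior};

/// IMMEDIATE for ordinary writes: take the write lock up front
/// instead of upgrading later.
fn write_tx(conn: &mut Connection) -> Result<()> {
    let tx = conn.transaction_with_behavior(TransactionBehavior::Immediate)?;
    // ... apply the transaction ...
    tx.commit()
}

/// EXCLUSIVE for bootstrapping: keep all other connections out.
fn bootstrap_tx(conn: &mut Connection) -> Result<()> {
    let tx = conn.transaction_with_behavior(TransactionBehavior::Exclusive)?;
    // ... write the bootstrap datoms ...
    tx.commit()
}
```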
* Hacky implementation of atomic multi-tx.
* Hold the last report, returning the InProgress from each operation.
* Rewrite transact in terms of InProgress (see the sketch after this list).
* Test rollback.
* Remove unused imports.
* Don't use Rc for transaction reports.
* Pre: break out USER0 as a part boundary constant.
* Export TX0 and USER0 from mentat_db. This is for testing.
* Review comments: commenting.
* Test tempid allocation and rollback.
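A self-contained sketch of the pattern, not Mentat's actual API: each operation hands back the InProgress plus a report, and nothing lands until an explicit commit:

```
struct TxReport {
    tx_id: u64,
}

struct InProgress {
    pending: Vec<u64>,
    next_tx: u64,
}

impl InProgress {
    fn begin() -> InProgress {
        InProgress { pending: Vec::new(), next_tx: 1 }
    }

    /// Each operation returns the InProgress plus its own TxReport,
    /// rather than reporting only at commit.
    fn transact(mut self, _edn: &str) -> (InProgress, TxReport) {
        let tx_id = self.next_tx;
        self.next_tx += 1;
        self.pending.push(tx_id);
        (self, TxReport { tx_id })
    }

    /// Everything accumulated lands atomically.
    fn commit(self) -> Vec<u64> {
        self.pending
    }

    /// Dropping the value discards every pending operation.
    fn rollback(self) {}
}

fn main() {
    let ip = InProgress::begin();
    let (ip, first) = ip.transact("[{:person/name \"Alice\"}]");
    let (ip, _second) = ip.transact("[{:person/name \"Bob\"}]");
    assert_eq!(first.tx_id, 1);
    ip.rollback(); // both operations are discarded together
}
```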
* Create mentat command line.
* Create tools directory containing new crate for mentat_cli.
* Add simple cli with mentat prompt.
* Remove rustc-serialize dependency
* Open DB inside CLI (#452) (#463)
* Open a named database, OR default to an in-memory database if no name is provided
Rearrange the workspace to allow importing the mentat crate in the CLI crate
Create a store object inside the REPL at startup for connecting to Mentat
Use the provided DB name to open a connection in the store
Accept the DB name as a command-line arg.
Open on CLI start
Implement a '.open' command to open the desired DB from inside the CLI
* Implement Close command to close current DB.
* Close the existing open DB and open a new in-memory DB
* Review comment: Use `combine` to parse arguments.
Move over to using Result rather than enums with error variants
* Accept and parse EDN Query and Transact commands (#453) (#465)
* Parse query and transact commands
* Implement is_complete for transactions and queries (see the sketch after this list)
* Improve the query parser. I'm still not happy with it, though.
There must be some way to retain the eof() after the `then` so that I don't have to move the skip over spaces and eof.
Make in-process command storing clearer.
Add comments around in-process commands.
Add alternative commands for transact/t and query/q
* Address review comments r=nalexander.
* Bump rust version number.
* Use `bail` when throwing errors.
* Improve edn parser.
* Remove references to unused `more` flag.
* Improve naming of query and transact commands.
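A sketch of the is_complete idea under an assumed rule (balanced delimiters; the CLI's real check may differ):

```
/// Complete once every opening delimiter is closed. (Delimiters
/// inside strings are ignored here for brevity.)
fn is_complete(input: &str) -> bool {
    let mut depth: i64 = 0;
    for c in input.chars() {
        match c {
            '[' | '(' | '{' => depth += 1,
            ']' | ')' | '}' => depth -= 1,
            _ => {}
        }
    }
    depth <= 0
}

fn main() {
    assert!(!is_complete("[:find ?x :where"));
    assert!(is_complete("[:find ?x :where [?x :db/ident _]]"));
}
```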
* Send queries and transactions to mentat and output the results (#466)
* Send queries and transactions to mentat and output the results
Move outputting of query and transaction results out of the store and into the REPL
* Add query and transact commands to help
* Execute queries and transacts passed in at startup
* Address review comments r=nalexander.
* Execute command line args in order
* Address rebase issues
* Exit CLI (#457) (#484) r=rnewman
* Implement exit command for cli tool
* Address review comments r=rnewman
* Include exit commands in help
* Show schema of current DB (#487)
* Fix rebase issues
* Address nit
* Match updated dependencies in the CLI crate and remove an unused import
* Update some dependencies.
* Update rusqlite to 0.12.
* Update error-chain to a forked version that implements Sync.
* Fix some compiler warnings.
* Remove unused imports in tests.
* Parse errors no longer naturally print with the expected symbol.
This commit adds a check to the partition map that a provided entity ID
has been mentioned (i.e., is present in the start:index range of one of
our partitions).
We introduce a newtype for known entity IDs, using this internally in
the tx expander to track user-provided entids that have passed the above
check (and IDs that we allocate as part of tempid processing). This
newtype is stripped prior to tx assertion.
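A sketch of such a newtype (shape assumed; the real KnownEntid lives in the Mentat codebase):

```
type Entid = i64;

/// An entity ID that has passed the partition-map check.
#[derive(Clone, Copy, Debug, Eq, Hash, Ord, PartialEq, PartialOrd)]
pub struct KnownEntid(pub Entid);

/// Stripped back to a plain Entid prior to tx assertion.
impl From<KnownEntid> for Entid {
    fn from(k: KnownEntid) -> Entid {
        k.0
    }
}
```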
In order that DB tests can continue to write
[:db/add 111 :foo/bar 222]
we add an additional fake partition to our test connections, ranging
from 100 to 1000.
This is an optimization that rejects invalid inputs earlier, at the
cost of expressive error messages. It should be possible to recover
the error messages, however.
This will reject input like `[:db/{add,retract} v :attribute/_reversed NOT-AN-ENTITY]`.
There are two broad approaches:
1) Handle reverse attribute notation dynamically, in the style that
Datomic does. This is the most flexible, but it's not a good fit
given that we produce strongly typed output from the parser.
Strongly typed input to the transactor has had many benefits, so I
don't want to roll it back for a relatively unimportant feature
like reverse notation -- especially not since Mentat does not
require :db.install/_attribute to modify schema attributes.
2) Handle reverse attribute notation in the parser itself, so that we can
produce strongly typed parser output while restricting the input.
I implemented this first and discovered that it's very difficult to
give sensible error messages in common cases.
In any case, the bulk of the code is the same between the two
approaches, and I wrote the tests for the dynamic version (with error
output), so that's what I'm rolling with.
This patch preserves the existing indentation, to highlight the
differences. The next patch will indent.
This is a big commit, but it breaks into two conceptual pieces. The
first is to "parse without copying". We replace a stream of an owned
collection of edn::ValueAndSpan and instead have a stream of a
borrowed collection of &edn::ValueAndSpan references. (Generally,
this is represented as an iterator over a slice, but it can be over
other things too.) Cloning such iterators is constant time, which
improves on cloning an owned collection of edn::ValueAndSpan, which is
linear time in the length of the collection and additional time
depending on the complexity of the EDN values.
The second conceptual piece is to parse keyword maps using a special
parser and a macro to build the parser implementations. Before, we
created a new edn::ValueAndSpan::Map to represent a keyword map in
vector form; since we're working with &edn::ValueAndSpan references
now, we can't create an &edn::ValueAndSpan reference with an
appropriate lifetime. Therefore we generalize the concept of
iteration slightly and turn keyword maps in map form into linear
iterators by flattening the value maps. This is a potentially
obscuring transformation, so we have to take care to protect against
some failure cases. (See the comments and the tests in the code.)
After these changes, parsing using `combine` is linear time (and
reasonably fast).
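A self-contained toy illustrating why the borrowed stream is cheap to clone (ValueAndSpan here is a stand-in for edn's type, not the real parser):

```
#[derive(Debug)]
struct ValueAndSpan {
    value: String,        // stands in for edn::Value
    span: (usize, usize),
}

fn parse_first<'a, I>(mut stream: I) -> Option<&'a ValueAndSpan>
where
    I: Iterator<Item = &'a ValueAndSpan> + Clone,
{
    // Constant time: cloning a slice iterator copies two pointers,
    // not the collection. A combinator can backtrack from here.
    let checkpoint = stream.clone();
    stream.next().or_else(|| {
        let mut fallback = checkpoint;
        fallback.next()
    })
}

fn main() {
    let values = vec![
        ValueAndSpan { value: ":db/add".into(), span: (1, 8) },
        ValueAndSpan { value: "111".into(), span: (9, 12) },
    ];
    // An iterator over a slice yields &ValueAndSpan references.
    println!("{:?}", parse_first(values.iter()));
}
```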
* Pre: unused import in translate.rs.
* Part 2: take a dependency on rusqlite for query arguments.
* Part 1: flatten V2 schema into V1. Add UUID and URI.
Bump expected ident and bootstrap datom count in tests.
* Part 5: parse edn::Value::Uuid.
* Part 3: extend ValueType and TypedValue to include Uuid.
* Part 4: add Uuid to query arguments.
* Part 6: extend db to support Uuid.
* Part 8: add a tx-parser test for #f NaN and #uuid.
* Part 7: parse and algebrize UUIDs in queries.
* Part 1: parse #inst in EDN and throughout query engine.
* Part 3: handle instants in db.
* Part 2: instants never match integers in queries.
* Part 4: use DateTime for tx_instants.
* Add a test for adding and querying UUIDs and instants.
* Review comments.