Commit graph

331 commits

Author SHA1 Message Date
Nick Alexander d4166cc67c Part 6: Remove query-parser entirely. 2018-06-04 15:04:39 -07:00
Nick Alexander 47441f56dc Part 5: Push FindQuery into query-algebrizer; structure errors.
This is a big deck-chair re-arrangement.  This puts FindQuery into
query-algebrizer and puts the validation from ParsedFindQuery ->
FindQuery there as well.

Some tests were re-homed for this.

In addition, the little-used maplit crate dependency was replaced with
inline expressions.
2018-06-04 15:04:39 -07:00
Nick Alexander 47a0f40cce Pre: Fix warnings. 2018-06-04 14:52:51 -07:00
Richard Newman b2e98f44f6
Generalize Entity by value type. (#701) (#691) r=rnewman
* Part 3: Parameterize Entity by value type.

This isn't quite right, because after parsing, we shouldn't care
about `edn::ValueAndSpan`; we should care only about `edn::Value`.
However, I think we can drop `ValueAndSpan` entirely if we just use
`rust-peg` (and its simpler error messages) rather than a mix of
`rust-peg` and `combine`.

In any case, this paves the way to transacting `Entity<TypedValue>`,
which is a nice step towards building general entities.

* Part 1: Add AttributePlace.

* Part 2: Name other places EntityPlace and ValuePlace.

Now we're consistent and closer to self-documenting.  Both matter more
as we expose `Entity` as the thing to build for programmatic usage.

* Part 4: Allow Ident and TempId in ValuePlace.

The parser will never produce these, since determining whether an
integer/keyword or string is an ident or a tempid, respectively, in
the value place requires the schema.

But a builder that produces `Entity` instances directly will want to
produce these.
2018-05-15 00:43:07 -07:00
Nick Alexander 46c2a0801f Add type checking and constraint checking to the transactor. (#663, #532, #679)
This should address #663, by re-inserting type checking in the
transactor stack after the entry point used by the term builder.

Before this commit, we were using an SQLite UNIQUE index to assert
that no `[e a]` pair, with `a` a cardinality one attribute, was
asserted more than once.  However, that's not in line with Datomic,
which treats transaction inputs as a set and allows a single datom
like `[e a v]` to appear multiple times.  It's both awkward and not
particularly efficient to look for _distinct_ repetitions in SQL, so
we accept some runtime cost in order to check for repetitions in the
transactor.  This will allow us to address #532, which is really about
whether we treat inputs as sets.  A side benefit is that we can
provide more helpful error messages when the transactor does detect
that the input truly violates the cardinality constraints of the
schema.

This commit builds a trie while error checking and collecting final
terms, which should be fairly efficient.  It also allows a simpler
expression of input-provided :db/txInstant datoms, which in turn
uncovered a small issue with the transaction watcher, whereby the
watcher would not see non-input-provided :db/txInstant datoms.
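
A rough sketch of that check (the transactor builds a trie; the flat map
keyed by [e a] here captures the same idea, and the names are placeholders,
not Mentat's code):

```rust
use std::collections::btree_map::Entry;
use std::collections::BTreeMap;

type Entid = i64;

/// Accept repeated identical [e a v] terms (set semantics), but report a
/// conflict when a cardinality-one attribute is given two different values.
fn check_cardinality_one(
    terms: &[(Entid, Entid, String)], // (e, a, v), with `a` cardinality one
) -> Result<BTreeMap<(Entid, Entid), String>, String> {
    let mut seen: BTreeMap<(Entid, Entid), String> = BTreeMap::new();
    for (e, a, v) in terms {
        match seen.entry((*e, *a)) {
            Entry::Vacant(slot) => {
                slot.insert(v.clone());
            }
            Entry::Occupied(slot) => {
                if slot.get() != v {
                    return Err(format!(
                        "[{} {}] asserted with both {:?} and {:?}",
                        e, a, slot.get(), v
                    ));
                }
                // An identical repeat is fine: the input is treated as a set.
            }
        }
    }
    Ok(seen)
}
```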

This transition to Datomic-like input-as-set semantics allows us to
address #532.  Previously, two tempids that upserted to the same entid
would produce duplicate datoms, and that would have been rejected by
the transactor -- correctly, since we did not allow duplicate datoms
under the input-as-list semantics.  With input-as-set semantics,
duplicate datoms are allowed; and that means that we must allow
tempids to be equivalent, i.e., to resolve to the same tempid.

To achieve this, we:
- index the set of tempids
- identify tempid indices that share an upsert
- map tempids to a dense set of contiguous integer labels

We use the well-known union-find algorithm, as implemented by
petgraph, to efficiently manage the set of equivalent tempids.
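
A minimal sketch of that step using petgraph's union-find, assuming tempids
have already been mapped to indices; the surrounding bookkeeping is
paraphrased, not Mentat's actual code:

```rust
use petgraph::unionfind::UnionFind;

/// Given tempids indexed 0..tempid_count and pairs of indices that share an
/// upsert, return a labeling in which equivalent tempids share a label.
fn tempid_labels(tempid_count: usize, shared_upserts: &[(usize, usize)]) -> Vec<usize> {
    let mut uf = UnionFind::<usize>::new(tempid_count);
    for &(a, b) in shared_upserts {
        uf.union(a, b); // merge the equivalence classes of the two tempids
    }
    // Each index maps to its class representative; a further pass could remap
    // these representatives onto a dense 0..k range of labels.
    uf.into_labeling()
}
```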

Along the way, I've fixed and added tests for two small errors in the
transactor.  First, don't drop datoms resolved by upsert (#679).
Second, ensure that complex upserts are allocated.

I don't know quite what happened here.  The Clojure implementation
correctly kept complex upserts that hadn't resolved as complex
upserts (see
9a9dfb502a/src/common/datomish/transact.cljc (L436))
and then allocated complex upserts if they didn't resolve (see
9a9dfb502a/src/common/datomish/transact.cljc (L509)).

Based on the code comments, I think the Rust implementation must have
incorrectly tried to optimize by handling all complex upserts in at
most a single generation of evolution, and that's just not correct.
We're effectively implementing a topological sort, using very specific
domain knowledge, and it's not true that a node in a topological sort
can be considered only once!
2018-05-14 15:22:45 -07:00
Emily Toop 013629dec6
iOS and Android (Java) sdk framework (#643)
Documents the FFI layer for Mentat, and provides transaction functionality via an EDN string. Creates two native libraries for iOS (Swift) and Android (Java) and fully tests the FFI for both platforms.

Closes #619 #614 #611
2018-05-14 16:20:36 +01:00
Richard Newman 3dc68bcd38 Combine NamespacedKeyword and Keyword. (#689) r=nalexander
* Make properties on NamespacedKeyword/NamespacedSymbol private

* Use only a single String for NamespacedKeyword/NamespacedSymbol

* Review comments.

* Remove unsafe code in namespaced_name.

Benchmarking shows approximately zero change.

* Allow the types of ns and name to differ when constructing a NamespacedName.

* Make symbol namespaces optional.

* Normalize names of keyword/symbol constructors.

This will make the subsequent refactor much less painful.

* Use expect not unwrap.

* Merge Keyword and NamespacedKeyword.
2018-05-11 09:52:17 -07:00
Nick Alexander cbffe5e545 Use rust-peg for tx parsing.
There are a few reasons to do this:

- it's difficult to add symbol interning to combine-based parsers like
  tx-parser -- literally every type changes to reflect the interner,
  and that means every convenience macro we've built needs to change.
  It's trivial to add interning to rust-peg-based parsers.

- combine has rolled forward to 3.2, and I spent a similar amount of
  time investigating how to upgrade tx-parser (to take advantage of
  the new parser! macros in combine that I think are necessary for
  adapting to changing types) as I did just converting to rust-peg.

- it's easy to improve the error messages in rust-peg, whereas I have
  tried twice to improve the nested error messages in combine and am
  stumped.

- it's roughly 4x faster to parse strings directly as opposed to
  edn::ValueAndSpan, and it'll be even better when we intern directly.
2018-05-10 10:24:05 -07:00
Richard Newman e21156a754
Implement simple pull expressions (#638) r=nalexander
* Refactor AttributeCache populator code for use from pull.

* Pre: add to_value_rc to Cloned.

* Pre: add From<StructuredMap> for Binding.

* Pre: clarify Store::open_empty.

* Pre: StructuredMap cleanup.

* Pre: clean up a doc test.

* Split projector crate. Pass schema to projector.

* CLI support for printing bindings.

* Add and use ConjoiningClauses::derive_types_from_find_spec.

* Define pull types.

* Implement pull on top of the attribute cache layer.

* Add pull support to the projector.

* Parse pull expressions.

* Add simple pull support to connection objects.

* Tests for pull.

* Compile with Rust 1.25.

The only choice involved in this commit is that of replacing the
anonymous lifetime '_ with a named lifetime for the cache; since we're
accepting a Known, which includes the cache in question, I think it's
clear that we expect the function to apply to any given cache
lifetime.

* Review comments.

* Bail on unnamed attribute.

* Make assert_parse_failure_contains safe to use.

* Rework query parser to report better errors for pull.

* Test for mixed wildcard and simple attribute.
2018-05-04 12:56:00 -07:00
Nick Alexander 2b82ffb2e5 [tx] Fail transactions where complex upserts resolve to multiple entids. (#670)
This innocuous-looking change (upserts_ev -> upserts_e -> resolved in
all situations, rather than upserts_ev -> resolved in some situations)
is a significant change in semantics and assumptions in the
transactor.  Witness the large comment being removed about the same
tempid resolving in different generations!

To support this change, we provide more holistic errors for
conflicting upserts, which entails collecting some (relatively
expensive) diagnostic data.

I left in some debug logging, simply since it shouldn't hurt in
general, and will likely be useful for the next bug we see in the
transactor.
2018-05-01 15:34:44 -07:00
Richard Newman a2e13f624c
Add 'Binding', a structured value type to return from queries. (#657) r=nalexander
Bump to 0.7: breaking change.
2018-04-24 15:08:38 -07:00
Richard Newman a74a2deffc
Introduce RelResult rather than Vec<Vec<TypedValue>>. (#639) r=nalexander
* Pre: clean up core/src/lib.rs.
* Pre: use indexmap 1.0 in db and query-projector.
* Change rel results to be a RelResult instance, not a Vec<Vec<TypedValue>>.

This avoids memory fragmentation and improves locality by using a single
heap-allocated vector for all bindings, rather than a separate
heap-allocated vector for each row.

We hide this abstraction behind the `RelResult` type, which tracks the
stride length (width) of each row (see the sketch after this list).

* Don't allocate temporary vectors when projecting RelResults.
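
A hedged sketch of the layout described above (not necessarily the actual
`RelResult` definition):

```rust
pub struct RelResult<T> {
    pub width: usize,   // number of bindings per row (the stride)
    pub values: Vec<T>, // all bindings for all rows, row-major
}

impl<T> RelResult<T> {
    /// Iterate over rows as contiguous slices. Assumes `width > 0`.
    pub fn rows(&self) -> impl Iterator<Item = &[T]> {
        self.values.chunks(self.width)
    }
}
```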
2018-04-24 15:04:00 -07:00
Richard Newman 909b2a8be5 Refactoring: split up the projector crate. No other code changes. 2018-04-09 10:26:09 -07:00
Richard Newman 3f8464e8ed Implement vocabulary-driven schema upgrades. (#595) r=emily 2018-04-09 09:47:49 -07:00
Richard Newman a5cda7c3e9 Allow passing a TermBuilder to be transacted by InProgress; add TermBuilder::is_empty. r=emily 2018-04-09 09:47:49 -07:00
Emily Toop 175958e754 Address review comments @rnewman 2018-04-06 10:46:15 +01:00
Emily Toop 9741435026 Add helper functions for FFI. Many of these will go away as we expose the entity builder 2018-04-06 10:46:15 +01:00
Emily Toop 7382e3297d Add QueryBuilder to make querying over FFI easier 2018-04-06 10:46:15 +01:00
Richard Newman 4d8e179a59
Expose component_attributes on Schema. (#623) r=nalexander
Some parts of the query engine and transactor need to know whether an
attribute is a component attribute, and sometimes want to do so in
a generated SQL query. This is one way to do that.
2018-04-03 14:25:53 -07:00
Richard Newman 6c54e1d370
Support :db/noHistory for attributes. (#622) r=nalexander
At this point we never discard history, but this completes the API support for doing so.
2018-04-03 14:23:46 -07:00
Richard Newman 9cc5cbf288
Rename the helpful variant, AttributeBuilder::new, to AttributeBuilder::helpful. (#625) r=nalexander 2018-04-03 14:23:20 -07:00
Emily Toop 9f30fe6295 Create Mentat FFI and expose observers (#574)
* Tidy up and add txid at beginning of transaction

* Add ffi crate and new_store function

* Add register and unregister observer FFI, Store and Conn functions.
Also add android logging facilities

* Add function for fetching entids for attribute strings

* Add functions for iterating through TxReports

* Add sync to ffi boundary

* Move Extern types from submodule to lib in FFI.
For some reason, if these types are in a submodule, even if they are publicly used, the functions inside the FFI are not found in
Android. Works for iOS though. To be investigated later....

* Return to passing TxReports to observer function.
Also, remove some debug

* Expose DateTime and Utc publically

* Use Store in observer tests
2018-03-20 19:16:32 +00:00
Emily Toop ab957948b4 Move to using watcher.
Simplify.

This has a watcher collect txid -> AttributeSet mappings each time a
transact occurs. On commit we retrieve those mappings and hand them over
to the observer service, which filters them and packages them up for
dispatch.

Tidy up
2018-03-20 16:27:35 +00:00
Emily Toop d4365fa4cd Execute commands in a separate thread
Command Queue Executor to watch for new commands and execute them on a longer-running background thread
2018-03-20 16:27:35 +00:00
Emily Toop ecc4a7a35a Add tests 2018-03-20 16:27:35 +00:00
Emily Toop c2e5052877 Allow registration and unregistration of transaction observers from Conn 2018-03-20 16:27:35 +00:00
Emily Toop 9f3d2c08b2 Populate changeset of attributes inside TxReport during transact.
Batch up TxReports for entire transaction.
Notify observers about committed transaction.
Store transaction observer service inside Conn
2018-03-20 16:27:35 +00:00
Richard Newman 833ff92436
Simple aggregates. (#584) r=emily
* Pre: use debugcli in VSCode.
* Pre: wrap subqueries in parentheses in output SQL.
* Pre: add ExistingColumn.

This lets us make reference to columns by name, rather than only
pointing to qualified aliases.

* Pre: add Into for &str to TypedValue.
* Pre: add Store.transact.
* Pre: cleanup.
* Parse and algebrize simple aggregates. (#312)
* Follow-up: print aggregate columns more neatly in the CLI.
* Useful ValueTypeSet helpers.
* Allow for entity inequalities.
* Add 'differ', which is a ref-specialized not-equals.
* Add 'unpermute', a function for getting unique, distinct pairs from bindings.
* Review comments.
* Add 'the' pseudo-aggregation operator.

This allows for a corresponding value to be returned when a query
includes one 'min' or 'max' aggregate.
2018-03-12 15:18:50 -07:00
Richard Newman f42ae35b70 Update cache on write. (#566) r=emily
* Use the cache to make constant queries super fast.
* Fix translate tests to match: we no longer generate SQL for many of them!
* Accumulate additions and removals into the cache.
    * Make attribute cache clone-on-write; store it in Metadata.
    * Allow caching of fulltext attributes, interning strings.
2018-03-06 09:01:20 -08:00
Richard Newman bead9752bd Add InProgress.import to load transaction data from a file. r=emily
It's not streaming, but it'll do.
2018-03-06 08:59:30 -08:00
Richard Newman e33fe71c47 Rework caching and use it inside the query engine. (#553) r=emily
This puts caching in mentat_db, adds a reverse lookup capability for
unique attributes, and populates bidirectional caches with a single
SQL cursor walk.
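
A rough sketch of that single-pass population, with stand-in types rather
than Mentat's actual cache structs:

```rust
use std::collections::BTreeMap;

type Entid = i64;

/// One pass over (entid, value) rows for a unique attribute fills both the
/// forward (entid -> value) and reverse (value -> entid) caches.
fn populate_bidirectional<V: Clone + Ord>(
    rows: impl IntoIterator<Item = (Entid, V)>,
) -> (BTreeMap<Entid, V>, BTreeMap<V, Entid>) {
    let mut forward = BTreeMap::new();
    let mut reverse = BTreeMap::new();
    for (e, v) in rows {
        forward.insert(e, v.clone());
        reverse.insert(v, e); // reverse lookup is only sound for unique attributes
    }
    (forward, reverse)
}
```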

Differentiate between begin_read and begin_uncached_read.

Note that we still allow toggling within InProgress, because there might be
transient local state that makes starting a new transaction impossible.
2018-02-21 11:51:45 -08:00
Richard Newman ae91603bd0 Provide an API for creating truly empty stores (#561) r=grisha
* Part 1: split create_current_version.

* Part 2: add Store::create_empty and Conn::empty.

* Part 3 - Expose 'open_empty' command via CLI
2018-02-16 02:01:00 -08:00
Grisha Kruglov 93e5dff9c8
Revised uploader flow (battle-tested); CLI support for sync (#557) r=rnewman 2018-02-16 01:44:28 -08:00
Emily Toop 48ffa20d4c Add retract_kw 2018-02-15 18:47:58 +00:00
Emily Toop 9655c4b85d Add retract to entity builder. (#559) r=rnewman 2018-02-15 09:44:06 -08:00
Richard Newman 01d9b83a9b
Add EntityBuilder.add_kw. (#558) r=emily
* Add EntityBuilder.add_kw.

This allows you to skip your own attribute lookups, at the cost of
potentially doing the work more than once.

Also does value type checking.
2018-02-15 08:22:34 -08:00
Emily Toop dec86bb2c5
publicly expose KnownEntid (#554) 2018-02-14 19:06:50 +00:00
Richard Newman 23e11fabe6
Add a var! macro. (#548)(#550) r=emily 2018-02-14 09:32:37 -08:00
Kit Cambridge a6341f6fd6 Implement q_prepare with pre-bound variables. r=rnewman 2018-02-07 21:48:05 -08:00
Emily Toop 715d434945
Create generalized in-memory cache for attributes (#525)
* Nit: Alphabetical ordering of imports

* Create Cache and provide functions for calling it

* Get tests working. Move to using NamespacedKeyword over KnownEntid in function signature

* Add is_cached check to caching tests

* Move lazy and add/remove boolean flags to enums

* Move function definitions into generic trait and implement trait for AttributeCache

* Remove lazy cache and generalize cache

* Update tests

* Eager cache becomes simple key value store. AttributeMap handles attribute storing specifics

* Update tests to test presence of correct values in cache

* Move EagerCache, AttributeValueProvider and ValueProvider into mentat_db

* Add test for get_for_entid

* Add test for lookup attribute

* Make caches cloneable. Add value_for alongside values_for

* Use cache in attribute lookups

* Split test for values and value and add cardinality

* address review feedback r=rnewman
2018-02-07 10:56:12 -08:00
Richard Newman 66e6fef75e Define Store, use TabWriter in the CLI for aligning columnar output. (#540) r=emily
* Define Store, which is a simple container for a SQLite connection and a Conn.
  This is a breaking change.
* Return the FindSpec as part of QueryOutput, not just results.
* Switch to using stderr in appropriate places in CLI.
* Print columns in CLI output.
2018-02-01 09:29:07 -08:00
Richard Newman 37a7c9ea48 Validate attributes installed after open. (#538) r=emily
Make AttributeBuilder optionally helpful, fix tests.
2018-02-01 09:29:04 -08:00
Richard Newman 2614f498be Ergonomics improvements, including a kw macro. (#537) r=emily
* Add TypedValue::instant(micros).
* Add From<f64> for TypedValue.
* Add lookup_values_for_attribute to Conn.
* Add q_explain to Queryable.
* Expose an iterator over FindSpec's columns.
* Export edn from mentat crate. Export QueryExecutionResult.
* Implement Display for Variable and Element.
* Introduce a `kw` macro.

    This allows you to write:

    ```rust
    kw!(:foo/bar)
    ```

    instead of

    ```rust
    NamespacedKeyword::new("foo", "bar")
    ```

    … and it's more efficient, too.

Add `mentat::open`, eliminate use of `mentat_db` in some places.
2018-02-01 09:27:23 -08:00
Richard Newman 6ed5413cd4 Implement simple vocabulary management. (#504) r=emily,nalexander
Bump version to 0.5.1 to reflect this change.
2018-01-23 08:52:15 -08:00
Richard Newman 812f10b3e4 Add an EntityBuilder abstraction. r=nalexander,emily
This includes two other changes:

* Split transact to expose an interface for TermWithTempIds.
* Return TxReport from each InProgress operation, not from commit.
2018-01-23 08:52:09 -08:00
Richard Newman 4acc6d0658 InProgressRead, KnownEntid. r=nalexander,emily
Improve naming of read-only transactions.
    Implement entid_for_type.
    Simplify get_attribute.
    Name ignored var in algebrizer.
    Comment attribute_for_ident.
    Make KnownEntid a core concept.
    Expose lookup_value_for_attribute.
    Implement HasSchema and a new query encapsulation on Conn.
    Pre: export Queryable.
2018-01-23 08:40:18 -08:00
Richard Newman 6797a606b5 Preliminary work for vocabulary management. r=emily,nalexander
Pre: export AttributeBuilder from mentat_db.
Pre: fix module-level comment for tx/src/entities.rs.
Pre: rename some `to_` conversions to `into_`.
Pre: make AttributeBuilder::unique less verbose.
Pre: split out a HasSchema trait to abstract over Schema.
Pre: rename SchemaMap/schema_map to AttributeMap/attribute_map.
Pre: TypedValue/NamespacedKeyword conversions.
Pre: turn Unique and ValueType into TypedValue::Keyword.
Pre: export IntoResult.
Pre: export NamespacedKeyword from mentat_core.
Pre: use intern_set in tx.
Pre: add InternSet::len.
Pre: comment gardening.
Pre: remove inaccurate TODO from TxReport comment.
2018-01-23 08:25:32 -08:00
Richard Newman 224570fb45 Switch InProgress to be mutated in place. r=nalexander,emily
This is a breaking change, and involves a very small additional cost
in managing the partition map, but it makes it much more feasible to
implement traits on InProgress: now they don't need to chain back a
new InProgress each time.

Bump version to 0.5 to reflect the change in InProgress.
2018-01-23 08:14:13 -08:00
Emily Toop 8cc7f0b72e
Expose Uuid from the top level crate (#529) 2018-01-22 16:29:25 +00:00
Thom 023fd9b70b
Add a command to the CLI that displays the SQL and query plan for a query. Fixes #428. (#523) r=rnewman 2018-01-19 19:44:39 -05:00
Richard Newman e8ec59e464
Implement a simple direct lookup API. Fixes #111 (#503) r=grisha
* Add some helpers and refactor how queries are run (once).
* Implement lookup_value_for_attribute.
* Add a multi-value test for lookup_value_for_attribute.
2017-12-11 11:08:10 -08:00
Richard Newman b7fb44a5a6 Add a comment to InProgress. 2017-12-07 12:19:59 -08:00
Richard Newman a3b8fd3022 Add results type unwrapping helpers. 2017-12-06 14:47:29 -08:00
Richard Newman 95b9c7f7f5
Atomic multi-tx (#489). r=emily,nalexander
* Pre: rename begin_transaction to begin_tx_application.

* Take an EXCLUSIVE transaction when bootstrapping, and an IMMEDIATE transaction when writing.

This avoids the remote possibility of another write sneaking in the door
while we're preparing to write, and avoids us needing to upgrade locks, etc.
(A short rusqlite sketch appears at the end of this message.)

  After a BEGIN IMMEDIATE, no other database connection will be able to write
  to the database or do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes
  can continue to read from the database, however.

  An exclusive transaction causes EXCLUSIVE locks to be acquired on all
  databases. After a BEGIN EXCLUSIVE, no other database connection except for
  read_uncommitted connections will be able to read the database and no other
  connection without exception will be able to write the database until the
  transaction is complete.

* Hacky implementation of atomic multi-tx.

* Hold the last report, returning the InProgress from each operation.

* Rewrite transact in terms of InProgress.

* Test rollback.

* Remove unused imports.

* Don't use Rc for transaction reports.

* Pre: break out USER0 as a part boundary constant.

* Export TX0 and USER0 from mentat_db. This is for testing.

* Review comments: commenting.

* Test tempid allocation and rollback.
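
For reference, a short sketch of requesting the IMMEDIATE behavior described
above through rusqlite; the table and statement here are placeholders, not
Mentat's schema or code:

```rust
use rusqlite::{params, Connection, Result, TransactionBehavior};

fn write_in_immediate_tx(conn: &mut Connection) -> Result<()> {
    // BEGIN IMMEDIATE: other connections can still read, but no other writer
    // (or IMMEDIATE/EXCLUSIVE transaction) can begin until this one finishes.
    let tx = conn.transaction_with_behavior(TransactionBehavior::Immediate)?;
    tx.execute(
        "INSERT INTO example (e, a, v) VALUES (?1, ?2, ?3)",
        params![1, 2, "value"],
    )?;
    tx.commit()
}
```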
2017-12-05 07:58:24 -08:00
Richard Newman df90c366af
Partial work from simple aggregates work (#497) r=nalexander
* Pre: make FindQuery, FindSpec, and Element non-Clone.
* Pre: make query translator return a Result.
* Pre: make projection return a Result.
* Pre: refactor query parser in preparation for parsing aggregates.
* Pre: rename PredicateFn -> QueryFunction.
* Pre: expose more about bound variables from CC.
* Pre: move ValueTypeSet to core.
2017-11-30 15:02:07 -08:00
Richard Newman 9a12ced317 Don't allow callers to specify arbitrary new entity IDs. (#447) r=nalexander
This commit adds a check to the partition map that a provided entity ID
has been mentioned (i.e., is present in the start:index range of one of
our partitions).

We introduce a newtype for known entity IDs, using this internally in
the tx expander to track user-provided entids that have passed the above
check (and IDs that we allocate as part of tempid processing). This
newtype is stripped prior to tx assertion.
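
A hedged sketch of that check, with assumed field names rather than Mentat's
actual `Partition` definition:

```rust
/// An entity ID that has passed the partition check (or was allocated by us).
pub struct KnownEntid(pub i64);

pub struct Partition {
    pub start: i64, // first entid belonging to the partition
    pub index: i64, // next entid to be allocated, i.e. one past the end
}

/// Only wrap a caller-provided entid as "known" if some partition has
/// already allocated it.
pub fn check_mentioned(partitions: &[Partition], e: i64) -> Option<KnownEntid> {
    if partitions.iter().any(|p| p.start <= e && e < p.index) {
        Some(KnownEntid(e))
    } else {
        None
    }
}
```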

In order that DB tests can continue to write

  [:db/add 111 :foo/bar 222]

we add an additional fake partition to our test connections, ranging
from 100 to 1000.
2017-06-09 15:45:26 -07:00
Nick Alexander d1ac752de6 Parse without copying; parse keyword maps using macros.
This is a big commit, but it breaks into two conceptual pieces.  The
first is to "parse without copying".  We replace a stream of an owned
collection of edn::ValueAndSpan and instead have a stream of a
borrowed collection of &edn::ValueAndSpan references.  (Generally,
this is represented as an iterator over a slice, but it can be over
other things too.)  Cloning such iterators is constant time, which
improves on cloning an owned collection of edn::ValueAndSpan, which is
linear time in the length of the collection and additional time
depending on the complexity of the EDN values.
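
A tiny illustration of that cost difference (plain `String`s standing in for
`edn::ValueAndSpan`):

```rust
/// Cloning a slice iterator copies only its state; cloning the owned
/// collection copies every element.
fn clone_costs(values: &[String]) {
    let iter = values.iter();
    let _cheap = iter.clone();        // constant time
    let _expensive = values.to_vec(); // linear time, plus per-value cost
}
```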

The second conceptual piece is to parse keyword maps using a special
parser and a macro to build the parser implementations.  Before, we
created a new edn::ValueAndSpan::Map to represent a keyword map in
vector form; since we're working with &edn::ValueAndSpan references
now, we can't create an &edn::ValueAndSpan reference with an
appropriate lifetime.  Therefore we generalize the concept of
iteration slightly and turn keyword maps in map form into linear
iterators by flattening the value maps.  This is a potentially
obscuring transformation, so we have to take care to protect against
some failure cases.  (See the comments and the tests in the code.)

After these changes, parsing using `combine` is linear time (and
reasonably fast).
2017-05-18 10:17:13 -07:00
Richard Newman c95ec13ffe Begin moving web server to a separate crate. (#448) r=bgrins
This doesn't yet introduce a working Cargo.toml for 'mentatweb', but it
does allow RLS to build correctly without errors, and it reduces the
core library's dependency space, which is more important in the short
term.
2017-05-10 02:25:59 -07:00
Richard Newman 059b9d1182 Expose mentat_core::{TypedValue,ValueType} and conn::{Conn,Metadata}. (#443) 2017-05-03 15:49:29 -07:00
Richard Newman 523d5ea5f1 Bump dependency versions. r=bgrins. (#441) 2017-05-03 12:53:16 -07:00
Richard Newman bc63744aba Add :limit to queries (#420) r=nalexander
* Pre: put query parts in alphabetical order.
* Pre: rename 'input' to 'query' in translate tests.
* Part 1: parse :limit.
* Part 2: validate and escape variable parameters in SQL.
* Part 3: algebrize and translate limits.
2017-04-19 16:16:19 -07:00
Richard Newman bffefe7e6b Review comments for #418. 2017-04-18 13:50:58 -07:00
Richard Newman 60c082b61e Part 4: pass inputs through algebrizing and execution. (#418)
This also adds a test that an `UnboundVariables` error is raised if a
variable mentioned in the `:in` clause isn't bound.
2017-04-18 13:19:50 -07:00
Richard Newman 8ddbc834ae Pre: take Variables instead of Strings in public API, for now. 2017-04-18 13:19:50 -07:00
Brian Grinstead 99b7e89116 Make struct Conn public. (#419) r=rnewman 2017-04-18 11:04:44 -07:00
Nick Alexander 5369f03464 Improve parsing of nested edn::ValueAndSpan streams. r=rnewman (#393)
* Pre: Expose more in edn.

* Pre: Make it easier to work with ValueAndSpan.

with_spans() is a temporary hack, needed only because I don't care to
parse the bootstrap assertions from text right now.

* Part 1a: Add `value_and_span` for parsing nested `edn::ValueAndSpan` instances.

I wasn't able to abstract over `edn::Value` and `edn::ValueAndSpan`;
there are multiple obstacles.  I chose to roll with
`edn::ValueAndSpan` since it exposes the additional span information
that we will want to form good error messages in the future.

* Part 1b: Add keyword_map() parsing an `edn::Value::Vector` into an `edn::Value::Map`.

* Part 1c: Add `Log`/`.log(...)` for logging parser progress.

This is a terrible hack, but it sure helps to debug complicated nested
parsers.  I don't even know what a principled approach would look
like; since our parser combinators are so frequently expressed in
code, it's hard to imagine a data-driven interpreter that can help
debug things.

* Part 2: Use `value_and_span` apparatus in tx-parser/.

I break an abstraction boundary by returning a value column
`edn::ValueAndSpan` rather than just an `edn::Value`.  That is, the
transaction processor shouldn't care where the `edn::Value` it is
processing arose -- even if we care to track that information, we should
bake it into the `Entity` type.  We do this because we need to
dynamically parse the value column to support nested maps, and parsing
requires a full `edn::ValueAndSpan`.  Alternately, we could cheat and
fake the spans when parsing nested maps, but that's potentially
expensive.

* Part 3: Use `value_and_span` apparatus in query-parser/.

* Part 4: Use `value_and_span` apparatus in root crate.

* Review comment: Make Span and SpanPosition Copy.

* Review comment: nits.

* Review comment: Make `or` be `or_exactly`.

I baked the eof checking directly into the parser, rather than using
the skip and eof parsers.  I also took the time to restore some tests
that were mistakenly commented out.

* Review comment: Extract and use def_matches_* macros.

* Review comment: .map() as late as possible.
2017-04-06 10:06:28 -07:00
Richard Newman a5023c70cb Use Rc for TypedValue, Variable, and query Ident keywords. (#395) r=nalexander
Part 1, core: use Rc for String and Keyword.
Part 2, query: use Rc for Variable.
Part 3, sql: use Rc for args in SQLiteQueryBuilder.
Part 4, query-algebrizer: use Rc.
Part 5, db: use Rc.
Part 6, query-parser: use Rc.
Part 7, query-projector: use Rc.
Part 8, query-translator: use Rc.
Part 9, top level: use Rc.
Part 10: intern Ident and IdentOrKeyword.
2017-04-02 21:38:36 -07:00
Richard Newman 97749833d0 Algebrize and translate numeric constraints. (#306) r=nalexander 2017-03-22 10:19:47 -07:00
Nick Alexander 15b4195a6e Schema alteration. Fixes #294 and #295. (#370) r=rnewman
* Pre: Don't retract :db/ident in test.

Datomic (and eventually Mentat) don't allow to retract :db/ident in
this way, so this runs afoul of future work to support mutating
metadata.

* Pre: s/VALUETYPE/VALUE_TYPE/.

This is consistent with the capitalization (which is "valueType") and
the other identifier.

* Pre: Remove some single quotes from error output.

* Part 1: Make materialized views be uniform [e a v value_type_tag].

This looks ahead to a time when we could support arbitrary
user-defined materialized views.  For now, the "idents" materialized
view is those datoms of the form [e :db/ident :namespaced/keyword] and
the "schema" materialized view is those datoms of the form [e a v]
where a is in a particular set of attributes that will become clear in
the following commits.

This change is not backwards compatible, so I'm removing the open
current (really, v2) test.  It'll be re-instated when we get to
https://github.com/mozilla/mentat/issues/194.

* Pre: Map TypedValue::Ref to TypedValue::Keyword in debug output.

* Part 3: Separate `schema_to_mutate` from the `schema` used to interpret.

This is just to keep track of the expected changes during
bootstrapping.  I want bootstrap metadata mutations to flow through
the same code path as metadata mutations during regular transactions;
by differentiating the schema used for interpretation from the schema
that will be updated I expect to be able to apply bootstrap metadata
mutations to an empty schema and have things like materialized views
created (using the regular code paths).

This commit has been re-ordered for conceptual clarity, but it won't
compile because it references the metadata module.  It's possible to
make it compile -- the functionality is there in the schema module --
but it's not worth the rebasing effort until after review (and
possibly not even then, since we'll squash down to a single commit to
land).

* Part 2: Maintain entids separately from idents.

In order to support historical idents, we need to distinguish the
"current" map from entid -> ident from the "complete historical" map
ident -> entid.  This is what Datomic does; in Datomic, an ident is
never retracted (although it can be replaced).  This approach is an
important part of allowing multiple consumers to share a schema
fragment as it migrates forward.

This fixes a limitation of the Clojure implementation, which did not
handle historical idents across knowledge base close and re-open.

The "entids" materialized view is naturally a slice of the "datoms"
table.  The "idents" materialized view is a slice of the
"transactions" table.  I hope that representing in this way, and
casting the problem in this light, might generalize to future
materialized views.

* Pre: Add DiffSet.

* Part 4: Collect mutations to a `Schema`.

I haven't taken your review comment about consuming AttributeBuilder
during each fluent function.  If you read my response and still want
this, I'm happy to do it in review.

* Part 5: Handle :db/ident and :db.{install,alter}/attribute.

This "loops" the committed datoms out of the SQL store and back
through the metadata (schema, but in future also partition map)
processor.  The metadata processor updates the schema and produces a
report of what changed; that report is then used to update the SQL
store.  That update includes:
- the materialized views ("entids", "idents", and "schema");
- if needed, a subset of the datoms themselves (as flags change).

I've left a TODO for handling attribute retraction in the cases where
it makes sense.  I expect that to be straightforward.

* Review comment: Rename DiffSet to AddRetractAlterSet.

Also adds a little more commentary and a simple test.

* Review comment: Use ToIdent trait.

* Review comment: partially revert "Part 2: Maintain entids separately from idents."

This reverts commit 23a91df9c35e14398f2ddbd1ba25315821e67401.

Following our discussion, this removes the "entids" materialized
view.  The next commit will remove historical idents from the "idents"
materialized view.

* Post: Use custom Either rather than std::result::Result.

This is not necessary, but it was suggested that we might be paying an
overhead creating Err instances while using error_chain.  That seems
not to be the case, but this change shows that we don't actually use
any of the Result helper methods, so there's no reason to overload
Result.  This change might avoid some future confusion, so I'm going
to land it anyway.

Signed-off-by: Nick Alexander <nalexander@mozilla.com>

* Review comment: Don't preserve historical idents.

* Review comment: More prepared statements when updating materialized views.

* Post: Test altering :db/cardinality and :db/unique.

These tests fail due to a Datomic limitation, namely that the marker
flag :db.alter/attribute can only be asserted once for an attribute!
That is, [:db.part/db :db.alter/attribute :attribute] will only be
transacted at most once.  Since older versions of Datomic required the
:db.alter/attribute flag, I can only imagine they either never wrote
:db.alter/attribute to the store, or they handled it specially.  I'll
need to remove the marker flag system from Mentat in order to address
this fundamental limitation.

* Post: Remove some more single quotes from error output.

* Post: Add assert_transact! macro to unwrap safely.

I was finding it very difficult to track unwrapping errors while
making changes, due to an underlying Mac OS X symbolication issue that
makes running tests with RUST_BACKTRACE=1 so slow that they all time
out.

* Post: Don't expect or recognize :db.{install,alter}/attribute.

I had this all working... except we will never see a repeated
`[:db.part/db :db.alter/attribute :attribute]` assertion in the store!
That means my approach would let you alter an attribute at most one
time.  It's not worth hacking around this; it's better to just stop
expecting (and recognizing) the marker flags.  (We have all the data
to distinguish the various cases that we need without the marker
flags.)

This brings Mentat in line with the thrust of newer Datomic versions,
but isn't compatible with Datomic, because (if I understand correctly)
Datomic automatically adds :db.{install,alter}/attribute assertions to
transactions.

I haven't purged the corresponding :db/ident and schema fragments just
yet:
- we might want them back
- we might want them in order to upgrade v1 and v2 databases to the
  new on-disk layout we're fleshing out (v3?).

* Post: Don't make :db/unique :db.unique/* imply :db/index true.

This patch avoids a potential bug with the "schema" materialized view.
If :db/unique :db.unique/value implies :db/index true, then what
happens when you _retract_ :db.unique/value?  I think Datomic defines
this in some way, but I really want the "schema" materialized view to
be a slice of "datoms" and not have these sort of ambiguities and
persistent effects.  Therefore, to ensure that we don't retract a
schema characteristic and accidentally change more than we intended
to, this patch stops having any schema characteristic imply any other
schema characteristic(s).  To achieve that, I added an
Option<Unique::{Value,Identity}> type to Attribute; this helps with
this patch, and also looks ahead to when we allow retracting
:db/unique attributes.

* Post: Allow retracting :db/ident.

* Post: Include more details about invalid schema changes.

The tests use strings, so they hide the chained errors which do in
fact provide more detail.

* Review comment: Fix outdated comment.

* Review comment: s/_SET/_SQL_LIST/.

* Review comment: Use a sub-select for checking cardinality.

This might be faster in practice.

* Review comment: Put `attribute::Unique` into its own namespace.
2017-03-20 13:18:59 -07:00
Richard Newman bf38105fef (#362) Part 4: handle unknown attributes by expanding type codes. r=nalexander
Also, don't run any SQL at all if an algebrized query is known to return no results.
2017-03-08 17:44:27 -08:00
Richard Newman e898df8842 Implement basic query limits. (#361) r=nalexander 2017-03-08 17:41:42 -08:00
Richard Newman 70b112801c Implement projection and querying. (#353) r=nalexander
* Add a failing test for EDN parsing '…'.
* Expose a SQLValueType trait to get value_type_tag values out of a ValueType.
* Add accessors to FindSpec.
* Implement querying.
* Implement rudimentary projection.
* Export mentat_db::new_connection.
* Export symbols from mentat.
* Add rudimentary end-to-end query tests.
2017-03-06 14:40:10 -08:00
Nick Alexander f86b24001f Add top-level Conn. Fixes #296. (#342) r=rnewman
* Add top-level `Conn`. Fixes #296.

This is a little different than the API rnewman and I originally
discussed in https://public.etherpad-mozilla.org/p/db-conn-thoughts.
A few notes:

- I was led to make a `Schema` instance the thing that is shared,
  rather than a `db::DB`.  It's possible that queries will want to
  know the current transaction at some point (to prevent races, or to
  query historical data), but that can be a future consideration.

- The generation number just allows for a cheap comparison.  I don't
  care to handle races to transact just yet; the long term plan might
  be to make embedding applications responsible for avoiding races, or
  we might handle queuing transactions and yielding report futures in
  Mentat itself.

- The sharing of the partition maps is a little more subtle than
  expected.  Partition maps are volatile: a successful Mentat
  transaction always advances the :db.part/tx partition, so it's not
  worth passing references around.  This means that consumers must
  clone in order to maintain just a single clone per transaction.

Clean some cruft.

* Review comments.
2017-03-03 15:03:59 -08:00
Richard Newman d7f323d15d Wire in the start of querying and error_chain at top level. (#349) r=nalexander 2017-02-27 16:17:25 -08:00
Nick Alexander dcd9bcb1ce Extract partial storage abstraction; use error-chain throughout. Fixes #328. r=rnewman (#341)
* Pre: Drop unneeded tx0 from search results.

* Pre: Don't require a schema in some of the DB code.

The idea is to separate the transaction applying code, which is
schema-aware, from the concrete storage code, which is just concerned
with getting bits onto disk.

* Pre: Only reference Schema, not DB, in debug module.

This is part of a larger separation of the volatile PartitionMap,
which is modified every transaction, from the stable Schema, which is
infrequently modified.

* Pre: Fix indentation.

* Extract part of DB to new SchemaTypeChecking trait.

* Extract part of DB to new PartitionMapping trait.

* Pre: Don't expect :db.part/tx partition to advance when tx fails.

This fails right now, because we allocate tx IDs even when we shouldn't.

* Sketch a db interface without DB.

* Add ValueParseError; use error-chain in tx-parser.

This can be simplified when
https://github.com/Marwes/combine/issues/86 makes it to a published
release, but this unblocks us for now.  This converts the `combine`
error type `ParseError<&'a [edn::Value]>` to a type with owned
`Vec<edn::Value>` collections, re-using `edn::Value::Vector` for
making them `Display`.

* Pre: Accept Borrow<Schema> instead of just &Schema in debug module.

This makes it easy to use Rc<Schema> or Arc<Schema> without inserting
&* sigils throughout the code.

* Use error-chain in query-parser.

There are a few things to point out here:

- the fine grained error types have been flattened into one crate-wide
  error type; it's pretty easy to regain the granularity as needed.

- edn::ParseError is automatically lifted to
  mentat_query_parser::errors::Error;

- we use mentat_parser_utils::ValueParser to maintain parsing error
  information from `combine`.

* Patch up top-level.

* Review comment: Only `borrow()` once.
2017-02-24 15:33:48 -08:00
Richard Newman 42f03f55a2 Stub out query algebrizer. 2017-02-15 16:01:22 -08:00
Richard Newman 2e303f4837 Stub out mentat::q_once. (#289) r=nalexander
* Leave a pointer to issue 288.
* Re-export mentat_db::types::DB from mentat_db.
* Parse EDN strings in the query parser.
* Export 'public' API from mentat_query_parser's top level.
* Stub out mentat::q_once.
2017-02-13 10:30:02 -08:00
Richard Newman fcdf759399 Rename parser_utils to mentat_parser_utils, clean up imports. (#234) r=vporof 2017-02-02 08:18:04 -08:00
Nick Alexander 506c83c160 Implement basic logging infrastructure. (#205) r=nalexander,victorporof
Signed-off-by: Paul Lange <palango@gmx.de>
2017-01-26 10:43:48 -08:00
Richard Newman 2592506288 Implement parsing of simple :find expressions. (#196) r=nalexander
* Test the mentat_query directory on Travis.

* Export common types from edn.

This allows you to write

  use edn::{PlainSymbol,Keyword};

instead of

  use edn::symbols::{PlainSymbol,Keyword};

* Add an edn::Value::is_keyword predicate.

* Clean up query, preparing for query-parser.

* Make EDN keywords and symbols take Into<String> arguments.

* Implement parsing of simple :find lists.

* Rustfmt query-parser. Split find and query.

* Review comment: values_to_variables now returns a NotAVariableError on failure.

* Review comment: rename gimme to to_parsed_value.

* Review comment: add comments.
2017-01-25 14:06:19 -08:00
Brian Grinstead 71a30fe69f Add beginning of web server for the serve subcommand (#159) 2017-01-13 11:46:00 -08:00
Richard Newman a152e60040 Read EDN keywords and symbols as rich types. Fixes #154. r=nalexander 2017-01-12 09:09:48 -08:00
Brian Grinstead cd9517e5fd Run cargo fmt. r=me 2017-01-10 10:54:37 -08:00
Brian Grinstead 6d10774fc8 Move the bin to src and take on clap dependency for command line arg parsing. Fixes #150. r=rnewman 2017-01-10 10:53:34 -08:00
Richard Newman daddfd3e0f Add query sub-crate, implementing more of the beginnings of the query language. 2017-01-09 12:31:57 -08:00
Richard Newman 476f04e27b Implement a rudimentary Keyword struct and the beginnings of ident/entid. 2017-01-09 12:31:56 -08:00
Richard Newman 7a4c75ba44 Rename to Project Mentat (src). 2017-01-06 17:20:20 -08:00
Brian Grinstead 8a52015422 Take on rusqlite dependency. Fixes #148. r=rnewman 2017-01-06 10:24:04 -06:00
Brian Grinstead 9b8257a725 Create a new crate for the query parser. Fixes #138. r=rnewman
Starting to work out the project layout for sub-crates.  The crate inside query-parser/ is "datomish-query-parser" and the core code in src/ depends on it.
2016-12-16 18:43:47 -08:00
Brian Grinstead 5ac47fd6ff Add a stub CLI tool and run tests on it. Fixes #136. r=rnewman 2016-12-16 14:26:10 -08:00
Brian Grinstead 973c32ff77 Update test boilerplate for running on travis (#134). r=rnewman
* Include a local and external test.
* Add license blocks.
2016-12-16 11:50:08 -08:00
Richard Newman f8682a65fa Initial Rust commit.
If you want to go fast, go alone. If you want to go far, go together.
2016-12-16 10:39:08 -08:00
Richard Newman cbd278dd7e Remove Clojure and JS application code. 2016-12-16 10:32:23 -08:00
Richard Newman 9cc26616a9 Implement unified setup/bootstrapping, bootstrapping new databases in a single transaction. Fixes #125. 0.3.7. 2016-12-16 10:25:17 -08:00
Richard Newman 8e16bee201 Pass existing idents to datoms->schema-fragment, allowing the 'upgrade' of an existing ident to an attribute. 2016-12-16 10:25:17 -08:00
Richard Newman 103a86f440 Add a :none migration for schema management. Fixes #113. r=grisha
This allows for code to run before and after a schema fragment is
added for the first time.

The anticipated use for this is twofold:

1. To do initial setup, e.g., defining global entities.
2. To 'adopt' unmanaged attributes already defined in the store.

This 'pre' would manually alter or retract attributes so that the
transact of the new schema datoms can complete.

For example, if properties :foo/bar and :foo/baz will be unchanged,
but :noo/zob needs to change from a string to an integer, the :none
pre-function can alter the ident, and the :none post-function can
migrate and clean up.
2016-11-23 17:06:04 -08:00
Richard Newman f5f57da113 Rename datomish.places.import to datomish.places.importer to silence warnings. 2016-11-23 09:02:10 -08:00
Richard Newman 6f006f247d Bump to 0.3.1 to bump a dependency. 2016-11-22 11:40:37 -08:00
Richard Newman 99e7fafd1b Change license to Apache. Fixes #74. 2016-11-22 11:40:37 -08:00
Richard Newman 7df48d0599 Add missing Tufte stub function. 2016-11-17 13:28:54 -08:00