Commit graph

28 commits

Greg Burd
4f81c4e15b Attempting to clean up with clippy, rustfmt, etc.
Integrate https://github.com/mozilla/mentat/pull/806
2020-01-31 10:55:45 -05:00
Greg Burd
b2f92b8461 Update to 2018 edition of Rust (1.42). Fix and format code. Update dependencies. Fix tests. 2020-01-16 10:58:21 -05:00
Grisha Kruglov
9381af4289 Pre: Move core/Attribute* to core-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
cebb85a7fe Pre: Move db/errors.rs into db_traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
d0214fad7d Pre: Move core/types.rs into core_traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
a57ba5d79f Pre: Move Entid and KnownEntid into core_traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
9a47d8905f Pre: 'Into' implementation chaining TermWithoutTempIds -> TermWithTempIds -> TermWithTempIdsAndLookupRefs 2018-07-26 17:14:05 -07:00
Nick Alexander
76507623ac Part 4: Prepare EDN Entity type for interning tempids during parsing.
This is all part of moving the entity builder away from building term
instances and toward building entity instances.  One of the nice
things that the existing term interface does is allow consumers to use
lightweight reference counted tempid handles; I don't want to lose
that, so we'll build it into the entity data structures directly.
2018-07-05 11:17:20 -07:00
Nick Alexander
02a163a10f Part 2: Use ValueRc in InternSet.
We haven't observed performance issues using `Arc` instead of `Rc`,
and we want to be able to include things that are interned (including,
soon, `TempId` instances) in errors coming out of the
transactor.  (And `Rc` isn't `Sync`, so it can't be included in errors
directly.)
2018-07-05 11:16:53 -07:00
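A minimal sketch (not Mentat's actual `InternSet`) of the idea in the commit above: hand out interned values as `Arc` handles, which are `Send + Sync` and can therefore travel inside error values, unlike `Rc`.

```rust
use std::collections::HashSet;
use std::hash::Hash;
use std::sync::Arc;

// Hypothetical alias mirroring the commit's intent: Arc rather than Rc, so
// interned handles can be shared across threads and embedded in errors.
type ValueRc<T> = Arc<T>;

#[derive(Default)]
struct InternSet<T: Eq + Hash> {
    inner: HashSet<ValueRc<T>>,
}

impl<T: Eq + Hash> InternSet<T> {
    /// Return a shared handle, reusing an existing allocation when possible.
    fn intern(&mut self, value: T) -> ValueRc<T> {
        let candidate = ValueRc::new(value);
        if let Some(existing) = self.inner.get(&candidate) {
            return existing.clone();
        }
        self.inner.insert(candidate.clone());
        candidate
    }
}

fn main() {
    let mut set = InternSet::default();
    let a = set.intern("db/ident".to_string());
    let b = set.intern("db/ident".to_string());
    assert!(ValueRc::ptr_eq(&a, &b)); // both handles share one allocation
}
```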
Nick Alexander
0e4991fa26 Make db/ use DbErrorKind. 2018-06-27 15:05:43 -07:00
Thom
72a9b302f9
Rename or delete things so that there is only one type named Entid (#768)
* Delete the (apparently unused) EntId

* Rename edn's Entid to EntidOrIdent to avoid confusion with the Entid that's actually an i64

* Fix travis beta bustage (This is actually unrelated to entids, but is a trivial fix nonetheless)
2018-06-26 16:34:18 -07:00
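For context, a hedged sketch of the distinction that rename establishes (the definitions below are assumptions for illustration, not the repository's code):

```rust
// The entid that's "actually an i64": an entity ID as stored in the database.
pub type Entid = i64;

// What edn previously also called `Entid`: something written in transaction
// data that is either a numeric entid or a keyword ident, resolved later.
pub enum EntidOrIdent {
    Entid(i64),
    Ident(String), // simplified stand-in for a keyword such as :db/ident
}
```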
Grisha Kruglov
31de5be64f Convert db/ to failure. 2018-06-20 14:42:34 -07:00
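The two db/ error-handling commits above (switching to `failure`, then to a dedicated `DbErrorKind`) suggest a shape like the following hedged sketch; the variants here are invented for illustration.

```rust
use failure::Fail;

// Hypothetical error kind for db/; the real DbErrorKind has different variants.
#[derive(Debug, Fail)]
pub enum DbErrorKind {
    #[fail(display = "unknown attribute: {}", _0)]
    UnknownAttribute(i64),

    #[fail(display = "schema constraint violation: {}", _0)]
    SchemaConstraintViolation(String),
}
```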
Nick Alexander
e68cc4016c Part 7: Remove tx entirely.
This was left over from #681.
2018-06-04 15:04:39 -07:00
Richard Newman
b2e98f44f6
Generalize Entity by value type. (#701) (#691) r=rnewman
* Part 3: Parameterize Entity by value type.

This isn't quite right, because after parsing we shouldn't care
about `edn::ValueAndSpan`; we should care only about `edn::Value`.
However, I think we can drop `ValueAndSpan` entirely if we just use
`rust-peg` (and its simpler error messages) rather than a mix of
`rust-peg` and `combine`.

In any case, this paves the way to transacting `Entity<TypedValue>`,
which is a nice step towards building general entities.

* Part 1: Add AttributePlace.

* Part 2: Name other places EntityPlace and ValuePlace.

Now we're consistent and closer to self-documenting.  Both matter more
as we expose `Entity` as the thing to build for programmatic usage.

* Part 4: Allow Ident and TempId in ValuePlace.

The parser will never produce these, since determining whether an
integer/keyword or string is an ident or a tempid, respectively, in
the value place requires the schema.

But a builder that produces `Entity` instances directly will want to
produce these.
2018-05-15 00:43:07 -07:00
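A rough sketch of what a value-type-parameterized `Entity` with named places might look like; the names follow the commit message, but the definitions below are assumptions rather than the repository's actual types.

```rust
// Simplified stand-ins for the real supporting types.
pub struct TempId(pub String);
pub struct Keyword(pub String);
pub enum OpType { Add, Retract }

pub enum AttributePlace {
    Entid(i64),
    Ident(Keyword),
}

pub enum EntityPlace<V> {
    Entid(i64),
    TempId(TempId),
    LookupRef { a: AttributePlace, v: V },
}

pub enum ValuePlace<V> {
    Entid(i64),
    // The parser never produces Ident or TempId here, since telling them
    // apart from plain keywords/strings requires the schema; a programmatic
    // builder can produce them directly.
    Ident(Keyword),
    TempId(TempId),
    Atom(V),
}

// Parameterizing by V lets the same shape carry edn values after parsing
// and typed values produced by a builder.
pub enum Entity<V> {
    AddOrRetract {
        op: OpType,
        e: EntityPlace<V>,
        a: AttributePlace,
        v: ValuePlace<V>,
    },
}
```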
Nick Alexander
46c2a0801f Add type checking and constraint checking to the transactor. (#663, #532, #679)
This should address #663, by re-inserting type checking in the
transactor stack after the entry point used by the term builder.

Before this commit, we were using an SQLite UNIQUE index to assert
that no `[e a]` pair, with `a` a cardinality one attribute, was
asserted more than once.  However, that's not in line with Datomic,
which treats transaction inputs as a set and allows a single datom
like `[e a v]` to appear multiple times.  It's both awkward and not
particularly efficient to look for _distinct_ repetitions in SQL, so
we accept some runtime cost in order to check for repetitions in the
transactor.  This will allow us to address #532, which is really about
whether we treat inputs as sets.  A side benefit is that we can
provide more helpful error messages when the transactor does detect
that the input truly violates the cardinality constraints of the
schema.

This commit builds a trie while error checking and collecting final
terms, which should be fairly efficient.  It also allows a simpler
expression of input-provided :db/txInstant datoms, which in turn
uncovered a small issue with the transaction watcher, whereby the
watcher would not see non-input-provided :db/txInstant datoms.

This transition to Datomic-like input-as-set semantics allows us to
address #532.  Previously, two tempids that upserted to the same entid
would produce duplicate datoms, and that would have been rejected by
the transactor -- correctly, since we did not allow duplicate datoms
under the input-as-list semantics.  With input-as-set semantics,
duplicate datoms are allowed; and that means that we must allow
tempids to be equivalent, i.e., to resolve to the same entid.

To achieve this, we:
- index the set of tempids
- identify tempid indices that share an upsert
- map tempids to a dense set of contiguous integer labels

We use the well-known union-find algorithm, as implemented by
petgraph, to efficiently manage the set of equivalent tempids.

Along the way, I've fixed and added tests for two small errors in the
transactor.  First, don't drop datoms resolved by upsert (#679).
Second, ensure that complex upserts are allocated.

I don't know quite what happened here.  The Clojure implementation
correctly kept complex upserts that hadn't resolved as complex
upserts (see
9a9dfb502a/src/common/datomish/transact.cljc (L436))
and then allocated complex upserts if they didn't resolve (see
9a9dfb502a/src/common/datomish/transact.cljc (L509)).

Based on the code comments, I think the Rust implementation must have
incorrectly tried to optimize by handling all complex upserts in at
most a single generation of evolution, and that's just not correct.
We're effectively implementing a topological sort, using very specific
domain knowledge, and it's not true that a node in a topological sort
can be considered only once!
2018-05-14 15:22:45 -07:00
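A minimal sketch (not the transactor's code) of the union-find step described above, using `petgraph::unionfind::UnionFind` over dense integer labels for tempids:

```rust
use std::collections::HashMap;

use petgraph::unionfind::UnionFind;

fn main() {
    // Map each tempid to a dense integer label.
    let tempids = ["a", "b", "c", "d"];
    let index: HashMap<&str, usize> =
        tempids.iter().enumerate().map(|(i, &t)| (t, i)).collect();

    // Pairs of tempids that upserted to the same entid, so they are equivalent.
    let shared_upserts = [("a", "b"), ("b", "c")];

    let mut uf = UnionFind::<usize>::new(tempids.len());
    for &(x, y) in &shared_upserts {
        uf.union(index[x], index[y]);
    }

    // "a", "b", and "c" now share one representative; "d" stands alone.
    assert!(uf.equiv(index["a"], index["c"]));
    assert!(!uf.equiv(index["a"], index["d"]));
}
```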
Richard Newman
3dc68bcd38 Combine NamespacedKeyword and Keyword. (#689) r=nalexander
* Make properties on NamespacedKeyword/NamespacedSymbol private

* Use only a single String for NamespacedKeyword/NamespacedSymbol

* Review comments.

* Remove unsafe code in namespaced_name.

Benchmarking shows approximately zero change.

* Allow the types of ns and name to differ when constructing a NamespacedName.

* Make symbol namespaces optional.

* Normalize names of keyword/symbol constructors.

This will make the subsequent refactor much less painful.

* Use expect not unwrap.

* Merge Keyword and NamespacedKeyword.
2018-05-11 09:52:17 -07:00
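One plausible shape for the merged keyword type (a sketch based on the commit's "only a single String" description, not the actual definition): keep the whole `ns/name` content in one allocation and remember where the namespace ends.

```rust
// Sketch: a keyword backed by a single String, with an optional namespace
// marked by the byte index of the '/' separator (0 means no namespace).
pub struct Keyword {
    components: String, // e.g. "db/ident" or "plain"
    boundary: usize,
}

impl Keyword {
    pub fn plain<T: Into<String>>(name: T) -> Keyword {
        Keyword { components: name.into(), boundary: 0 }
    }

    // The types of ns and name may differ, as in the commit's constructors.
    pub fn namespaced<N: AsRef<str>, T: AsRef<str>>(ns: N, name: T) -> Keyword {
        let ns = ns.as_ref();
        Keyword {
            components: format!("{}/{}", ns, name.as_ref()),
            boundary: ns.len(),
        }
    }

    pub fn namespace(&self) -> Option<&str> {
        if self.boundary > 0 {
            Some(&self.components[..self.boundary])
        } else {
            None
        }
    }

    pub fn name(&self) -> &str {
        if self.boundary > 0 {
            &self.components[self.boundary + 1..]
        } else {
            &self.components
        }
    }
}

fn main() {
    let k = Keyword::namespaced("db", "ident");
    assert_eq!((k.namespace(), k.name()), (Some("db"), "ident"));
    assert_eq!(Keyword::plain("age").namespace(), None);
}
```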
Nick Alexander
cbffe5e545 Use rust-peg for tx parsing.
There are few reasons to do this:

- it's difficult to add symbol interning to combine-based parsers like
  tx-parser -- literally every type changes to reflect the interner,
  and that means every convenience macro we've built needs to change.
  It's trivial to add interning to rust-peg-based parsers.

- combine has rolled forward to 3.2, and I spent a similar amount of
  time investigating how to upgrade tx-parser (to take advantage of
  the new parser! macros in combine that I think are necessary for
  adapting to changing types) as I did just converting to rust-peg.

- it's easy to improve the error messages in rust-peg, whereas I have
  tried twice to improve the nested error messages in combine and am
  stumped.

- it's roughly 4x faster to parse strings directly as opposed to
  edn::ValueAndSpan, and it'll be even better when we intern directly.
2018-05-10 10:24:05 -07:00
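For flavour, a toy rust-peg grammar (assuming the `peg` crate; this is not Mentat's tx grammar) that parses a namespaced keyword directly from a string slice:

```rust
peg::parser! {
    grammar toy() for str {
        rule ident_char() = ['a'..='z' | 'A'..='Z' | '0'..='9' | '-' | '_']

        // Parses ":ns/name" into its two components.
        pub rule keyword() -> (String, String)
            = ":" ns:$(ident_char()+) "/" name:$(ident_char()+)
              { (ns.to_string(), name.to_string()) }
    }
}

fn main() {
    let (ns, name) = toy::keyword(":db/ident").expect("should parse");
    assert_eq!((ns.as_str(), name.as_str()), ("db", "ident"));
}
```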
Nick Alexander
e437944d94 Pre: Don't use tx-parser for destructuring map notation.
This was always a choice, but we've outgrown it: now we want to accept
value types that don't come from EDN and/or tx-parser.
2018-05-10 10:19:54 -07:00
Nick Alexander
4c4af46315 Add TransactableValue abstracting value places that can be transacted.
This is a stepping stone to transacting entities that are not based on
`edn::ValueAndSpan`.  We need to turn some value places (general) into
entity places (restricted), and those restrictions are captured in
tx-parser right now.  But for `TypedValue` value places, those
restrictions are encoded in the type itself.  This lays the track to
accept other value types in value places, which is good for
programmatic builder interfaces.
2018-05-10 10:19:54 -07:00
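A hedged sketch of the kind of trait this describes (the real `TransactableValue` is richer): anything that can say how it behaves in a general value place, and in the more restricted entity place, can be transacted.

```rust
// Simplified stand-ins for the real types.
pub enum TypedValue { Long(i64), Str(String) }
pub enum EntityPlace { Entid(i64), TempId(String) }

pub trait TransactableValue: Clone {
    /// Interpret this value in a (general) value place.
    fn into_typed_value(self) -> Result<TypedValue, String>;
    /// Interpret this value in a (restricted) entity place.
    fn into_entity_place(self) -> Result<EntityPlace, String>;
}

// Typed values already carry the restrictions in their type.
impl TransactableValue for i64 {
    fn into_typed_value(self) -> Result<TypedValue, String> {
        Ok(TypedValue::Long(self))
    }
    fn into_entity_place(self) -> Result<EntityPlace, String> {
        Ok(EntityPlace::Entid(self))
    }
}
```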
Richard Newman
1817ce7c0b Performance and cleanup. r=emily
* Use fixed-size arrays for bootstrap datoms, not vecs.
* Wide-ranging cleanup.

    This commit:
    - Deletes some dead code.
    - Marks some functions only used by tests as cfg(test).
    - Adds pub(crate) to a bunch of functions.
    - Cleans up a few other nits.
2018-03-06 09:03:00 -08:00
Richard Newman
4acc6d0658 InProgressRead, KnownEntid. r=nalexander,emily
Improve naming of read-only transactions.
    Implement entid_for_type.
    Simplify get_attribute.
    Name ignored var in algebrizer.
    Comment attribute_for_ident.
    Make KnownEntid a core concept.
    Expose lookup_value_for_attribute.
    Implement HasSchema and a new query encapsulation on Conn.
    Pre: export Queryable.
2018-01-23 08:40:18 -08:00
Richard Newman
c2ec1a6bdf Pre: move Either to mentat_core::util. 2017-06-15 10:28:02 -07:00
Richard Newman
9a12ced317 Don't allow callers to specify arbitrary new entity IDs. (#447) r=nalexander
This commit adds a check to the partition map that a provided entity ID
has been mentioned (i.e., is present in the start:index range of one of
our partitions).

We introduce a newtype for known entity IDs, using this internally in
the tx expander to track user-provided entids that have passed the above
check (and IDs that we allocate as part of tempid processing). This
newtype is stripped prior to tx assertion.

In order that DB tests can continue to write

  [:db/add 111 :foo/bar 222]

we add an additional fake partition to our test connections, ranging
from 100 to 1000.
2017-06-09 15:45:26 -07:00
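A hypothetical sketch of the check and the newtype described above: an ID supplied by the caller only becomes a `KnownEntid` once it falls inside the allocated range of some partition.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct KnownEntid(i64);

// Simplified partition: `index` is the next unallocated entity ID.
struct Partition { start: i64, index: i64 }

struct PartitionMap(Vec<Partition>);

impl PartitionMap {
    fn contains_entid(&self, e: i64) -> bool {
        self.0.iter().any(|p| e >= p.start && e < p.index)
    }

    fn known(&self, e: i64) -> Option<KnownEntid> {
        self.contains_entid(e).then(|| KnownEntid(e))
    }
}

fn main() {
    // A fake test partition ranging from 100 to 1000, as the commit describes.
    let pm = PartitionMap(vec![Partition { start: 100, index: 1000 }]);
    assert_eq!(pm.known(111), Some(KnownEntid(111)));
    assert_eq!(pm.known(2000), None);
}
```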
Nick Alexander
a4fc04ea86 Pre: Crib map_{left,right} for Either. 2017-06-08 10:30:31 -07:00
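The `map_left`/`map_right` helpers are small; here is a sketch over a hand-rolled `Either`, assumed to match the two-variant enum these commits refer to.

```rust
enum Either<L, R> {
    Left(L),
    Right(R),
}

impl<L, R> Either<L, R> {
    // Apply f to the Left value, leaving Right untouched.
    fn map_left<F, T>(self, f: F) -> Either<T, R>
    where
        F: FnOnce(L) -> T,
    {
        match self {
            Either::Left(l) => Either::Left(f(l)),
            Either::Right(r) => Either::Right(r),
        }
    }

    // Apply f to the Right value, leaving Left untouched.
    fn map_right<F, T>(self, f: F) -> Either<L, T>
    where
        F: FnOnce(R) -> T,
    {
        match self {
            Either::Left(l) => Either::Left(l),
            Either::Right(r) => Either::Right(f(r)),
        }
    }
}

fn main() {
    let x: Either<i32, &str> = Either::Left(2);
    match x.map_left(|n| n * 10) {
        Either::Left(n) => assert_eq!(n, 20),
        Either::Right(_) => unreachable!(),
    }
}
```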
Nick Alexander
4b874deae1 Lookup refs, nested vector values, map notation. Fixes #180, fixes #183, fixes #284. (#382) r=rnewman
* Pre: Fix error in parser macros.

* Pre: Make test unwrapping more verbose.

* Pre: Make lookup refs be (lookup-ref a v) in the entity position.

This has the advantage of being explicit in all situations and
unambiguous at parse time.  This choice agrees with the Clojure
implementation but not with Datomic.  Datomic treats [a v] as a lookup
ref; that form is ambiguous at parse time and is disambiguated at
transaction time in ways I do not understand.  We mooted making lookup
refs [[a v]] and outlawing nested value vectors in transactions, but
after implementing that approach I decided it was better to handle
lookup refs at parse time, so outlawing nested value vectors is not
necessary.

* Handle lookup refs in the entity and value columns. Fixes #183.

* Pre 0a: Use a stack instead of into_iter.

* Pre 0b: Dedent.

* Pre 0c: Handle `e` after `v`.

This allows us to use the original `e` while handling `v`.

* Explode value lists for :db.cardinality/many attributes. Fixes #284.

* Parse and accept map notation. Fixes #180.

* Pre: Modernize add() and retract() into one add_or_retract().

* Pre: Add is_collection and is_atom to edn::Value.

* Pre: Differentiate atoms from lookup-refs in value position.

Initially, I expected to accept arbitrary edn::Value instances in the
value position, and to differentiate in the transactor.  However, the
implementation quickly became a two-stage parser, since we always
wanted to parse the resulting value position into some other known
thing using the tx-parser.  To save calls into the parser and to allow
the parser to move forward with a smaller API surface, I push as much
of this parsing as possible into the initial parse.

* Pre: Modernize entities().

* Pre: Quote edn::Value::Text in Display.

* Review comment: Add and use edn::Value::into_atom.

* Review comment: Use skip(eof()) throughout.

* Review comment: VecDeque instead of Vec.

* Review comment: Part 0: Rename TempId to TempIdHandle.

* Review comment: Part 1: Differentiate internal and external tempids.

This breaks an abstraction boundary by pushing the Internal/External
split up to the Entity level in tx/ and tx-parser/.  This just makes
it easier to explode Entity map notation instances into Entity
instances, taking an existing External tempid :db/id or generating a
new Internal tempid as appropriate.  To do this without breaking the
abstraction boundary would require adding flexibility to the
transaction processor: we'd need to be able to turn Entity instances
into some internal enum and handle the two cases independently.  It
wouldn't be too hard, but this reduces the combinatorial type
explosion.
2017-03-27 16:30:04 -07:00
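As a small illustration of the "explode value lists" step (a sketch, not the transactor's code): a cardinality-many assertion `[e a [v1 v2]]` becomes one datom-shaped assertion per value.

```rust
#[derive(Debug, PartialEq)]
struct Assertion { e: i64, a: i64, v: String }

// Explode a nested value vector for a :db.cardinality/many attribute.
fn explode_many(e: i64, a: i64, vs: Vec<String>) -> Vec<Assertion> {
    vs.into_iter().map(|v| Assertion { e, a, v }).collect()
}

fn main() {
    let out = explode_many(100, 65, vec!["x".into(), "y".into()]);
    assert_eq!(out.len(), 2);
    assert_eq!(out[1], Assertion { e: 100, a: 65, v: "y".into() });
}
```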
Nick Alexander
15b4195a6e Schema alteration. Fixes #294 and #295. (#370) r=rnewman
* Pre: Don't retract :db/ident in test.

Datomic (and eventually Mentat) doesn't allow retracting :db/ident in
this way, so this runs afoul of future work to support mutating
metadata.

* Pre: s/VALUETYPE/VALUE_TYPE/.

This is consistent with the capitalization (which is "valueType") and
the other identifier.

* Pre: Remove some single quotes from error output.

* Part 1: Make materialized views be uniform [e a v value_type_tag].

This looks ahead to a time when we could support arbitrary
user-defined materialized views.  For now, the "idents" materialized
view is those datoms of the form [e :db/ident :namespaced/keyword] and
the "schema" materialized view is those datoms of the form [e a v]
where a is in a particular set of attributes that will become clear in
the following commits.

This change is not backwards compatible, so I'm removing the open
current (really, v2) test.  It'll be reinstated when we get to
https://github.com/mozilla/mentat/issues/194.

* Pre: Map TypedValue::Ref to TypedValue::Keyword in debug output.

* Part 3: Separate `schema_to_mutate` from the `schema` used to interpret.

This is just to keep track of the expected changes during
bootstrapping.  I want bootstrap metadata mutations to flow through
the same code path as metadata mutations during regular transactions;
by differentiating the schema used for interpretation from the schema
that will be updated I expect to be able to apply bootstrap metadata
mutations to an empty schema and have things like materialized views
created (using the regular code paths).

This commit has been re-ordered for conceptual clarity, but it won't
compile because it references the metadata module.  It's possible to
make it compile -- the functionality is there in the schema module --
but it's not worth the rebasing effort until after review (and
possibly not even then, since we'll squash down to a single commit to
land).

* Part 2: Maintain entids separately from idents.

In order to support historical idents, we need to distinguish the
"current" map from entid -> ident from the "complete historical" map
ident -> entid.  This is what Datomic does; in Datomic, an ident is
never retracted (although it can be replaced).  This approach is an
important part of allowing multiple consumers to share a schema
fragment as it migrates forward.

This fixes a limitation of the Clojure implementation, which did not
handle historical idents across knowledge base close and re-open.

The "entids" materialized view is naturally a slice of the "datoms"
table.  The "idents" materialized view is a slice of the
"transactions" table.  I hope that representing in this way, and
casting the problem in this light, might generalize to future
materialized views.

* Pre: Add DiffSet.

* Part 4: Collect mutations to a `Schema`.

I haven't taken your review comment about consuming AttributeBuilder
during each fluent function.  If you read my response and still want
this, I'm happy to do it in review.

* Part 5: Handle :db/ident and :db.{install,alter}/attribute.

This "loops" the committed datoms out of the SQL store and back
through the metadata (schema, but in future also partition map)
processor.  The metadata processor updates the schema and produces a
report of what changed; that report is then used to update the SQL
store.  That update includes:
- the materialized views ("entids", "idents", and "schema");
- if needed, a subset of the datoms themselves (as flags change).

I've left a TODO for handling attribute retraction in the cases that
it makes sense.  I expect that to be straightforward.

* Review comment: Rename DiffSet to AddRetractAlterSet.

Also adds a little more commentary and a simple test.

* Review comment: Use ToIdent trait.

* Review comment: partially revert "Part 2: Maintain entids separately from idents."

This reverts commit 23a91df9c35e14398f2ddbd1ba25315821e67401.

Following our discussion, this removes the "entids" materialized
view.  The next commit will remove historical idents from the "idents"
materialized view.

* Post: Use custom Either rather than std::result::Result.

This is not necessary, but it was suggested that we might be paying an
overhead creating Err instances while using error_chain.  That seems
not to be the case, but this change shows that we don't actually use
any of the Result helper methods, so there's no reason to overload
Result.  This change might avoid some future confusion, so I'm going
to land it anyway.

Signed-off-by: Nick Alexander <nalexander@mozilla.com>

* Review comment: Don't preserve historical idents.

* Review comment: More prepared statements when updating materialized views.

* Post: Test altering :db/cardinality and :db/unique.

These tests fail due to a Datomic limitation, namely that the marker
flag :db.alter/attribute can only be asserted once for an attribute!
That is, [:db.part/db :db.alter/attribute :attribute] will only be
transacted at most once.  Since older versions of Datomic required the
:db.alter/attribute flag, I can only imagine they either never wrote
:db.alter/attribute to the store, or they handled it specially.  I'll
need to remove the marker flag system from Mentat in order to address
this fundamental limitation.

* Post: Remove some more single quotes from error output.

* Post: Add assert_transact! macro to unwrap safely.

I was finding it very difficult to track unwrapping errors while
making changes, due to an underlying Mac OS X symbolication issue that
makes running tests with RUST_BACKTRACE=1 so slow that they all time
out.

* Post: Don't expect or recognize :db.{install,alter}/attribute.

I had this all working... except we will never see a repeated
`[:db.part/db :db.alter/attribute :attribute]` assertion in the store!
That means my approach would let you alter an attribute at most one
time.  It's not worth hacking around this; it's better to just stop
expecting (and recognizing) the marker flags.  (We have all the data
to distinguish the various cases that we need without the marker
flags.)

This brings Mentat in line with the thrust of newer Datomic versions,
but isn't compatible with Datomic, because (if I understand correctly)
Datomic automatically adds :db.{install,alter}/attribute assertions to
transactions.

I haven't purged the corresponding :db/ident and schema fragments just
yet:
- we might want them back
- we might want them in order to upgrade v1 and v2 databases to the
  new on-disk layout we're fleshing out (v3?).

* Post: Don't make :db/unique :db.unique/* imply :db/index true.

This patch avoids a potential bug with the "schema" materialized view.
If :db/unique :db.unique/value implies :db/index true, then what
happens when you _retract_ :db.unique/value?  I think Datomic defines
this in some way, but I really want the "schema" materialized view to
be a slice of "datoms" and not have these sort of ambiguities and
persistent effects.  Therefore, to ensure that we don't retract a
schema characteristic and accidentally change more than we intended
to, this patch stops having any schema characteristic imply any other
schema characteristic(s).  To achieve that, I added an
Option<Unique::{Value,Identity}> type to Attribute; this helps with
this patch, and also looks ahead to when we allow retracting
:db/unique attributes.

* Post: Allow retracting :db/ident.

* Post: Include more details about invalid schema changes.

The tests use strings, so they hide the chained errors which do in
fact provide more detail.

* Review comment: Fix outdated comment.

* Review comment: s/_SET/_SQL_LIST/.

* Review comment: Use a sub-select for checking cardinality.

This might be faster in practice.

* Review comment: Put `attribute::Unique` into its own namespace.
2017-03-20 13:18:59 -07:00
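A sketch of the attribute shape the last few bullets describe (field names are illustrative, not the repository's exact definition): uniqueness becomes an `Option`, and no characteristic implies another.

```rust
pub enum Unique { Value, Identity }

pub enum ValueType { Ref, Keyword, Long, String, Instant }

pub struct Attribute {
    pub value_type: ValueType,
    pub multival: bool,          // :db/cardinality :db.cardinality/many
    pub unique: Option<Unique>,  // None = not unique; no longer implies `index`
    pub index: bool,             // :db/index, set explicitly
}
```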
Joe Walker
40bca2df6d Remove most uses of use foo::* 2017-02-23 14:09:54 +00:00
Nick Alexander
16e9740d8a Implement upsert resolution algorithm. (#186, #283). r=rnewman, f=jsantell
* Pre: Implement batch [a v] pair lookup.

* Pre: Add InternSet for sharing ref-counted handles to large values.

* Pre: Derive more for Entity.

* Pre: Return DB from creating; return TxReport from transact.

I explicitly am not supporting opening existing databases yet, let
alone upgrading databases from earlier versions.  That can follow fast
once basic transactions are supported.

* Pre: Parse string temporary ID entities; remove ValueOrLookupRef.

This adds TempId entities, but we can't disambiguate String temporary
IDs from values without the use of the schema, so there's no new value
branch.  Similarly, we can't disambiguate lookup-ref values from two
element list values without a schema, so we remove this entirely.
We'll handle the ambiguity later in the transactor.

* Persist partitions to SQL store; allocate transaction ID. (#186)

* Post: Test upserting with vectors.

This converts an existing test to EDN:
84a80f40f5/test/datomish/db_test.cljc (L193).

* Implement tempid upsert resolution algorithm. (#184)

* Post: Separate Tx out of DB.

This is very preliminary, since we don't have a real connection type
to manage transactions and their metadata yet.

* Post: Comment on implementation choices in the transactor.

* Review comment: Put long use lists on separate lines.

* Review comment: Accept String: Borrow<S> instead of just String.

* Review comment: Address nits.
2017-02-14 16:50:40 -08:00
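A minimal sketch (not the actual algorithm) of the batch `[a v]` lookup that drives one round of upsert resolution: a tempid asserted with a unique-identity `[a v]` pair resolves to the existing entid when the store already contains that pair.

```rust
use std::collections::HashMap;

// (tempid, a, v) assertions against an index of existing [a v] -> e datoms.
fn resolve_tempids(
    av_index: &HashMap<(i64, String), i64>,
    pending: &[(String, i64, String)],
) -> HashMap<String, i64> {
    let mut resolved = HashMap::new();
    for (tempid, a, v) in pending {
        if let Some(&e) = av_index.get(&(*a, v.clone())) {
            resolved.insert(tempid.clone(), e);
        }
    }
    resolved
}

fn main() {
    let mut av = HashMap::new();
    av.insert((10, "alice@example.com".to_string()), 200); // existing datom
    let pending = vec![("t1".to_string(), 10, "alice@example.com".to_string())];
    assert_eq!(resolve_tempids(&av, &pending).get("t1"), Some(&200));
}
```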