Add type checking and constraint checking to the transactor. (#663, #532, #679)
This should address #663 by re-inserting type checking in the
transactor stack after the entry point used by the term builder.
Before this commit, we were using an SQLite UNIQUE index to assert
that no `[e a]` pair, with `a` a cardinality one attribute, was
asserted more than once. However, that's not in line with Datomic,
which treats transaction inputs as a set and allows a single datom
like `[e a v]` to appear multiple times. It's both awkward and not
particularly efficient to look for _distinct_ repetitions in SQL, so
we accept some runtime cost in order to check for repetitions in the
transactor. This will allow us to address #532, which is really about
whether we treat inputs as sets. A side benefit is that we can
provide more helpful error messages when the transactor does detect
that the input truly violates the cardinality constraints of the
schema.
This commit builds a trie while error checking and collecting final
terms, which should be fairly efficient. It also allows a simpler
expression of input-provided :db/txInstant datoms, which in turn
uncovered a small issue with the transaction watcher, whereby the
watcher would not see non-input-provided :db/txInstant datoms.
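For reference, the trie in question maps a -> e -> v, roughly as
sketched below. The authoritative definitions live in
crate::internal_types; this is a paraphrase, not a guaranteed-exact
copy:

    // BTree* collections keep iteration order, and thus error
    // reporting, deterministic.
    pub(crate) struct AddAndRetract {
        pub(crate) add: BTreeSet<TypedValue>,
        pub(crate) retract: BTreeSet<TypedValue>,
    }

    // (entid, attribute) -> entid -> added and retracted values.
    pub(crate) type AEVTrie<'schema> =
        BTreeMap<(Entid, &'schema Attribute), BTreeMap<Entid, AddAndRetract>>;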
This transition to Datomic-like input-as-set semantics allows us to
address #532. Previously, two tempids that upserted to the same entid
would produce duplicate datoms, and that would have been rejected by
the transactor -- correctly, since we did not allow duplicate datoms
under the input-as-list semantics. With input-as-set semantics,
duplicate datoms are allowed, and that means that we must allow
tempids to be equivalent, i.e., to resolve to the same entid.
To achieve this, we:
- index the set of tempids
- identify tempid indices that share an upsert
- map tempids to a dense set of contiguous integer labels
We use the well-known union-find algorithm, as implemented by
petgraph, to efficiently manage the set of equivalent tempids.
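As a rough sketch of that labeling step, assuming tempids have already
been assigned dense indices 0..n and that `equivalences` holds the
pairs of indices found to upsert to the same entid (both names are
illustrative, not the actual transactor API):

    use petgraph::unionfind::UnionFind;

    fn label_tempids(n: usize, equivalences: &[(usize, usize)]) -> Vec<usize> {
        // Union together every pair of tempid indices that share an upsert.
        let mut uf = UnionFind::<usize>::new(n);
        for &(x, y) in equivalences {
            uf.union(x, y);
        }
        // `into_labeling` maps each index to its set representative;
        // compress the representatives into dense, contiguous labels.
        let mut next = 0;
        let mut label_of_rep = std::collections::BTreeMap::new();
        uf.into_labeling()
            .into_iter()
            .map(|rep| {
                *label_of_rep.entry(rep).or_insert_with(|| {
                    let label = next;
                    next += 1;
                    label
                })
            })
            .collect()
    }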
Along the way, I've fixed and added tests for two small errors in the
transactor. First, don't drop datoms resolved by upsert (#679).
Second, ensure that complex upserts are allocated.
I don't know quite what happened here. The Clojure implementation
correctly kept complex upserts that hadn't resolved as complex
upserts (see
https://github.com/mozilla/mentat/blob/9a9dfb502acf5e4cdb1059d4aac831d7603063c8/src/common/datomish/transact.cljc#L436)
and then allocated complex upserts if they didn't resolve (see
https://github.com/mozilla/mentat/blob/9a9dfb502acf5e4cdb1059d4aac831d7603063c8/src/common/datomish/transact.cljc#L509).
Based on the code comments, I think the Rust implementation must have
incorrectly tried to optimize by handling all complex upserts in at
most a single generation of evolution, and that's just not correct.
We're effectively implementing a topological sort, using very specific
domain knowledge, and it's not true that a node in a topological sort
can be considered only once!

// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.

use std::collections::{BTreeMap, BTreeSet};

use core_traits::{Entid, TypedValue, ValueType};

use db_traits::errors::CardinalityConflict;

use crate::internal_types::AEVTrie;

/// Map from found [e a v] to expected type.
pub(crate) type TypeDisagreements = BTreeMap<(Entid, Entid, TypedValue), ValueType>;

/// Ensure that the given terms type check.
///
/// We try to be maximally helpful by yielding every malformed datom, rather than only the first.
/// In the future, we might change this choice, or allow the consumer to specify the robustness of
/// the type checking desired, since there is a cost to providing helpful diagnostics.
pub(crate) fn type_disagreements<'schema>(aev_trie: &AEVTrie<'schema>) -> TypeDisagreements {
    let mut errors: TypeDisagreements = TypeDisagreements::default();

    for (&(a, attribute), evs) in aev_trie {
        for (&e, ref ars) in evs {
            for v in ars.add.iter().chain(ars.retract.iter()) {
                if attribute.value_type != v.value_type() {
                    errors.insert((e, a, v.clone()), attribute.value_type);
                }
            }
        }
    }

    errors
}
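
// A hedged usage sketch (not code from this crate): the caller can report
// every type disagreement at once, since each key is an offending [e a v]
// triple and each value is the type the schema expected.
//
//     let disagreements = type_disagreements(&aev_trie);
//     for (&(e, a, ref v), &expected) in &disagreements {
//         eprintln!("[{} {} {:?}] has type {:?}, but the schema expects {:?}",
//                   e, a, v, v.value_type(), expected);
//     }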

/// Ensure that the given terms obey the cardinality restrictions of the given schema.
///
/// That is, ensure that any cardinality one attribute is added with at most one distinct value for
/// any specific entity (although that one value may be repeated for the given entity).
/// It is an error to:
///
/// - add two distinct values for the same cardinality one attribute and entity in a single transaction
/// - add and remove the same values for the same attribute and entity in a single transaction
///
/// We try to be maximally helpful by yielding every malformed set of datoms, rather than just the
/// first set, or even the first conflict. In the future, we might change this choice, or allow the
/// consumer to specify the robustness of the cardinality checking desired.
pub(crate) fn cardinality_conflicts<'schema>(
    aev_trie: &AEVTrie<'schema>,
) -> Vec<CardinalityConflict> {
    let mut errors = vec![];

    for (&(a, attribute), evs) in aev_trie {
        for (&e, ref ars) in evs {
            if !attribute.multival && ars.add.len() > 1 {
                let vs = ars.add.clone();
                errors.push(CardinalityConflict::CardinalityOneAddConflict { e, a, vs });
            }

            let vs: BTreeSet<_> = ars.retract.intersection(&ars.add).cloned().collect();
            if !vs.is_empty() {
                errors.push(CardinalityConflict::AddRetractConflict { e, a, vs });
            }
        }
    }

    errors
}
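
// A hedged usage sketch (not code from this crate): an empty vector means the
// input obeys the schema's cardinality constraints; otherwise each conflict
// carries the entity, the attribute, and the full set of clashing values.
//
//     let conflicts = cardinality_conflicts(&aev_trie);
//     if !conflicts.is_empty() {
//         // Surface all conflicts at once rather than failing on the first;
//         // the exact error wrapper is elided here.
//         return Err(/* schema constraint violation wrapping `conflicts` */);
//     }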