Lookup refs, nested vector values, map notation. Fixes #180, fixes #183, fixes #284. (#382) r=rnewman

* Pre: Fix error in parser macros.

* Pre: Make test unwrapping more verbose.

* Pre: Make lookup refs be (lookup-ref a v) in the entity position.

This has the advantage of being explicit in all situations and
unambiguous at parse time.  This choice agrees with the Clojure
implementation but not with Datomic.  Datomic treats [a v] as a lookup
ref; that form is ambiguous at parse time and is disambiguated at
transaction time in ways I do not understand.  We mooted making
lookup refs [[a v]] and outlawing nested value vectors in
transactions, but after implementing that approach I decided it was
better to handle lookup refs at parse time, which makes outlawing
nested value vectors unnecessary.
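
For illustration (not part of the patch; the :person/* attributes are hypothetical, with :person/email assumed :db/unique and :person/friend assumed :db.type/ref), the chosen notation in the style of the new tests:

    // Mentat spells a lookup ref as (lookup-ref a v); it is recognized at parse time
    // in both the entity and the value position.
    assert_transact!(conn, "[[:db/add (lookup-ref :person/email \"niko@example.com\") :person/nick \"niko\"]
                             [:db/add \"t\" :person/friend (lookup-ref :person/email \"niko@example.com\")]]");
    // Datomic, for comparison, would spell the same lookup ref as the bare vector
    // [:person/email "niko@example.com"], which cannot be told apart from a nested
    // value vector until transaction time.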

* Handle lookup refs in the entity and value columns. Fixes #183.

* Pre 0a: Use a stack instead of into_iter.

* Pre 0b: Dedent.

* Pre 0c: Handle `e` after `v`.

This allows us to use the original `e` while handling `v`.
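
Concretely (an excerpt-style sketch of the new transactor loop below, not self-contained): exploding a nested vector `v` re-uses the original `e`, so `e` must not have been lowered yet.

    // `v` is matched first; only afterwards does the loop lower `e` itself.
    match v {
        entmod::AtomOrLookupRefOrVectorOrMapNotation::Vector(vs) => {
            for vv in vs {
                deque.push_front(Entity::AddOrRetract {
                    op: op.clone(),
                    e: e.clone(),   // the original entity position, still untouched
                    a: entmod::Entid::Entid(a),
                    v: vv,
                });
            }
        },
        _ => { /* atoms, lookup refs, and nested maps are handled here */ },
    }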

* Explode value lists for :db.cardinality/many attributes. Fixes #284.
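
For example, in the style of the new test_explode_value_lists test (conn is a TestConn):

    assert_transact!(conn, "[[:db/add 503 :test/many [4 5 6]]]");
    // ...transacts the same datoms as:
    assert_transact!(conn, "[[:db/add 503 :test/many 4]
                             [:db/add 503 :test/many 5]
                             [:db/add 503 :test/many 6]]");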

* Parse and accept map notation. Fixes #180.
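
For example, in the style of the new test_explode_map_notation test:

    // Map notation; :db/id is optional and may be an entid, an ident, or a tempid.
    assert_transact!(conn, "[{:db/id \"t\" :test/many 3}]");
    // ...is equivalent to the explicit form:
    assert_transact!(conn, "[[:db/add \"t\" :test/many 3]]");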

* Pre: Modernize add() and retract() into one add_or_retract().

* Pre: Add is_collection and is_atom to edn::Value.
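
A minimal usage sketch of the new predicates:

    // is_collection is true for Vector, List, Set, and Map; is_atom is its negation.
    assert!(edn::Value::Integer(1).is_atom());
    assert!(edn::Value::Vector(vec![edn::Value::Integer(1)]).is_collection());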

* Pre: Differentiate atoms from lookup-refs in value position.

Initially, I expected to accept arbitrary edn::Value instances in the
value position, and to differentiate in the transactor.  However, the
implementation quickly became a two-stage parser, since we always
wanted to parse the resulting value position into some other known
thing using the tx-parser.  To save calls into the parser and to allow
the parser to move forward with a smaller API surface, I push as much
of this parsing as possible into the initial parse.
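
Concretely, the value position now parses straight into the sum type added to mentat_tx::entities (quoted from the diff below), so the transactor never re-parses a raw edn::Value:

    pub enum AtomOrLookupRefOrVectorOrMapNotation {
        Atom(Value),
        LookupRef(LookupRef),
        Vector(Vec<AtomOrLookupRefOrVectorOrMapNotation>),
        MapNotation(MapNotation),
    }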

* Pre: Modernize entities().

* Pre: Quote edn::Value::Text in Display.

* Review comment: Add and use edn::Value::into_atom.
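
A minimal sketch of into_atom: it moves an atom out of a Value and is None for collections.

    assert_eq!(edn::Value::Integer(1).into_atom(), Some(edn::Value::Integer(1)));
    assert_eq!(edn::Value::Vector(vec![]).into_atom(), None);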

* Review comment: Use skip(eof()) throughout.
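
The shape of the change, excerpted from the tx-parser diff below:

    // Before: pair the parser with eof() and drop the unit result.
    let mut p = (many(Tx::<&[Value]>::entity()), eof()).map(|(es, _)| es);
    // After: the same effect, expressed directly.
    let mut p = many(Tx::<&[Value]>::entity()).skip(eof());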

* Review comment: VecDeque instead of Vec.
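
A toy, self-contained sketch (not the real Entity type) of the resulting processing order: pop from the front, push exploded children back onto the front, so expansion is depth-first while the overall input order is preserved.

    use std::collections::VecDeque;

    fn main() {
        // "b" pretends to be an entity that explodes into b1 and b2.
        let mut deque: VecDeque<&str> = VecDeque::new();
        deque.extend(vec!["a", "b", "c"]);
        let mut out = Vec::new();
        while let Some(item) = deque.pop_front() {
            if item == "b" {
                deque.push_front("b2");
                deque.push_front("b1");   // push in reverse so b1 is handled first
            } else {
                out.push(item);
            }
        }
        assert_eq!(out, vec!["a", "b1", "b2", "c"]);
    }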

* Review comment: Part 0: Rename TempId to TempIdHandle.

* Review comment: Part 1: Differentiate internal and external tempids.

This breaks an abstraction boundary by pushing the Internal/External
split up to the Entity level in tx/ and tx-parser/.  This just makes
it easier to explode Entity map notation instances into Entity
instances, taking an existing External tempid :db/id or generating a
new Internal tempid as appropriate.  To do this without breaking the
abstraction boundary would require adding flexibility to the
transaction processor: we'd need to be able to turn Entity instances
into some internal enum and handle the two cases independently.  It
wouldn't be too hard, but this reduces the combinatorial type
explosion.
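
The split is the TempId enum added to mentat_tx::entities further down in this diff; only External tempids survive into the TxReport:

    pub enum TempId {
        External(String),   // written by the user, e.g. "t" in {:db/id "t" ...}
        Internal(i64),      // allocated by allocate_mentat_id(); never reported
    }

    // In transact_entities, internal tempids are filtered out before reporting:
    //     let tempids = tempids.into_iter()
    //         .filter_map(|(tempid, e)| tempid.into_external().map(|s| (s, e)))
    //         .collect();
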
Author: Nick Alexander, 2017-03-27 16:30:04 -07:00 (committed by GitHub)
Parent: 88df7b3b33
Commit: 4b874deae1
11 changed files with 793 additions and 207 deletions


@ -320,21 +320,6 @@ pub fn create_current_version(conn: &mut rusqlite::Connection) -> Result<DB> {
// (<? (<update-from-version db v bootstrapper))))))
// */
pub fn update_from_version(conn: &mut rusqlite::Connection, current_version: i32) -> Result<i32> {
if current_version < 0 || CURRENT_VERSION <= current_version {
bail!(ErrorKind::BadSQLiteStoreVersion(current_version))
}
let tx = conn.transaction()?;
// TODO: actually implement upgrade.
set_user_version(&tx, CURRENT_VERSION)?;
let user_version = get_user_version(&tx)?;
// TODO: use the drop semantics to do this automagically?
tx.commit()?;
Ok(user_version)
}
pub fn ensure_current_version(conn: &mut rusqlite::Connection) -> Result<DB> {
if rusqlite::version_number() < MIN_SQLITE_VERSION {
panic!("Mentat requires at least sqlite {}", MIN_SQLITE_VERSION);
@ -1095,7 +1080,9 @@ mod tests {
macro_rules! assert_matches {
( $input: expr, $expected: expr ) => {{
// Failure to parse the expected pattern is a coding error, so we unwrap.
let pattern_value = edn::parse::value($expected.borrow()).unwrap().without_spans();
let pattern_value = edn::parse::value($expected.borrow())
.expect(format!("to be able to parse expected {}", $expected).as_str())
.without_spans();
assert!($input.matches(&pattern_value),
"Expected value:\n{}\nto match pattern:\n{}\n",
$input.to_pretty(120).unwrap(),
@ -1137,8 +1124,8 @@ mod tests {
fn transact<I>(&mut self, transaction: I) -> Result<TxReport> where I: Borrow<str> {
// Failure to parse the transaction is a coding error, so we unwrap.
let assertions = edn::parse::value(transaction.borrow()).unwrap().without_spans();
let entities: Vec<_> = mentat_tx_parser::Tx::parse(&[assertions][..]).unwrap();
let assertions = edn::parse::value(transaction.borrow()).expect(format!("to be able to parse {} into EDN", transaction.borrow()).as_str()).without_spans();
let entities: Vec<_> = mentat_tx_parser::Tx::parse(&[assertions.clone()][..]).expect(format!("to be able to parse {} into entities", assertions).as_str());
let details = {
// The block scopes the borrow of self.sqlite.
@ -1835,4 +1822,240 @@ mod tests {
[301 :test/fulltext 2]
[301 :test/other 3]]");
}
#[test]
fn test_lookup_refs_entity_column() {
let mut conn = TestConn::default();
// Start by installing a few attributes.
assert_transact!(conn, "[[:db/add 111 :db/ident :test/unique_value]
[:db/add 111 :db/valueType :db.type/string]
[:db/add 111 :db/unique :db.unique/value]
[:db/add 111 :db/index true]
[:db/add 222 :db/ident :test/unique_identity]
[:db/add 222 :db/valueType :db.type/long]
[:db/add 222 :db/unique :db.unique/identity]
[:db/add 222 :db/index true]
[:db/add 333 :db/ident :test/not_unique]
[:db/add 333 :db/cardinality :db.cardinality/one]
[:db/add 333 :db/valueType :db.type/keyword]
[:db/add 333 :db/index true]]");
// And a few datoms to match against.
assert_transact!(conn, "[[:db/add 501 :test/unique_value \"test this\"]
[:db/add 502 :test/unique_value \"other\"]
[:db/add 503 :test/unique_identity -10]
[:db/add 504 :test/unique_identity -20]
[:db/add 505 :test/not_unique :test/keyword]
[:db/add 506 :test/not_unique :test/keyword]]");
// We can resolve lookup refs in the entity column, referring to the attribute as an entid or an ident.
assert_transact!(conn, "[[:db/add (lookup-ref :test/unique_value \"test this\") :test/not_unique :test/keyword]
[:db/add (lookup-ref 111 \"other\") :test/not_unique :test/keyword]
[:db/add (lookup-ref :test/unique_identity -10) :test/not_unique :test/keyword]
[:db/add (lookup-ref 222 -20) :test/not_unique :test/keyword]]");
assert_matches!(conn.last_transaction(),
"[[501 :test/not_unique :test/keyword ?tx true]
[502 :test/not_unique :test/keyword ?tx true]
[503 :test/not_unique :test/keyword ?tx true]
[504 :test/not_unique :test/keyword ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
// We cannot resolve lookup refs that aren't :db/unique.
assert_transact!(conn,
"[[:db/add (lookup-ref :test/not_unique :test/keyword) :test/not_unique :test/keyword]]",
Err("not yet implemented: Cannot resolve (lookup-ref 333 :test/keyword) with attribute that is not :db/unique"));
// We type check the lookup ref's value against the lookup ref's attribute.
assert_transact!(conn,
"[[:db/add (lookup-ref :test/unique_value :test/not_a_string) :test/not_unique :test/keyword]]",
Err("EDN value \':test/not_a_string\' is not the expected Mentat value type String"));
// Each lookup ref in the entity column must resolve
assert_transact!(conn,
"[[:db/add (lookup-ref :test/unique_value \"unmatched string value\") :test/not_unique :test/keyword]]",
Err("no entid found for ident: couldn\'t lookup [a v]: (111, String(\"unmatched string value\"))"));
}
#[test]
fn test_lookup_refs_value_column() {
let mut conn = TestConn::default();
// Start by installing a few attributes.
assert_transact!(conn, "[[:db/add 111 :db/ident :test/unique_value]
[:db/add 111 :db/valueType :db.type/string]
[:db/add 111 :db/unique :db.unique/value]
[:db/add 111 :db/index true]
[:db/add 222 :db/ident :test/unique_identity]
[:db/add 222 :db/valueType :db.type/long]
[:db/add 222 :db/unique :db.unique/identity]
[:db/add 222 :db/index true]
[:db/add 333 :db/ident :test/not_unique]
[:db/add 333 :db/cardinality :db.cardinality/one]
[:db/add 333 :db/valueType :db.type/keyword]
[:db/add 333 :db/index true]
[:db/add 444 :db/ident :test/ref]
[:db/add 444 :db/valueType :db.type/ref]
[:db/add 444 :db/unique :db.unique/identity]
[:db/add 444 :db/index true]]");
// And a few datoms to match against.
assert_transact!(conn, "[[:db/add 501 :test/unique_value \"test this\"]
[:db/add 502 :test/unique_value \"other\"]
[:db/add 503 :test/unique_identity -10]
[:db/add 504 :test/unique_identity -20]
[:db/add 505 :test/not_unique :test/keyword]
[:db/add 506 :test/not_unique :test/keyword]]");
// We can resolve lookup refs in the value column, referring to the attribute as an entid or an ident.
assert_transact!(conn, "[[:db/add 601 :test/ref (lookup-ref :test/unique_value \"test this\")]
[:db/add 602 :test/ref (lookup-ref 111 \"other\")]
[:db/add 603 :test/ref (lookup-ref :test/unique_identity -10)]
[:db/add 604 :test/ref (lookup-ref 222 -20)]]");
assert_matches!(conn.last_transaction(),
"[[601 :test/ref 501 ?tx true]
[602 :test/ref 502 ?tx true]
[603 :test/ref 503 ?tx true]
[604 :test/ref 504 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
// We cannot resolve value-column lookup refs for attributes that aren't :db.type/ref.
assert_transact!(conn,
"[[:db/add \"t\" :test/not_unique (lookup-ref :test/unique_value \"test this\")]]",
Err("not yet implemented: Cannot resolve value lookup ref for attribute 333 that is not :db/valueType :db.type/ref"));
// If a value column lookup ref resolves, we can upsert against it. Here, the lookup ref
// resolves to 501, which upserts "t" to 601.
assert_transact!(conn, "[[:db/add \"t\" :test/ref (lookup-ref :test/unique_value \"test this\")]
[:db/add \"t\" :test/not_unique :test/keyword]]");
assert_matches!(conn.last_transaction(),
"[[601 :test/not_unique :test/keyword ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
// Each lookup ref in the value column must resolve
assert_transact!(conn,
"[[:db/add \"t\" :test/ref (lookup-ref :test/unique_value \"unmatched string value\")]]",
Err("no entid found for ident: couldn\'t lookup [a v]: (111, String(\"unmatched string value\"))"));
}
#[test]
fn test_explode_value_lists() {
let mut conn = TestConn::default();
// Start by installing a few attributes.
assert_transact!(conn, "[[:db/add 111 :db/ident :test/many]
[:db/add 111 :db/valueType :db.type/long]
[:db/add 111 :db/cardinality :db.cardinality/many]
[:db/add 222 :db/ident :test/one]
[:db/add 222 :db/valueType :db.type/long]
[:db/add 222 :db/cardinality :db.cardinality/one]]");
// Check that we can explode vectors for :db.cardinality/many attributes.
assert_transact!(conn, "[[:db/add 501 :test/many [1]]
[:db/add 502 :test/many [2 3]]
[:db/add 503 :test/many [4 5 6]]]");
assert_matches!(conn.last_transaction(),
"[[501 :test/many 1 ?tx true]
[502 :test/many 2 ?tx true]
[502 :test/many 3 ?tx true]
[503 :test/many 4 ?tx true]
[503 :test/many 5 ?tx true]
[503 :test/many 6 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
// Check that we can explode nested vectors for :db.cardinality/many attributes.
assert_transact!(conn, "[[:db/add 600 :test/many [1 [2] [[3] [4]] []]]]");
assert_matches!(conn.last_transaction(),
"[[600 :test/many 1 ?tx true]
[600 :test/many 2 ?tx true]
[600 :test/many 3 ?tx true]
[600 :test/many 4 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
// Check that we cannot explode vectors for :db.cardinality/one attributes.
assert_transact!(conn,
"[[:db/add 501 :test/one [1]]]",
Err("not yet implemented: Cannot explode vector value for attribute 222 that is not :db.cardinality :db.cardinality/many"));
assert_transact!(conn,
"[[:db/add 501 :test/one [2 3]]]",
Err("not yet implemented: Cannot explode vector value for attribute 222 that is not :db.cardinality :db.cardinality/many"));
}
#[test]
fn test_explode_map_notation() {
let mut conn = TestConn::default();
// Start by installing a few attributes.
assert_transact!(conn, "[[:db/add 111 :db/ident :test/many]
[:db/add 111 :db/valueType :db.type/long]
[:db/add 111 :db/cardinality :db.cardinality/many]
[:db/add 222 :db/ident :test/component]
[:db/add 222 :db/isComponent true]
[:db/add 222 :db/valueType :db.type/ref]
[:db/add 333 :db/ident :test/unique]
[:db/add 333 :db/unique :db.unique/identity]
[:db/add 333 :db/index true]
[:db/add 333 :db/valueType :db.type/long]
[:db/add 444 :db/ident :test/dangling]
[:db/add 444 :db/valueType :db.type/ref]]");
// Check that we can explode map notation without :db/id.
let report = assert_transact!(conn, "[{:test/many 1}]");
assert_matches!(conn.last_transaction(),
"[[?e :test/many 1 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
assert_matches!(tempids(&report),
"{}");
// Check that we can explode map notation with :db/id, as an entid, ident, and tempid.
let report = assert_transact!(conn, "[{:db/id :db/ident :test/many 1}
{:db/id 500 :test/many 2}
{:db/id \"t\" :test/many 3}]");
assert_matches!(conn.last_transaction(),
"[[1 :test/many 1 ?tx true]
[500 :test/many 2 ?tx true]
[?e :test/many 3 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
assert_matches!(tempids(&report),
"{\"t\" 65537}");
// Check that we can explode map notation with nested vector values.
let report = assert_transact!(conn, "[{:test/many [1 2]}]");
assert_matches!(conn.last_transaction(),
"[[?e :test/many 1 ?tx true]
[?e :test/many 2 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
assert_matches!(tempids(&report),
"{}");
// Check that we can explode map notation with nested maps if the attribute is
// :db/isComponent true.
let report = assert_transact!(conn, "[{:test/component {:test/many 1}}]");
assert_matches!(conn.last_transaction(),
"[[?e :test/component ?f ?tx true]
[?f :test/many 1 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
assert_matches!(tempids(&report),
"{}");
// Check that we can explode map notation with nested maps if the inner map contains a
// :db/unique :db.unique/identity attribute.
let report = assert_transact!(conn, "[{:test/dangling {:test/unique 10}}]");
assert_matches!(conn.last_transaction(),
"[[?e :test/dangling ?f ?tx true]
[?f :test/unique 10 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]");
assert_matches!(tempids(&report),
"{}");
// Verify that we can't explode map notation with nested maps if the inner map would be
// dangling.
assert_transact!(conn,
"[{:test/dangling {:test/many 11}}]",
Err("not yet implemented: Cannot explode nested map value that would lead to dangling entity for attribute 444"));
// Verify that we can explode map notation with nested maps, even if the inner map would be
// dangling, if we give a :db/id explicitly.
assert_transact!(conn, "[{:test/dangling {:db/id \"t\" :test/many 11}}]");
}
}


@ -13,6 +13,7 @@
use edn;
use rusqlite;
use mentat_tx_parser;
use types::{Entid, ValueType};
error_chain! {
@ -24,6 +25,10 @@ error_chain! {
Rusqlite(rusqlite::Error);
}
links {
TxParseError(mentat_tx_parser::Error, mentat_tx_parser::ErrorKind);
}
errors {
/// We're just not done yet. Message that the feature is recognized but not yet
/// implemented.
@ -35,7 +40,7 @@ error_chain! {
/// We've been given an EDN value that isn't the correct Mentat type.
BadEDNValuePair(value: edn::types::Value, value_type: ValueType) {
description("EDN value is not the expected Mentat value type")
display("EDN value '{:?}' is not the expected Mentat value type {:?}", value, value_type)
display("EDN value '{}' is not the expected Mentat value type {:?}", value, value_type)
}
/// We've got corrupt data in the SQL store: a value and value_type_tag don't line up.
@ -44,12 +49,12 @@ error_chain! {
display("bad SQL (value_type_tag, value) pair: ({}, {:?})", value_type_tag, value.data_type())
}
/// The SQLite store user_version isn't recognized. This could be an old version of Mentat
/// trying to open a newer version SQLite store; or it could be a corrupt file; or ...
BadSQLiteStoreVersion(version: i32) {
description("bad SQL store user_version")
display("bad SQL store user_version: {}", version)
}
// /// The SQLite store user_version isn't recognized. This could be an old version of Mentat
// /// trying to open a newer version SQLite store; or it could be a corrupt file; or ...
// BadSQLiteStoreVersion(version: i32) {
// description("bad SQL store user_version")
// display("bad SQL store user_version: {}", version)
// }
/// A bootstrap definition couldn't be parsed or installed. This is a programmer error, not
/// a runtime error.


@ -23,14 +23,17 @@ use types::{
Entid,
TypedValue,
};
use mentat_tx::entities::OpType;
use mentat_tx::entities::{
OpType,
TempId,
};
#[derive(Clone,Debug,Eq,Hash,Ord,PartialOrd,PartialEq)]
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum Term<E, V> {
AddOrRetract(OpType, E, Entid, V),
}
#[derive(Clone,Debug,Eq,Hash,Ord,PartialOrd,PartialEq)]
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum Either<L, R> {
Left(L),
Right(R),
@ -41,8 +44,8 @@ use self::Either::*;
pub type EntidOr<T> = Either<Entid, T>;
pub type TypedValueOr<T> = Either<TypedValue, T>;
pub type TempId = Rc<String>;
pub type TempIdMap = HashMap<TempId, Entid>;
pub type TempIdHandle = Rc<TempId>;
pub type TempIdMap = HashMap<TempIdHandle, Entid>;
pub type LookupRef = Rc<AVPair>;
@ -52,11 +55,11 @@ pub type LookupRef = Rc<AVPair>;
#[derive(Clone,Debug,Eq,Hash,Ord,PartialOrd,PartialEq)]
pub enum LookupRefOrTempId {
LookupRef(LookupRef),
TempId(TempId)
TempId(TempIdHandle)
}
pub type TermWithTempIdsAndLookupRefs = Term<EntidOr<LookupRefOrTempId>, TypedValueOr<LookupRefOrTempId>>;
pub type TermWithTempIds = Term<EntidOr<TempId>, TypedValueOr<TempId>>;
pub type TermWithTempIds = Term<EntidOr<TempIdHandle>, TypedValueOr<TempIdHandle>>;
pub type TermWithoutTempIds = Term<Entid, TypedValue>;
pub type Population = Vec<TermWithTempIds>;
@ -81,7 +84,7 @@ impl TermWithTempIds {
/// The reason for this awkward expression is that we're parameterizing over the _type constructor_
/// (`EntidOr` or `TypedValueOr`), which is not trivial to express in Rust. This only works because
/// they're both the same `Result<...>` type with different parameterizations.
pub fn replace_lookup_ref<T, U>(lookup_map: &AVMap, desired_or: Either<T, LookupRefOrTempId>, lift: U) -> errors::Result<Either<T, TempId>> where U: FnOnce(Entid) -> T {
pub fn replace_lookup_ref<T, U>(lookup_map: &AVMap, desired_or: Either<T, LookupRefOrTempId>, lift: U) -> errors::Result<Either<T, TempIdHandle>> where U: FnOnce(Entid) -> T {
match desired_or {
Left(desired) => Ok(Left(desired)), // N.b., must unwrap here -- the ::Left types are different!
Right(other) => {


@ -49,6 +49,7 @@ use std::borrow::Cow;
use std::collections::{
BTreeMap,
BTreeSet,
VecDeque,
};
use db;
@ -60,8 +61,9 @@ use entids;
use errors::{ErrorKind, Result};
use internal_types::{
Either,
LookupRef,
LookupRefOrTempId,
TempId,
TempIdHandle,
TempIdMap,
Term,
TermWithTempIdsAndLookupRefs,
@ -69,11 +71,17 @@ use internal_types::{
TermWithoutTempIds,
replace_lookup_ref};
use mentat_core::{
attribute,
intern_set,
Schema,
};
use mentat_tx::entities as entmod;
use mentat_tx::entities::{Entity, OpType};
use mentat_tx::entities::{
Entity,
OpType,
TempId,
};
use mentat_tx_parser;
use metadata;
use rusqlite;
use schema::{
@ -146,7 +154,7 @@ impl<'conn, 'a> Tx<'conn, 'a> {
/// Given a collection of tempids and the [a v] pairs that they might upsert to, resolve exactly
/// which [a v] pairs do upsert to entids, and map each tempid that upserts to the upserted
/// entid. The keys of the resulting map are exactly those tempids that upserted.
pub fn resolve_temp_id_avs<'b>(&self, temp_id_avs: &'b [(TempId, AVPair)]) -> Result<TempIdMap> {
pub fn resolve_temp_id_avs<'b>(&self, temp_id_avs: &'b [(TempIdHandle, AVPair)]) -> Result<TempIdMap> {
if temp_id_avs.is_empty() {
return Ok(TempIdMap::default());
}
@ -182,62 +190,206 @@ impl<'conn, 'a> Tx<'conn, 'a> {
///
/// The `Term` instances produced share interned TempId and LookupRef handles, and we return the
/// interned handle sets so that consumers can ensure all handles are used appropriately.
fn entities_into_terms_with_temp_ids_and_lookup_refs<I>(&self, entities: I) -> Result<(Vec<TermWithTempIdsAndLookupRefs>, intern_set::InternSet<String>, intern_set::InternSet<AVPair>)> where I: IntoIterator<Item=Entity> {
let mut temp_ids = intern_set::InternSet::new();
// #183: We don't yet support lookup refs, so this isn't mutable. Later, it'll be mutable.
let lookup_refs = intern_set::InternSet::new();
fn entities_into_terms_with_temp_ids_and_lookup_refs<I>(&self, entities: I) -> Result<(Vec<TermWithTempIdsAndLookupRefs>, intern_set::InternSet<TempId>, intern_set::InternSet<AVPair>)> where I: IntoIterator<Item=Entity> {
let mut temp_ids: intern_set::InternSet<TempId> = intern_set::InternSet::new();
let mut lookup_refs: intern_set::InternSet<AVPair> = intern_set::InternSet::new();
entities.into_iter()
.map(|entity: Entity| -> Result<TermWithTempIdsAndLookupRefs> {
match entity {
Entity::AddOrRetract { op, e, a, v } => {
let a: i64 = match a {
entmod::Entid::Entid(ref a) => *a,
entmod::Entid::Ident(ref a) => self.schema.require_entid(&a)?,
};
let intern_lookup_ref = |lookup_refs: &mut intern_set::InternSet<AVPair>, lookup_ref: entmod::LookupRef| -> Result<LookupRef> {
let lr_a: i64 = match lookup_ref.a {
entmod::Entid::Entid(ref a) => *a,
entmod::Entid::Ident(ref a) => self.schema.require_entid(&a)?,
};
let lr_attribute: &Attribute = self.schema.require_attribute_for_entid(lr_a)?;
let attribute: &Attribute = self.schema.require_attribute_for_entid(a)?;
if lr_attribute.unique.is_none() {
bail!(ErrorKind::NotYetImplemented(format!("Cannot resolve (lookup-ref {} {}) with attribute that is not :db/unique", lr_a, lookup_ref.v)))
}
let e = match e {
entmod::EntidOrLookupRefOrTempId::Entid(e) => {
let e: i64 = match e {
entmod::Entid::Entid(ref e) => *e,
entmod::Entid::Ident(ref e) => self.schema.require_entid(&e)?,
};
Either::Left(e)
},
let lr_typed_value: TypedValue = self.schema.to_typed_value(&lookup_ref.v, &lr_attribute)?;
Ok(lookup_refs.intern((lr_a, lr_typed_value)))
};
entmod::EntidOrLookupRefOrTempId::TempId(e) => {
Either::Right(LookupRefOrTempId::TempId(temp_ids.intern(e)))
},
// We want to handle entities in the order they're given to us, while also "exploding" some
// entities into many. We therefore push the initial entities onto the back of the deque,
// take from the front of the deque, and explode onto the front as well.
let mut deque: VecDeque<Entity> = VecDeque::default();
deque.extend(entities);
entmod::EntidOrLookupRefOrTempId::LookupRef(_) => {
// TODO: reference entity and initial input.
bail!(ErrorKind::NotYetImplemented(format!("Transacting lookup-refs is not yet implemented")))
},
};
// Allocate private internal tempids reserved for Mentat. Internal tempids just need to be
// unique within one transaction; they should never escape a transaction.
let mut mentat_id_count = 0;
let mut allocate_mentat_id = move || {
mentat_id_count += 1;
entmod::EntidOrLookupRefOrTempId::TempId(TempId::Internal(mentat_id_count))
};
let v = {
let mut terms: Vec<TermWithTempIdsAndLookupRefs> = Vec::with_capacity(deque.len());
while let Some(entity) = deque.pop_front() {
match entity {
Entity::MapNotation(mut map_notation) => {
// :db/id is optional; if it's not given, we generate a special internal tempid
// to use for upserting. This tempid will not be reported in the TxReport.
let db_id: entmod::EntidOrLookupRefOrTempId = mentat_tx_parser::remove_db_id(&mut map_notation)?.unwrap_or_else(&mut allocate_mentat_id);
// We're not nested, so :db/isComponent is not relevant. We just explode the
// map notation.
for (a, v) in map_notation {
deque.push_front(Entity::AddOrRetract {
op: OpType::Add,
e: db_id.clone(),
a: a,
v: v,
});
}
},
Entity::AddOrRetract { op, e, a, v } => {
let a: i64 = match a {
entmod::Entid::Entid(ref a) => *a,
entmod::Entid::Ident(ref a) => self.schema.require_entid(&a)?,
};
let attribute: &Attribute = self.schema.require_attribute_for_entid(a)?;
let v = match v {
entmod::AtomOrLookupRefOrVectorOrMapNotation::Atom(v) => {
if attribute.value_type == ValueType::Ref && v.is_text() {
Either::Right(LookupRefOrTempId::TempId(temp_ids.intern(v.as_text().unwrap().clone())))
} else if attribute.value_type == ValueType::Ref && v.is_vector() && v.as_vector().unwrap().len() == 2 {
bail!(ErrorKind::NotYetImplemented(format!("Transacting lookup-refs is not yet implemented")))
Either::Right(LookupRefOrTempId::TempId(temp_ids.intern(v.as_text().cloned().map(TempId::External).unwrap())))
} else {
// Here is where we do schema-aware typechecking: we either assert that
// the given value is in the attribute's value set, or (in limited
// cases) coerce the value into the attribute's value set.
let typed_value: TypedValue = self.schema.to_typed_value(&v, &attribute)?;
Either::Left(typed_value)
}
};
},
Ok(Term::AddOrRetract(op, e, a, v))
},
}
})
.collect::<Result<Vec<_>>>()
.map(|terms| (terms, temp_ids, lookup_refs))
entmod::AtomOrLookupRefOrVectorOrMapNotation::LookupRef(lookup_ref) => {
if attribute.value_type != ValueType::Ref {
bail!(ErrorKind::NotYetImplemented(format!("Cannot resolve value lookup ref for attribute {} that is not :db/valueType :db.type/ref", a)))
}
Either::Right(LookupRefOrTempId::LookupRef(intern_lookup_ref(&mut lookup_refs, lookup_ref)?))
},
entmod::AtomOrLookupRefOrVectorOrMapNotation::Vector(vs) => {
if !attribute.multival {
bail!(ErrorKind::NotYetImplemented(format!("Cannot explode vector value for attribute {} that is not :db.cardinality :db.cardinality/many", a)));
}
for vv in vs {
deque.push_front(Entity::AddOrRetract {
op: op.clone(),
e: e.clone(),
a: entmod::Entid::Entid(a),
v: vv,
});
}
continue
},
entmod::AtomOrLookupRefOrVectorOrMapNotation::MapNotation(mut map_notation) => {
// TODO: consider handling this at the tx-parser level. That would be
// more strict and expressive, but it would lead to splitting
// AddOrRetract, which proliferates types and code, or only handling
// nested maps rather than map values, like Datomic does.
if op != OpType::Add {
bail!(ErrorKind::NotYetImplemented(format!("Cannot explode nested map value in :db/retract for attribute {}", a)));
}
if attribute.value_type != ValueType::Ref {
bail!(ErrorKind::NotYetImplemented(format!("Cannot explode nested map value for attribute {} that is not :db/valueType :db.type/ref", a)))
}
// :db/id is optional; if it's not given, we generate a special internal tempid
// to use for upserting. This tempid will not be reported in the TxReport.
let db_id: Option<entmod::EntidOrLookupRefOrTempId> = mentat_tx_parser::remove_db_id(&mut map_notation)?;
let mut dangling = db_id.is_none();
let db_id: entmod::EntidOrLookupRefOrTempId = db_id.unwrap_or_else(&mut allocate_mentat_id);
// We're nested, so we want to ensure we're not creating "dangling"
// entities that can't be reached. If we're :db/isComponent, then this
// is not dangling. Otherwise, the resulting map needs to have a
// :db/unique :db.unique/identity [a v] pair, so that it's reachable.
// Per http://docs.datomic.com/transactions.html: "Either the reference
// to the nested map must be a component attribute, or the nested map
// must include a unique attribute. This constraint prevents the
// accidental creation of easily-orphaned entities that have no identity
// or relation to other entities."
if attribute.component {
dangling = false;
}
for (inner_a, inner_v) in map_notation {
let inner_entid: i64 = match inner_a {
entmod::Entid::Entid(ref a) => *a,
entmod::Entid::Ident(ref a) => self.schema.require_entid(&a)?,
};
let inner_attribute: &Attribute = self.schema.require_attribute_for_entid(inner_entid)?;
if inner_attribute.unique == Some(attribute::Unique::Identity) {
dangling = false;
}
deque.push_front(Entity::AddOrRetract {
op: OpType::Add,
e: db_id.clone(),
a: entmod::Entid::Entid(inner_entid),
v: inner_v,
});
}
if dangling {
bail!(ErrorKind::NotYetImplemented(format!("Cannot explode nested map value that would lead to dangling entity for attribute {}", a)));
}
// Similar, but not identical, to the expansion of the entity position e
// below. This returns Either::Left(TypedValue) instances; that returns
// Either::Left(i64) instances.
match db_id {
entmod::EntidOrLookupRefOrTempId::Entid(e) => {
let e: i64 = match e {
entmod::Entid::Entid(ref e) => *e,
entmod::Entid::Ident(ref e) => self.schema.require_entid(&e)?,
};
Either::Left(TypedValue::Ref(e))
},
entmod::EntidOrLookupRefOrTempId::TempId(e) => {
Either::Right(LookupRefOrTempId::TempId(temp_ids.intern(e)))
},
entmod::EntidOrLookupRefOrTempId::LookupRef(lookup_ref) => {
Either::Right(LookupRefOrTempId::LookupRef(intern_lookup_ref(&mut lookup_refs, lookup_ref)?))
},
}
},
};
let e = match e {
entmod::EntidOrLookupRefOrTempId::Entid(e) => {
let e: i64 = match e {
entmod::Entid::Entid(ref e) => *e,
entmod::Entid::Ident(ref e) => self.schema.require_entid(&e)?,
};
Either::Left(e)
},
entmod::EntidOrLookupRefOrTempId::TempId(e) => {
Either::Right(LookupRefOrTempId::TempId(temp_ids.intern(e)))
},
entmod::EntidOrLookupRefOrTempId::LookupRef(lookup_ref) => {
Either::Right(LookupRefOrTempId::LookupRef(intern_lookup_ref(&mut lookup_refs, lookup_ref)?))
},
};
terms.push(Term::AddOrRetract(op, e, a, v));
},
}
};
Ok((terms, temp_ids, lookup_refs))
}
/// Pipeline stage 2: rewrite `Term` instances with lookup refs into `Term` instances without
@ -262,7 +414,7 @@ impl<'conn, 'a> Tx<'conn, 'a> {
// TODO: move this to the transactor layer.
pub fn transact_entities<I>(&mut self, entities: I) -> Result<TxReport> where I: IntoIterator<Item=Entity> {
// TODO: push these into an internal transaction report?
let mut tempids: BTreeMap<String, Entid> = BTreeMap::default();
let mut tempids: BTreeMap<TempId, Entid> = BTreeMap::default();
// Pipeline stage 1: entities -> terms with tempids and lookup refs.
let (terms_with_temp_ids_and_lookup_refs, tempid_set, lookup_ref_set) = self.entities_into_terms_with_temp_ids_and_lookup_refs(entities)?;
@ -301,7 +453,7 @@ impl<'conn, 'a> Tx<'conn, 'a> {
}
// Allocate entids for tempids that didn't upsert. BTreeSet rather than HashSet so this is deterministic.
let unresolved_temp_ids: BTreeSet<TempId> = generation.temp_ids_in_allocations();
let unresolved_temp_ids: BTreeSet<TempIdHandle> = generation.temp_ids_in_allocations();
// TODO: track partitions for temporary IDs.
let entids = self.partition_map.allocate_entids(":db.part/user", unresolved_temp_ids.len());
@ -323,6 +475,10 @@ impl<'conn, 'a> Tx<'conn, 'a> {
assert!(tempids.contains_key(&**tempid));
}
// Any internal tempid has been allocated by the system and is a private implementation
// detail; it shouldn't be exposed in the final transaction report.
let tempids = tempids.into_iter().filter_map(|(tempid, e)| tempid.into_external().map(|s| (s, e))).collect();
// A transaction might try to add or retract :db/ident assertions or other metadata mutating
// assertions, but those assertions might not make it to the store. If we see a possible
// metadata mutation, we will figure out if any assertions made it through later. This is
@ -396,7 +552,7 @@ impl<'conn, 'a> Tx<'conn, 'a> {
}
self.store.commit_transaction(self.tx_id)?;
}
}
db::update_partition_map(self.store, &self.partition_map)?;


@ -22,7 +22,7 @@ use types::{
};
use internal_types::{
Population,
TempId,
TempIdHandle,
TempIdMap,
Term,
TermWithoutTempIds,
@ -41,11 +41,11 @@ use schema::SchemaBuilding;
/// A "Simple upsert" that looks like [:db/add TEMPID a v], where a is :db.unique/identity.
#[derive(Clone,Debug,Eq,Hash,Ord,PartialOrd,PartialEq)]
struct UpsertE(TempId, Entid, TypedValue);
struct UpsertE(TempIdHandle, Entid, TypedValue);
/// A "Complex upsert" that looks like [:db/add TEMPID a OTHERID], where a is :db.unique/identity
#[derive(Clone,Debug,Eq,Hash,Ord,PartialOrd,PartialEq)]
struct UpsertEV(TempId, Entid, TempId);
struct UpsertEV(TempIdHandle, Entid, TempIdHandle);
/// A generation collects entities into populations at a single evolutionary step in the upsert
/// resolution evolution process.
@ -194,8 +194,8 @@ impl Generation {
}
// Collect id->[a v] pairs that might upsert at this evolutionary step.
pub fn temp_id_avs<'a>(&'a self) -> Vec<(TempId, AVPair)> {
let mut temp_id_avs: Vec<(TempId, AVPair)> = vec![];
pub fn temp_id_avs<'a>(&'a self) -> Vec<(TempIdHandle, AVPair)> {
let mut temp_id_avs: Vec<(TempIdHandle, AVPair)> = vec![];
// TODO: map/collect.
for &UpsertE(ref t, ref a, ref v) in &self.upserts_e {
// TODO: figure out how to make this less expensive, i.e., don't require
@ -208,11 +208,11 @@ impl Generation {
/// After evolution is complete, yield the set of tempids that require entid allocation. These
/// are the tempids that appeared in [:db/add ...] entities, but that didn't upsert to existing
/// entids.
pub fn temp_ids_in_allocations(&self) -> BTreeSet<TempId> {
pub fn temp_ids_in_allocations(&self) -> BTreeSet<TempIdHandle> {
assert!(self.upserts_e.is_empty(), "All upserts should have been upserted, resolved, or moved to the allocated population!");
assert!(self.upserts_ev.is_empty(), "All upserts should have been upserted, resolved, or moved to the allocated population!");
let mut temp_ids: BTreeSet<TempId> = BTreeSet::default();
let mut temp_ids: BTreeSet<TempIdHandle> = BTreeSet::default();
for term in self.allocations.iter() {
match term {


@ -307,6 +307,37 @@ macro_rules! def_common_value_methods {
$t::Map(_) => 13,
}
}
pub fn is_collection(&self) -> bool {
match *self {
$t::Nil => false,
$t::Boolean(_) => false,
$t::Integer(_) => false,
$t::BigInteger(_) => false,
$t::Float(_) => false,
$t::Text(_) => false,
$t::PlainSymbol(_) => false,
$t::NamespacedSymbol(_) => false,
$t::Keyword(_) => false,
$t::NamespacedKeyword(_) => false,
$t::Vector(_) => true,
$t::List(_) => true,
$t::Set(_) => true,
$t::Map(_) => true,
}
}
pub fn is_atom(&self) -> bool {
!self.is_collection()
}
pub fn into_atom(self) -> Option<$t> {
if self.is_atom() {
Some(self)
} else {
None
}
}
}
}
@ -357,7 +388,7 @@ macro_rules! def_common_value_display {
}
}
// TODO: EDN escaping.
$t::Text(ref v) => write!($f, "{}", v),
$t::Text(ref v) => write!($f, "\"{}\"", v),
$t::PlainSymbol(ref v) => v.fmt($f),
$t::NamespacedSymbol(ref v) => v.fmt($f),
$t::Keyword(ref v) => v.fmt($f),


@ -76,7 +76,7 @@ macro_rules! def_parser_fn {
fn inner<I: Stream<Item = $item_type>>($input: I) -> ParseResult<$result_type, I> {
$body
}
parser(inner as fn(I) -> ParseResult<$result_type, I>).expected("$name")
parser(inner as fn(I) -> ParseResult<$result_type, I>).expected(stringify!($name))
}
}
}


@ -22,5 +22,10 @@ error_chain! {
description("error parsing edn values")
display("error parsing edn values:\n{}", value_parse_error)
}
DbIdError {
description("bad :db/id in map notation")
display("bad :db/id in map notation: must either be not present or be an entid, an ident, or a tempid")
}
}
}


@ -10,6 +10,8 @@
#![allow(dead_code)]
use std::iter::once;
extern crate combine;
#[macro_use]
extern crate error_chain;
@ -20,11 +22,23 @@ extern crate mentat_tx;
#[macro_use]
extern crate mentat_parser_utils;
use combine::{any, eof, many, parser, satisfy_map, token, Parser, ParseResult, Stream};
use combine::{eof, many, parser, satisfy_map, token, Parser, ParseResult, Stream};
use combine::combinator::{Expected, FnParser};
use edn::symbols::NamespacedKeyword;
use edn::symbols::{
NamespacedKeyword,
PlainSymbol,
};
use edn::types::Value;
use mentat_tx::entities::{Entid, EntidOrLookupRefOrTempId, Entity, LookupRef, OpType};
use mentat_tx::entities::{
AtomOrLookupRefOrVectorOrMapNotation,
Entid,
EntidOrLookupRefOrTempId,
Entity,
LookupRef,
MapNotation,
OpType,
TempId,
};
use mentat_parser_utils::{ResultParser, ValueParseError};
pub mod errors;
@ -55,119 +69,121 @@ def_parser_fn!(Tx, entid, Value, Entid, input, {
.into()
});
def_parser_fn!(Tx, lookup_ref, Value, LookupRef, input, {
satisfy_map(|x: Value| if let Value::Vector(y) = x {
let mut p = (Tx::<&[Value]>::entid(), any(), eof())
.map(|(a, v, _)| LookupRef { a: a, v: v });
let r = p.parse_lazy(&y[..]).into();
match r {
Ok((r, _)) => Some(r),
_ => None,
}
} else {
None
})
.parse_stream(input)
});
fn value_to_lookup_ref(val: &Value) -> Option<LookupRef> {
val.as_list().and_then(|list| {
let vs: Vec<Value> = list.into_iter().cloned().collect();
let mut p = token(Value::PlainSymbol(PlainSymbol::new("lookup-ref")))
.with((Tx::<&[Value]>::entid(),
Tx::<&[Value]>::atom()))
.skip(eof())
.map(|(a, v)| LookupRef { a: a, v: v });
let r: ParseResult<LookupRef, _> = p.parse_lazy(&vs[..]).into();
r.ok().map(|x| x.0)
})
}
def_value_satisfy_parser_fn!(Tx, lookup_ref, LookupRef, value_to_lookup_ref);
def_parser_fn!(Tx, entid_or_lookup_ref_or_temp_id, Value, EntidOrLookupRefOrTempId, input, {
Tx::<I>::entid().map(|x| EntidOrLookupRefOrTempId::Entid(x))
.or(Tx::<I>::lookup_ref().map(|x| EntidOrLookupRefOrTempId::LookupRef(x)))
.or(Tx::<I>::temp_id().map(|x| EntidOrLookupRefOrTempId::TempId(x)))
Tx::<I>::entid().map(EntidOrLookupRefOrTempId::Entid)
.or(Tx::<I>::lookup_ref().map(EntidOrLookupRefOrTempId::LookupRef))
.or(Tx::<I>::temp_id().map(EntidOrLookupRefOrTempId::TempId))
.parse_lazy(input)
.into()
});
def_parser_fn!(Tx, temp_id, Value, String, input, {
satisfy_map(|x: Value| x.into_text())
def_parser_fn!(Tx, temp_id, Value, TempId, input, {
satisfy_map(|x: Value| x.into_text().map(TempId::External))
.parse_stream(input)
});
// TODO: abstract the "match Vector, parse internal stream" pattern to remove this boilerplate.
def_parser_fn!(Tx, add, Value, Entity, input, {
satisfy_map(|x: Value| -> Option<Entity> {
if let Value::Vector(y) = x {
let mut p = (token(Value::NamespacedKeyword(NamespacedKeyword::new("db", "add"))),
Tx::<&[Value]>::entid_or_lookup_ref_or_temp_id(),
Tx::<&[Value]>::entid(),
// TODO: handle lookup-ref.
any(),
eof())
.map(|(_, e, a, v, _)| {
Entity::AddOrRetract {
op: OpType::Add,
e: e,
a: a,
v: v,
}
});
// TODO: use ok() with a type annotation rather than explicit match.
match p.parse_lazy(&y[..]).into() {
Ok((r, _)) => Some(r),
_ => None,
}
} else {
None
}
})
def_parser_fn!(Tx, atom, Value, Value, input, {
satisfy_map(|x: Value| x.into_atom())
.parse_stream(input)
});
def_parser_fn!(Tx, retract, Value, Entity, input, {
satisfy_map(|x: Value| -> Option<Entity> {
if let Value::Vector(y) = x {
let mut p = (token(Value::NamespacedKeyword(NamespacedKeyword::new("db", "retract"))),
Tx::<&[Value]>::entid_or_lookup_ref_or_temp_id(),
Tx::<&[Value]>::entid(),
// TODO: handle lookup-ref.
any(),
eof())
.map(|(_, e, a, v, _)| {
Entity::AddOrRetract {
op: OpType::Retract,
e: e,
a: a,
v: v,
}
});
// TODO: use ok() with a type annotation rather than explicit match.
match p.parse_lazy(&y[..]).into() {
Ok((r, _)) => Some(r),
_ => None,
}
} else {
None
}
})
.parse_stream(input)
fn value_to_nested_vector(val: &Value) -> Option<Vec<AtomOrLookupRefOrVectorOrMapNotation>> {
val.as_vector().and_then(|vs| {
let mut p = many(Tx::<&[Value]>::atom_or_lookup_ref_or_vector()).skip(eof());
let r: ParseResult<Vec<AtomOrLookupRefOrVectorOrMapNotation>, _> = p.parse_lazy(&vs[..]).into();
r.map(|x| x.0).ok()
})
}
def_value_satisfy_parser_fn!(Tx, nested_vector, Vec<AtomOrLookupRefOrVectorOrMapNotation>, value_to_nested_vector);
def_parser_fn!(Tx, atom_or_lookup_ref_or_vector, Value, AtomOrLookupRefOrVectorOrMapNotation, input, {
Tx::<I>::lookup_ref().map(AtomOrLookupRefOrVectorOrMapNotation::LookupRef)
.or(Tx::<I>::nested_vector().map(AtomOrLookupRefOrVectorOrMapNotation::Vector))
.or(Tx::<I>::map_notation().map(AtomOrLookupRefOrVectorOrMapNotation::MapNotation))
.or(Tx::<I>::atom().map(AtomOrLookupRefOrVectorOrMapNotation::Atom))
.parse_lazy(input)
.into()
});
fn value_to_add_or_retract(val: &Value) -> Option<Entity> {
val.as_vector().and_then(|vs| {
let add = token(Value::NamespacedKeyword(NamespacedKeyword::new("db", "add")))
.map(|_| OpType::Add);
let retract = token(Value::NamespacedKeyword(NamespacedKeyword::new("db", "retract")))
.map(|_| OpType::Retract);
let mut p = (add.or(retract),
Tx::<&[Value]>::entid_or_lookup_ref_or_temp_id(),
Tx::<&[Value]>::entid(),
Tx::<&[Value]>::atom_or_lookup_ref_or_vector())
.skip(eof())
.map(|(op, e, a, v)| {
Entity::AddOrRetract {
op: op,
e: e,
a: a,
v: v,
}
});
let r: ParseResult<Entity, _> = p.parse_lazy(&vs[..]).into();
r.map(|x| x.0).ok()
})
}
def_value_satisfy_parser_fn!(Tx, add_or_retract, Entity, value_to_add_or_retract);
fn value_to_map_notation(val: &Value) -> Option<MapNotation> {
val.as_map().cloned().and_then(|map| {
// Parsing pairs is tricky; parsing sequences is easy.
let avseq: Vec<Value> = map.into_iter().flat_map(|(a, v)| once(a).chain(once(v))).collect();
let av = (Tx::<&[Value]>::entid(),
Tx::<&[Value]>::atom_or_lookup_ref_or_vector())
.map(|(a, v)| -> (Entid, AtomOrLookupRefOrVectorOrMapNotation) { (a, v) });
let mut p = many(av)
.skip(eof())
.map(|avs: Vec<(Entid, AtomOrLookupRefOrVectorOrMapNotation)>| -> MapNotation {
avs.into_iter().collect()
});
let r: ParseResult<MapNotation, _> = p.parse_lazy(&avseq[..]).into();
r.map(|x| x.0).ok()
})
}
def_value_satisfy_parser_fn!(Tx, map_notation, MapNotation, value_to_map_notation);
def_parser_fn!(Tx, entity, Value, Entity, input, {
let mut p = Tx::<I>::add()
.or(Tx::<I>::retract());
let mut p = Tx::<I>::add_or_retract()
.or(Tx::<I>::map_notation().map(Entity::MapNotation));
p.parse_stream(input)
});
def_parser_fn!(Tx, entities, Value, Vec<Entity>, input, {
satisfy_map(|x: Value| -> Option<Vec<Entity>> {
if let Value::Vector(y) = x {
let mut p = (many(Tx::<&[Value]>::entity()), eof()).map(|(es, _)| es);
// TODO: use ok() with a type annotation rather than explicit match.
match p.parse_lazy(&y[..]).into() {
Ok((r, _)) => Some(r),
_ => None,
}
} else {
None
}
})
.parse_stream(input)
});
fn value_to_entities(val: &Value) -> Option<Vec<Entity>> {
val.as_vector().and_then(|vs| {
let mut p = many(Tx::<&[Value]>::entity())
.skip(eof());
let r: ParseResult<Vec<Entity>, _> = p.parse_lazy(&vs[..]).into();
r.ok().map(|x| x.0)
})
}
def_value_satisfy_parser_fn!(Tx, entities, Vec<Entity>, value_to_entities);
impl<'a> Tx<&'a [edn::Value]> {
pub fn parse(input: &'a [edn::Value]) -> std::result::Result<Vec<Entity>, errors::Error> {
(Tx::<_>::entities(), eof())
.map(|(es, _)| es)
Tx::<_>::entities()
.skip(eof())
.parse(input)
.map(|x| x.0)
.map_err::<ValueParseError, _>(|e| e.translate_position(input).into())
@ -175,13 +191,61 @@ impl<'a> Tx<&'a [edn::Value]> {
}
}
impl<'a> Tx<&'a [edn::Value]> {
pub fn parse_entid_or_lookup_ref_or_temp_id(input: &'a [edn::Value]) -> std::result::Result<EntidOrLookupRefOrTempId, errors::Error> {
Tx::<_>::entid_or_lookup_ref_or_temp_id()
.skip(eof())
.parse(input)
.map(|x| x.0)
.map_err::<ValueParseError, _>(|e| e.translate_position(input).into())
.map_err(|e| Error::from_kind(ErrorKind::ParseError(e)))
}
}
/// Remove any :db/id value from the given map notation, converting the returned value into
/// something suitable for the entity position rather than something suitable for a value position.
///
/// This is here simply to not expose some of the internal APIs of the tx-parser.
pub fn remove_db_id(map: &mut MapNotation) -> std::result::Result<Option<EntidOrLookupRefOrTempId>, errors::Error> {
// TODO: extract lazy defined constant.
let db_id_key = Entid::Ident(edn::NamespacedKeyword::new("db", "id"));
let db_id: Option<EntidOrLookupRefOrTempId> = if let Some(id) = map.remove(&db_id_key) {
match id {
AtomOrLookupRefOrVectorOrMapNotation::Atom(v) => {
let db_id = Tx::<_>::parse_entid_or_lookup_ref_or_temp_id(&[v][..])
.chain_err(|| Error::from(ErrorKind::DbIdError))?;
Some(db_id)
},
AtomOrLookupRefOrVectorOrMapNotation::LookupRef(_) |
AtomOrLookupRefOrVectorOrMapNotation::Vector(_) |
AtomOrLookupRefOrVectorOrMapNotation::MapNotation(_) => {
bail!(ErrorKind::DbIdError)
},
}
} else {
None
};
Ok(db_id)
}
#[cfg(test)]
mod tests {
use super::*;
use std::collections::LinkedList;
use combine::Parser;
use edn::symbols::NamespacedKeyword;
use edn::types::Value;
use mentat_tx::entities::{Entid, EntidOrLookupRefOrTempId, Entity, LookupRef, OpType};
use mentat_tx::entities::{
Entid,
EntidOrLookupRefOrTempId,
Entity,
LookupRef,
OpType,
AtomOrLookupRefOrVectorOrMapNotation,
};
use std::collections::BTreeMap;
fn kw(namespace: &str, name: &str) -> Value {
Value::NamespacedKeyword(NamespacedKeyword::new(namespace, name))
@ -201,7 +265,7 @@ mod tests {
e: EntidOrLookupRefOrTempId::Entid(Entid::Ident(NamespacedKeyword::new("test",
"entid"))),
a: Entid::Ident(NamespacedKeyword::new("test", "a")),
v: Value::Text("v".into()),
v: AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("v".into())),
},
&[][..])));
}
@ -219,16 +283,20 @@ mod tests {
op: OpType::Retract,
e: EntidOrLookupRefOrTempId::Entid(Entid::Entid(101)),
a: Entid::Ident(NamespacedKeyword::new("test", "a")),
v: Value::Text("v".into()),
v: AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("v".into())),
},
&[][..])));
}
#[test]
fn test_lookup_ref() {
let mut list = LinkedList::new();
list.push_back(Value::PlainSymbol(PlainSymbol::new("lookup-ref")));
list.push_back(kw("test", "a1"));
list.push_back(Value::Text("v1".into()));
let input = [Value::Vector(vec![kw("db", "add"),
Value::Vector(vec![kw("test", "a1"),
Value::Text("v1".into())]),
Value::List(list),
kw("test", "a"),
Value::Text("v".into())])];
let mut parser = Tx::entity();
@ -241,8 +309,52 @@ mod tests {
v: Value::Text("v1".into()),
}),
a: Entid::Ident(NamespacedKeyword::new("test", "a")),
v: Value::Text("v".into()),
v: AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("v".into())),
},
&[][..])));
}
#[test]
fn test_nested_vector() {
let mut list = LinkedList::new();
list.push_back(Value::PlainSymbol(PlainSymbol::new("lookup-ref")));
list.push_back(kw("test", "a1"));
list.push_back(Value::Text("v1".into()));
let input = [Value::Vector(vec![kw("db", "add"),
Value::List(list),
kw("test", "a"),
Value::Vector(vec![Value::Text("v1".into()), Value::Text("v2".into())])])];
let mut parser = Tx::entity();
let result = parser.parse(&input[..]);
assert_eq!(result,
Ok((Entity::AddOrRetract {
op: OpType::Add,
e: EntidOrLookupRefOrTempId::LookupRef(LookupRef {
a: Entid::Ident(NamespacedKeyword::new("test", "a1")),
v: Value::Text("v1".into()),
}),
a: Entid::Ident(NamespacedKeyword::new("test", "a")),
v: AtomOrLookupRefOrVectorOrMapNotation::Vector(vec![AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("v1".into())),
AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("v2".into()))]),
},
&[][..])));
}
#[test]
fn test_map_notation() {
let mut expected: MapNotation = BTreeMap::default();
expected.insert(Entid::Ident(NamespacedKeyword::new("db", "id")), AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("t".to_string())));
expected.insert(Entid::Ident(NamespacedKeyword::new("db", "ident")), AtomOrLookupRefOrVectorOrMapNotation::Atom(kw("test", "attribute")));
let mut map: BTreeMap<Value, Value> = BTreeMap::default();
map.insert(kw("db", "id"), Value::Text("t".to_string()));
map.insert(kw("db", "ident"), kw("test", "attribute"));
let input = [Value::Map(map.clone())];
let mut parser = Tx::entity();
let result = parser.parse(&input[..]);
assert_eq!(result,
Ok((Entity::MapNotation(expected),
&[][..])));
}
}


@ -16,7 +16,14 @@ extern crate mentat_tx_parser;
use edn::parse;
use edn::symbols::NamespacedKeyword;
use edn::types::Value;
use mentat_tx::entities::{Entid, EntidOrLookupRefOrTempId, Entity, OpType};
use mentat_tx::entities::{
AtomOrLookupRefOrVectorOrMapNotation,
Entid,
EntidOrLookupRefOrTempId,
Entity,
OpType,
TempId,
};
use mentat_tx_parser::Tx;
#[test]
@ -36,19 +43,19 @@ fn test_entities() {
op: OpType::Add,
e: EntidOrLookupRefOrTempId::Entid(Entid::Entid(101)),
a: Entid::Ident(NamespacedKeyword::new("test", "a")),
v: Value::Text("v".into()),
v: AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("v".into())),
},
Entity::AddOrRetract {
op: OpType::Add,
e: EntidOrLookupRefOrTempId::TempId("tempid".into()),
e: EntidOrLookupRefOrTempId::TempId(TempId::External("tempid".into())),
a: Entid::Ident(NamespacedKeyword::new("test", "a")),
v: Value::Text("v".into()),
v: AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("v".into())),
},
Entity::AddOrRetract {
op: OpType::Retract,
e: EntidOrLookupRefOrTempId::Entid(Entid::Entid(102)),
a: Entid::Ident(NamespacedKeyword::new("test", "b")),
v: Value::Text("w".into()),
v: AtomOrLookupRefOrVectorOrMapNotation::Atom(Value::Text("w".into())),
},
]);
}


@ -12,33 +12,74 @@
extern crate edn;
use std::collections::BTreeMap;
use std::fmt;
use self::edn::types::Value;
use self::edn::symbols::NamespacedKeyword;
#[derive(Clone,Debug,Eq,Hash,Ord,PartialOrd,PartialEq)]
/// A tempid, either an external tempid given in a transaction (usually as an `edn::Value::Text`),
/// or an internal tempid allocated by Mentat itself.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum TempId {
External(String),
Internal(i64),
}
impl TempId {
pub fn into_external(self) -> Option<String> {
match self {
TempId::External(s) => Some(s),
TempId::Internal(_) => None,
}
}
pub fn into_internal(self) -> Option<i64> {
match self {
TempId::Internal(x) => Some(x),
TempId::External(_) => None,
}
}
}
impl fmt::Display for TempId {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match self {
&TempId::External(ref s) => write!(f, "{}", s),
&TempId::Internal(x) => write!(f, "<tempid {}>", x),
}
}
}
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum Entid {
Entid(i64),
Ident(NamespacedKeyword),
}
#[derive(Clone, Debug, PartialEq)]
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct LookupRef {
pub a: Entid,
// TODO: consider boxing to allow recursive lookup refs.
pub v: Value,
// In theory we could allow nested lookup-refs. In practice this would require us to process
// lookup-refs in multiple phases, like how we resolve tempids, which isn't worth the effort.
pub v: Value, // An atom.
}
#[derive(Clone, Debug, PartialEq)]
pub enum EntidOrLookupRef {
Entid(Entid),
pub type MapNotation = BTreeMap<Entid, AtomOrLookupRefOrVectorOrMapNotation>;
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum AtomOrLookupRefOrVectorOrMapNotation {
Atom(Value),
LookupRef(LookupRef),
Vector(Vec<AtomOrLookupRefOrVectorOrMapNotation>),
MapNotation(MapNotation),
}
#[derive(Clone, Debug, PartialEq)]
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum EntidOrLookupRefOrTempId {
Entid(Entid),
LookupRef(LookupRef),
TempId(String),
TempId(TempId),
}
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
@ -47,12 +88,15 @@ pub enum OpType {
Retract,
}
#[derive(Clone, Debug, PartialEq)]
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum Entity {
// Like [:db/add|:db/retract e a v].
AddOrRetract {
op: OpType,
e: EntidOrLookupRefOrTempId,
a: Entid,
v: Value,
v: AtomOrLookupRefOrVectorOrMapNotation,
},
// Like {:db/id "tempid" a1 v1 a2 v2}.
MapNotation(MapNotation),
}