mentat/db/src/bootstrap.rs


// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.

#![allow(dead_code)]

use edn;
use errors::{ErrorKind, Result};
use edn::types::Value;
use edn::symbols;
use entids;
use db::TypedSQLValue;
use mentat_tx::entities::Entity;
use mentat_tx_parser;
use mentat_core::{
    IdentMap,
    Schema,
    TypedValue,
    values,
};
use schema::SchemaBuilding;
use types::{Partition, PartitionMap};

/// The first transaction ID applied to the knowledge base.
///
/// This is the start of the :db.part/tx partition.
pub const TX0: i64 = 0x10000000;

/// This is the start of the :db.part/user partition.
pub const USER0: i64 = 0x10000;

// Corresponds to the version of the :db.schema/core vocabulary.
pub const CORE_SCHEMA_VERSION: u32 = 1;
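
// A rough sketch of how these constants fit together, assuming each partition
// allocates entids upward from its start value:
//
//   USER0 = 0x10000    = 65_536      -> the first entid in :db.part/user;
//   TX0   = 0x10000000 = 268_435_456 -> the first entid in :db.part/tx.
//
// Everything below USER0 is left to :db.part/db, which is where the bootstrap
// idents in V1_IDENTS below live.
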
lazy_static! {
    static ref V1_IDENTS: [(symbols::NamespacedKeyword, i64); 40] = {
        [(ns_keyword!("db", "ident"), entids::DB_IDENT),
         (ns_keyword!("db.part", "db"), entids::DB_PART_DB),
         (ns_keyword!("db", "txInstant"), entids::DB_TX_INSTANT),
         (ns_keyword!("db.install", "partition"), entids::DB_INSTALL_PARTITION),
         (ns_keyword!("db.install", "valueType"), entids::DB_INSTALL_VALUE_TYPE),
         (ns_keyword!("db.install", "attribute"), entids::DB_INSTALL_ATTRIBUTE),
         (ns_keyword!("db", "valueType"), entids::DB_VALUE_TYPE),
         (ns_keyword!("db", "cardinality"), entids::DB_CARDINALITY),
         (ns_keyword!("db", "unique"), entids::DB_UNIQUE),
         (ns_keyword!("db", "isComponent"), entids::DB_IS_COMPONENT),
         (ns_keyword!("db", "index"), entids::DB_INDEX),
         (ns_keyword!("db", "fulltext"), entids::DB_FULLTEXT),
         (ns_keyword!("db", "noHistory"), entids::DB_NO_HISTORY),
         (ns_keyword!("db", "add"), entids::DB_ADD),
         (ns_keyword!("db", "retract"), entids::DB_RETRACT),
         (ns_keyword!("db.part", "user"), entids::DB_PART_USER),
         (ns_keyword!("db.part", "tx"), entids::DB_PART_TX),
         (ns_keyword!("db", "excise"), entids::DB_EXCISE),
         (ns_keyword!("db.excise", "attrs"), entids::DB_EXCISE_ATTRS),
         (ns_keyword!("db.excise", "beforeT"), entids::DB_EXCISE_BEFORE_T),
         (ns_keyword!("db.excise", "before"), entids::DB_EXCISE_BEFORE),
         (ns_keyword!("db.alter", "attribute"), entids::DB_ALTER_ATTRIBUTE),
         (ns_keyword!("db.type", "ref"), entids::DB_TYPE_REF),
         (ns_keyword!("db.type", "keyword"), entids::DB_TYPE_KEYWORD),
         (ns_keyword!("db.type", "long"), entids::DB_TYPE_LONG),
         (ns_keyword!("db.type", "double"), entids::DB_TYPE_DOUBLE),
         (ns_keyword!("db.type", "string"), entids::DB_TYPE_STRING),
         (ns_keyword!("db.type", "uuid"), entids::DB_TYPE_UUID),
         (ns_keyword!("db.type", "uri"), entids::DB_TYPE_URI),
         (ns_keyword!("db.type", "boolean"), entids::DB_TYPE_BOOLEAN),
         (ns_keyword!("db.type", "instant"), entids::DB_TYPE_INSTANT),
         (ns_keyword!("db.type", "bytes"), entids::DB_TYPE_BYTES),
         (ns_keyword!("db.cardinality", "one"), entids::DB_CARDINALITY_ONE),
         (ns_keyword!("db.cardinality", "many"), entids::DB_CARDINALITY_MANY),
         (ns_keyword!("db.unique", "value"), entids::DB_UNIQUE_VALUE),
         (ns_keyword!("db.unique", "identity"), entids::DB_UNIQUE_IDENTITY),
         (ns_keyword!("db", "doc"), entids::DB_DOC),
         (ns_keyword!("db.schema", "version"), entids::DB_SCHEMA_VERSION),
         (ns_keyword!("db.schema", "attribute"), entids::DB_SCHEMA_ATTRIBUTE),
         (ns_keyword!("db.schema", "core"), entids::DB_SCHEMA_CORE),
        ]
    };
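
    // Each partition below is a (name, start, index) triple: `start` is the first
    // entid in the partition and `index` appears to be the next entid to allocate
    // (see Partition::new in bootstrap_partition_map below). :db.part/db starts at 0
    // and its index is 1 + V1_IDENTS.len() because the bootstrap idents above already
    // occupy the low entids of that partition.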
    static ref V1_PARTS: [(symbols::NamespacedKeyword, i64, i64); 3] = {
        [(ns_keyword!("db.part", "db"), 0, (1 + V1_IDENTS.len()) as i64),
         (ns_keyword!("db.part", "user"), USER0, USER0),
         (ns_keyword!("db.part", "tx"), TX0, TX0),
        ]
    };

    static ref V1_CORE_SCHEMA: [(symbols::NamespacedKeyword); 16] = {
        [(ns_keyword!("db", "ident")),
         (ns_keyword!("db.install", "partition")),
         (ns_keyword!("db.install", "valueType")),
         (ns_keyword!("db.install", "attribute")),
         (ns_keyword!("db", "txInstant")),
         (ns_keyword!("db", "valueType")),
         (ns_keyword!("db", "cardinality")),
         (ns_keyword!("db", "doc")),
         (ns_keyword!("db", "unique")),
         (ns_keyword!("db", "isComponent")),
         (ns_keyword!("db", "index")),
         (ns_keyword!("db", "fulltext")),
         (ns_keyword!("db", "noHistory")),
         (ns_keyword!("db.alter", "attribute")),
         (ns_keyword!("db.schema", "version")),
         (ns_keyword!("db.schema", "attribute")),
        ]
    };

    static ref V1_SYMBOLIC_SCHEMA: Value = {
        let s = r#"
{:db/ident             {:db/valueType :db.type/keyword
                        :db/cardinality :db.cardinality/one
                        :db/index true
                        :db/unique :db.unique/identity}
 :db.install/partition {:db/valueType :db.type/ref
                        :db/cardinality :db.cardinality/many}
 :db.install/valueType {:db/valueType :db.type/ref
                        :db/cardinality :db.cardinality/many}
 :db.install/attribute {:db/valueType :db.type/ref
                        :db/cardinality :db.cardinality/many}
 ;; TODO: support user-specified functions in the future.
 ;; :db.install/function {:db/valueType :db.type/ref
 ;;                       :db/cardinality :db.cardinality/many}
 :db/txInstant         {:db/valueType :db.type/instant
                        :db/cardinality :db.cardinality/one
                        :db/index true}
 :db/valueType         {:db/valueType :db.type/ref
                        :db/cardinality :db.cardinality/one}
 :db/cardinality       {:db/valueType :db.type/ref
                        :db/cardinality :db.cardinality/one}
 :db/doc               {:db/valueType :db.type/string
                        :db/cardinality :db.cardinality/one}
 :db/unique            {:db/valueType :db.type/ref
                        :db/cardinality :db.cardinality/one}
 :db/isComponent       {:db/valueType :db.type/boolean
                        :db/cardinality :db.cardinality/one}
 :db/index             {:db/valueType :db.type/boolean
                        :db/cardinality :db.cardinality/one}
 :db/fulltext          {:db/valueType :db.type/boolean
                        :db/cardinality :db.cardinality/one}
 :db/noHistory         {:db/valueType :db.type/boolean
                        :db/cardinality :db.cardinality/one}
 :db.alter/attribute   {:db/valueType :db.type/ref
                        :db/cardinality :db.cardinality/many}
 :db.schema/version    {:db/valueType :db.type/long
                        :db/cardinality :db.cardinality/one}
 ;; unique-value because an attribute can only belong to a single
 ;; schema fragment.
 :db.schema/attribute  {:db/valueType :db.type/ref
                        :db/index true
                        :db/unique :db.unique/value
                        :db/cardinality :db.cardinality/many}}"#;
        edn::parse::value(s)
            .map(|v| v.without_spans())
            .map_err(|_| ErrorKind::BadBootstrapDefinition("Unable to parse V1_SYMBOLIC_SCHEMA".into()))
            .unwrap()
    };
}
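
// The helpers below turn the statics above into the data needed to bootstrap an
// empty store:
//
//   * V1_IDENTS          -> bootstrap_ident_map() and idents_to_assertions();
//   * V1_PARTS           -> bootstrap_partition_map();
//   * V1_CORE_SCHEMA     -> schema_attrs_to_assertions() (together with CORE_SCHEMA_VERSION);
//   * V1_SYMBOLIC_SCHEMA -> symbolic_schema_to_triples() for bootstrap_schema(), and
//                           symbolic_schema_to_assertions() for bootstrap_entities().
//
// bootstrap_entities() concatenates all of the assertion forms into a single
// bootstrap transaction.
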
/// Convert (ident, entid) pairs into [:db/add IDENT :db/ident IDENT] `Value` instances.
fn idents_to_assertions(idents: &[(symbols::NamespacedKeyword, i64)]) -> Vec<Value> {
    idents
        .into_iter()
        .map(|&(ref ident, _)| {
            let value = Value::NamespacedKeyword(ident.clone());
            Value::Vector(vec![values::DB_ADD.clone(), value.clone(), values::DB_IDENT.clone(), value.clone()])
        })
        .collect()
}
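
// For illustration (a sketch, not exercised anywhere in this file): given the pair
// (:db/ident, entids::DB_IDENT), idents_to_assertions produces the EDN vector
//
//   [:db/add :db/ident :db/ident :db/ident]
//
// i.e. the keyword is used both as the entity to assert against and as the asserted
// value; the numeric entid in the pair is ignored (note the `_` in the closure pattern).
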
/// Convert an ident list into [:db/add :db.schema/core :db.schema/attribute IDENT] `Value` instances.
fn schema_attrs_to_assertions(version: u32, idents: &[symbols::NamespacedKeyword]) -> Vec<Value> {
    let schema_core = Value::NamespacedKeyword(ns_keyword!("db.schema", "core"));
    let schema_attr = Value::NamespacedKeyword(ns_keyword!("db.schema", "attribute"));
    let schema_version = Value::NamespacedKeyword(ns_keyword!("db.schema", "version"));
    idents
        .into_iter()
        .map(|ident| {
            let value = Value::NamespacedKeyword(ident.clone());
            Value::Vector(vec![values::DB_ADD.clone(),
                               schema_core.clone(),
                               schema_attr.clone(),
                               value])
        })
        .chain(::std::iter::once(Value::Vector(vec![values::DB_ADD.clone(),
                                                    schema_core.clone(),
                                                    schema_version,
                                                    Value::Integer(version as i64)])))
        .collect()
}
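
// For illustration (a sketch): schema_attrs_to_assertions(1, &[:db/ident, ...]) would
// yield assertions of the form
//
//   [:db/add :db.schema/core :db.schema/attribute :db/ident]
//   ...
//   [:db/add :db.schema/core :db.schema/version 1]
//
// attaching every core attribute to the :db.schema/core fragment and recording the
// fragment's version last.
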
/// Convert {:ident {:key :value ...} ...} to
/// vec![(symbols::NamespacedKeyword(:ident), symbols::NamespacedKeyword(:key), TypedValue(:value)), ...].
///
/// Such triples are closer to what the transactor will produce when processing attribute
/// assertions.
fn symbolic_schema_to_triples(ident_map: &IdentMap, symbolic_schema: &Value) -> Result<Vec<(symbols::NamespacedKeyword, symbols::NamespacedKeyword, TypedValue)>> {
    // Failure here is a coding error, not a runtime error.
    let mut triples: Vec<(symbols::NamespacedKeyword, symbols::NamespacedKeyword, TypedValue)> = vec![];
    // TODO: Consider `flat_map` and `map` rather than loop.
    match *symbolic_schema {
        Value::Map(ref m) => {
            for (ident, mp) in m {
                let ident = match ident {
                    &Value::NamespacedKeyword(ref ident) => ident,
                    _ => bail!(ErrorKind::BadBootstrapDefinition(format!("Expected namespaced keyword for ident but got '{:?}'", ident)))
                };
                match *mp {
                    Value::Map(ref mpp) => {
                        for (attr, value) in mpp {
                            let attr = match attr {
                                &Value::NamespacedKeyword(ref attr) => attr,
                                _ => bail!(ErrorKind::BadBootstrapDefinition(format!("Expected namespaced keyword for attr but got '{:?}'", attr)))
                            };
                            // We have symbolic idents but the transactor handles entids. Ad-hoc
                            // convert right here. This is a fundamental limitation on the
                            // bootstrap symbolic schema format; we can't represent "real" keywords
                            // at this time.
                            //
                            // TODO: remove this limitation, perhaps by including a type tag in the
                            // bootstrap symbolic schema, or by representing the initial bootstrap
                            // schema directly as Rust data.
                            let typed_value = match TypedValue::from_edn_value(value) {
                                Some(TypedValue::Keyword(ref k)) => {
                                    ident_map.get(k)
                                        .map(|entid| TypedValue::Ref(*entid))
                                        .ok_or(ErrorKind::UnrecognizedIdent(k.to_string()))?
                                },
                                Some(v) => v,
                                _ => bail!(ErrorKind::BadBootstrapDefinition(format!("Expected Mentat typed value for value but got '{:?}'", value)))
                            };
                            triples.push((ident.clone(), attr.clone(), typed_value));
                        }
                    },
                    _ => bail!(ErrorKind::BadBootstrapDefinition("Expected {:db/ident {:db/attr value ...} ...}".into()))
                }
            }
        },
        _ => bail!(ErrorKind::BadBootstrapDefinition("Expected {...}".into()))
    }
    Ok(triples)
}
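
// For illustration (a sketch): with the bootstrap ident map in scope, the
// V1_SYMBOLIC_SCHEMA entry for :db/ident contributes triples along the lines of
//
//   (:db/ident, :db/valueType, TypedValue::Ref(entid of :db.type/keyword))
//   (:db/ident, :db/index,     a boolean TypedValue for `true`)
//
// Keyword-valued attributes are rewritten to TypedValue::Ref via ident_map, which is
// why an unrecognized keyword surfaces as ErrorKind::UnrecognizedIdent.
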
/// Convert {IDENT {:key :value ...} ...} to [[:db/add IDENT :key :value] ...].
fn symbolic_schema_to_assertions(symbolic_schema: &Value) -> Result<Vec<Value>> {
    // Failure here is a coding error, not a runtime error.
    let mut assertions: Vec<Value> = vec![];
    match *symbolic_schema {
        Value::Map(ref m) => {
            for (ident, mp) in m {
                match *mp {
                    Value::Map(ref mpp) => {
                        for (attr, value) in mpp {
                            assertions.push(Value::Vector(vec![values::DB_ADD.clone(),
                                                               ident.clone(),
                                                               attr.clone(),
                                                               value.clone()]));
                        }
                    },
                    _ => bail!(ErrorKind::BadBootstrapDefinition("Expected {:db/ident {:db/attr value ...} ...}".into()))
                }
            }
        },
        _ => bail!(ErrorKind::BadBootstrapDefinition("Expected {...}".into()))
    }
    Ok(assertions)
}
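
// For illustration (a sketch): the same :db/ident entry flattens into raw EDN
// assertion vectors, with keywords left unresolved:
//
//   [:db/add :db/ident :db/valueType :db.type/keyword]
//   [:db/add :db/ident :db/cardinality :db.cardinality/one]
//
// These feed the transaction parser in bootstrap_entities(), whereas the typed
// triples produced above feed Schema construction in bootstrap_schema().
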
pub(crate) fn bootstrap_partition_map() -> PartitionMap {
    V1_PARTS.iter()
        .map(|&(ref part, start, index)| (part.to_string(), Partition::new(start, index)))
        .collect()
}
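
// For illustration (a sketch): the resulting map is keyed by each partition keyword
// rendered as a string, e.g. ":db.part/db" -> Partition::new(0, 41), since the 40
// bootstrap idents occupy the low entids of the :db.part/db partition.
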
pub(crate) fn bootstrap_ident_map() -> IdentMap {
    V1_IDENTS.iter()
        .map(|&(ref ident, entid)| (ident.clone(), entid))
        .collect()
}
pub(crate) fn bootstrap_schema() -> Schema {
    let ident_map = bootstrap_ident_map();
    let bootstrap_triples = symbolic_schema_to_triples(&ident_map, &V1_SYMBOLIC_SCHEMA).unwrap();
    Schema::from_ident_map_and_triples(ident_map, bootstrap_triples).unwrap()
}
pub(crate) fn bootstrap_entities() -> Vec<Entity> {
    let bootstrap_assertions: Value = Value::Vector([
        symbolic_schema_to_assertions(&V1_SYMBOLIC_SCHEMA).unwrap(),
        idents_to_assertions(&V1_IDENTS[..]),
        schema_attrs_to_assertions(CORE_SCHEMA_VERSION, V1_CORE_SCHEMA.as_ref()),
    ].concat());

    // Failure here is a coding error (since the inputs are fixed), not a runtime error.
    // TODO: represent these bootstrap data errors rather than just panicking.
    let bootstrap_entities: Vec<Entity> = mentat_tx_parser::Tx::parse(&bootstrap_assertions.with_spans()).unwrap();
    return bootstrap_entities;
}
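
#[cfg(test)]
mod bootstrap_sketch_tests {
    // A minimal sanity-check sketch, assuming IdentMap and PartitionMap are ordinary
    // map types exposing `len()`, and that the bootstrap transaction contains at least
    // one entity per bootstrap ident. The module and test names here are illustrative.
    use super::*;

    #[test]
    fn bootstrap_pieces_line_up() {
        // Every (ident, entid) pair in V1_IDENTS should land in the ident map.
        assert_eq!(bootstrap_ident_map().len(), V1_IDENTS.len());

        // All three well-known partitions should be present in the partition map.
        assert_eq!(bootstrap_partition_map().len(), V1_PARTS.len());

        // The bootstrap transaction asserts at least one [:db/add IDENT :db/ident IDENT]
        // per bootstrap ident, plus the symbolic schema and schema-fragment assertions.
        assert!(bootstrap_entities().len() >= V1_IDENTS.len());
    }
}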