mentat/public-traits/errors.rs

// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use std; // To refer to std::result::Result.
use std::collections::BTreeSet;
use std::error::Error;
use rusqlite;
use uuid;
use edn;
use core_traits::{
Attribute,
ValueType,
};
use db_traits::errors::{
DbError,
};
use query_algebrizer_traits::errors::{
AlgebrizerError,
};
use query_projector_traits::errors::{
ProjectorError,
};
use query_pull_traits::errors::{
PullError,
};
use sql_traits::errors::{
SQLError,
};
#[cfg(feature = "syncable")]
use tolstoy_traits::errors::{
TolstoyError,
};
#[cfg(feature = "syncable")]
use hyper;
#[cfg(feature = "syncable")]
use serde_json;
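/// The crate-wide result type: fallible public APIs surface a `MentatError` on failure.
/// A minimal usage sketch (the `read_snapshot` helper is hypothetical, not part of this
/// crate); the `?` operator relies on the `From` conversions defined later in this file:
///
/// ```rust,ignore
/// fn read_snapshot(path: &str) -> Result<String> {
///     // std::io::Error is lifted into MentatError::IoError by `?`.
///     let contents = std::fs::read_to_string(path)?;
///     Ok(contents)
/// }
/// ```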
pub type Result<T> = std::result::Result<T, MentatError>;
#[derive(Debug, Fail)]
pub enum MentatError {
#[fail(display = "bad uuid {}", _0)]
BadUuid(String),
#[fail(display = "path {} already exists", _0)]
PathAlreadyExists(String),
#[fail(display = "variables {:?} unbound at query execution time", _0)]
UnboundVariables(BTreeSet<String>),
#[fail(display = "invalid argument name: '{}'", _0)]
InvalidArgumentName(String),
#[fail(display = "unknown attribute: '{}'", _0)]
UnknownAttribute(String),
#[fail(display = "invalid vocabulary version")]
InvalidVocabularyVersion,
#[fail(display = "vocabulary {}/version {} already has attribute {}, and the requested definition differs", _0, _1, _2)]
ConflictingAttributeDefinitions(String, u32, String, Attribute, Attribute),
#[fail(display = "existing vocabulary {} too new: wanted version {}, got version {}", _0, _1, _2)]
ExistingVocabularyTooNew(String, u32, u32),
#[fail(display = "core schema: wanted version {}, got version {:?}", _0, _1)]
UnexpectedCoreSchema(u32, Option<u32>),
#[fail(display = "Lost the transact() race!")]
UnexpectedLostTransactRace,
#[fail(display = "missing core attribute {}", _0)]
MissingCoreVocabulary(edn::query::Keyword),
#[fail(display = "schema changed since query was prepared")]
PreparedQuerySchemaMismatch,
#[fail(display = "provided value of type {} doesn't match attribute value type {}", _0, _1)]
ValueTypeMismatch(ValueType, ValueType),
#[fail(display = "{}", _0)]
IoError(#[cause] std::io::Error),
/// We're just not done yet. Signals that the feature is recognized but not yet
/// implemented.
#[fail(display = "not yet implemented: {}", _0)]
NotYetImplemented(String),
// It would be better to capture the underlying `rusqlite::Error`, but that type doesn't
// implement many useful traits, including `Clone`, `Eq`, and `PartialEq`.
#[fail(display = "SQL error: {}, cause: {}", _0, _1)]
RusqliteError(String, String),
#[fail(display = "{}", _0)]
EdnParseError(#[cause] edn::ParseError),
#[fail(display = "{}", _0)]
DbError(#[cause] DbError),
#[fail(display = "{}", _0)]
AlgebrizerError(#[cause] AlgebrizerError),
#[fail(display = "{}", _0)]
ProjectorError(#[cause] ProjectorError),
#[fail(display = "{}", _0)]
PullError(#[cause] PullError),
#[fail(display = "{}", _0)]
SQLError(#[cause] SQLError),
#[fail(display = "{}", _0)]
UuidError(#[cause] uuid::ParseError),
#[cfg(feature = "syncable")]
#[fail(display = "{}", _0)]
TolstoyError(#[cause] TolstoyError),
#[cfg(feature = "syncable")]
#[fail(display = "{}", _0)]
NetworkError(#[cause] hyper::Error),
#[cfg(feature = "syncable")]
#[fail(display = "{}", _0)]
UriError(#[cause] hyper::error::UriError),
#[cfg(feature = "syncable")]
#[fail(display = "{}", _0)]
SerializationError(#[cause] serde_json::Error),
}
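// The `From` conversions below let `?` lift each subsystem's error into `MentatError`,
// so callers only ever see the unified type. A sketch of branching on it (the `report`
// function is hypothetical, for illustration only):
//
//     fn report(result: Result<()>) {
//         match result {
//             Ok(()) => (),
//             Err(MentatError::UnknownAttribute(name)) => eprintln!("unknown attribute: {}", name),
//             Err(other) => eprintln!("operation failed: {}", other),
//         }
//     }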
impl From<std::io::Error> for MentatError {
fn from(error: std::io::Error) -> MentatError {
MentatError::IoError(error)
}
}
impl From<rusqlite::Error> for MentatError {
fn from(error: rusqlite::Error) -> MentatError {
let cause = match error.source() {
Some(e) => e.to_string(),
None => "".to_string()
};
MentatError::RusqliteError(error.to_string(), cause)
}
}
impl From<uuid::ParseError> for MentatError {
fn from(error: uuid::ParseError) -> MentatError {
MentatError::UuidError(error)
}
}
impl From<edn::ParseError> for MentatError {
fn from(error: edn::ParseError) -> MentatError {
MentatError::EdnParseError(error)
}
}
impl From<DbError> for MentatError {
fn from(error: DbError) -> MentatError {
MentatError::DbError(error)
}
}
impl From<AlgebrizerError> for MentatError {
fn from(error: AlgebrizerError) -> MentatError {
MentatError::AlgebrizerError(error)
}
}
impl From<ProjectorError> for MentatError {
fn from(error: ProjectorError) -> MentatError {
MentatError::ProjectorError(error)
}
}
impl From<PullError> for MentatError {
fn from(error: PullError) -> MentatError {
MentatError::PullError(error)
}
}
impl From<SQLError> for MentatError {
fn from(error: SQLError) -> MentatError {
MentatError::SQLError(error)
}
}
#[cfg(feature = "syncable")]
impl From<TolstoyError> for MentatError {
fn from(error: TolstoyError) -> MentatError {
MentatError::TolstoyError(error)
}
}
#[cfg(feature = "syncable")]
impl From<serde_json::Error> for MentatError {
fn from(error: serde_json::Error) -> MentatError {
MentatError::SerializationError(error)
}
}
#[cfg(feature = "syncable")]
impl From<hyper::Error> for MentatError {
fn from(error: hyper::Error) -> MentatError {
MentatError::NetworkError(error)
}
}
#[cfg(feature = "syncable")]
impl From<hyper::error::UriError> for MentatError {
fn from(error: hyper::error::UriError) -> MentatError {
MentatError::UriError(error)
}
}