Compare commits


769 commits

Author SHA1 Message Date
201ec39dd2 updates (#323)
Reviewed-on: #323
Co-authored-by: Greg Burd <greg@burd.me>
Co-committed-by: Greg Burd <greg@burd.me>
2023-12-07 20:03:13 +00:00
216f078d44 Merge pull request 'Update peg requirement from ~0.7 to ~0.8' (#311) from dependabot/cargo/peg-approx-0.8 into master
Reviewed-on: #311
2023-11-25 16:00:29 +00:00
8ab11d3503 Merge branch 'master' into dependabot/cargo/peg-approx-0.8 2023-11-25 16:00:09 +00:00
92eab3692f Merge pull request 'Update indexmap requirement from ~1.7 to ~1.9' (#316) from dependabot/cargo/indexmap-approx-1.9 into master
Reviewed-on: #316
2023-11-25 15:59:55 +00:00
02ebaf5bae Merge branch 'master' into dependabot/cargo/indexmap-approx-1.9 2023-11-25 15:59:43 +00:00
517b781da1 Merge pull request 'Update rusqlite requirement from ~0.26 to ~0.29' (#320) from dependabot/cargo/rusqlite-approx-0.29 into master
Reviewed-on: #320
2023-11-25 15:59:34 +00:00
6b269a660d Merge branch 'master' into dependabot/cargo/rusqlite-approx-0.29 2023-11-25 15:59:20 +00:00
92f400a553 Merge pull request 'Update tempfile requirement from ~3.2 to ~3.5' (#321) from dependabot/cargo/tempfile-approx-3.5 into master
Reviewed-on: #321
2023-11-25 15:58:29 +00:00
ff527ad220 Merge branch 'master' into dependabot/cargo/tempfile-approx-3.5 2023-11-25 15:57:56 +00:00
73240913cc Merge pull request 'Update pretty requirement from ~0.10 to ~0.12' (#322) from dependabot/cargo/pretty-approx-0.12 into master
Reviewed-on: #322
2023-11-25 15:57:44 +00:00
dependabot[bot]
c10575e04d
Update pretty requirement from ~0.10 to ~0.12
Updates the requirements on [pretty](https://github.com/Marwes/pretty.rs) to permit the latest version.
- [Release notes](https://github.com/Marwes/pretty.rs/releases)
- [Changelog](https://github.com/Marwes/pretty.rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Marwes/pretty.rs/compare/v0.10.0...v0.12.0)

---
updated-dependencies:
- dependency-name: pretty
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-31 10:02:08 +00:00
dependabot[bot]
5fdb9a4970
Update tempfile requirement from ~3.2 to ~3.5
Updates the requirements on [tempfile](https://github.com/Stebalien/tempfile) to permit the latest version.
- [Release notes](https://github.com/Stebalien/tempfile/releases)
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/NEWS)
- [Commits](https://github.com/Stebalien/tempfile/commits)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-29 10:02:12 +00:00
dependabot[bot]
8f226ca050
Update rusqlite requirement from ~0.26 to ~0.29
Updates the requirements on [rusqlite](https://github.com/rusqlite/rusqlite) to permit the latest version.
- [Release notes](https://github.com/rusqlite/rusqlite/releases)
- [Changelog](https://github.com/rusqlite/rusqlite/blob/master/Changelog.md)
- [Commits](https://github.com/rusqlite/rusqlite/compare/rusqlite-0.26.1...v0.29.0)

---
updated-dependencies:
- dependency-name: rusqlite
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-27 10:11:21 +00:00
dependabot[bot]
aa6b634e64
Update indexmap requirement from ~1.7 to ~1.9
Updates the requirements on [indexmap](https://github.com/bluss/indexmap) to permit the latest version.
- [Release notes](https://github.com/bluss/indexmap/releases)
- [Changelog](https://github.com/bluss/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/bluss/indexmap/compare/1.7.0...1.9.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-17 09:37:21 +00:00
986b439fb9 ignore warnings from clippy 2022-05-04 17:09:30 -04:00
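A minimal sketch of what ignoring Clippy warnings typically means in a Rust crate: allow attributes at crate or item level. The specific lints here are illustrative, not necessarily the ones this commit silenced.

```rust
// Crate-level inner attribute (e.g. at the top of lib.rs or main.rs):
// silences a lint for the whole crate.
#![allow(clippy::too_many_arguments)]

// Item-level attribute: silences a single lint for one function only.
#[allow(clippy::needless_range_loop)]
fn copy_all(src: &[u8], dst: &mut [u8]) {
    for i in 0..src.len() {
        dst[i] = src[i];
    }
}

fn main() {
    let src = [1u8, 2, 3];
    let mut dst = [0u8; 3];
    copy_all(&src, &mut dst);
    assert_eq!(src, dst);
}
```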
dependabot[bot]
0d55e6acba
Update peg requirement from ~0.7 to ~0.8
Updates the requirements on [peg](https://github.com/kevinmehall/rust-peg) to permit the latest version.
- [Release notes](https://github.com/kevinmehall/rust-peg/releases)
- [Commits](https://github.com/kevinmehall/rust-peg/compare/0.7.0...0.8.0)

---
updated-dependencies:
- dependency-name: peg
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-04 18:47:06 +00:00
d39f8aad4e
Merge pull request #310 from qpdb/dependabot/cargo/uuid-approx-1.0
Update uuid requirement from ~0.8 to ~1.0
2022-05-04 14:45:35 -04:00
7cfff34602 Name changed, prefix 'to_' was removed.
Signed-off-by: Greg Burd <greg@burd.me>
2022-05-04 14:41:21 -04:00
dependabot[bot]
8175b98a7c
Update uuid requirement from ~0.8 to ~1.0
Updates the requirements on [uuid](https://github.com/uuid-rs/uuid) to permit the latest version.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/0.8.0...1.0.0)

---
updated-dependencies:
- dependency-name: uuid
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-03 17:30:32 +00:00
Mark Watts
b19a994c68 resolve compile errors from rusqlite update 2021-11-10 22:17:19 -05:00
Mark Watts
9a4ba44060
Merge pull request #29 from mwatts/dependabot/cargo/rusqlite-approx-0.26
Update rusqlite requirement from ~0.25 to ~0.26
2021-11-09 22:43:40 -05:00
Mark Watts
124bf54385
Merge pull request #26 from mwatts/dependabot/cargo/ordered-float-approx-2.8
Update ordered-float requirement from ~2.7 to ~2.8
2021-11-09 22:43:08 -05:00
Mark Watts
3df00eb63a
Merge pull request #27 from mwatts/dependabot/cargo/dirs-approx-4.0
Update dirs requirement from ~3.0 to ~4.0
2021-11-09 22:42:51 -05:00
Mark Watts
8041c704dc
Merge pull request #28 from mwatts/dependabot/bundler/docs/nokogiri-1.12.5
Bump nokogiri from 1.11.7 to 1.12.5 in /docs
2021-11-09 22:42:26 -05:00
dependabot[bot]
4aa70567b8
Update rusqlite requirement from ~0.25 to ~0.26
Updates the requirements on [rusqlite](https://github.com/rusqlite/rusqlite) to permit the latest version.
- [Release notes](https://github.com/rusqlite/rusqlite/releases)
- [Changelog](https://github.com/rusqlite/rusqlite/blob/master/Changelog.md)
- [Commits](https://github.com/rusqlite/rusqlite/compare/v0.25.0...v0.26.0)

---
updated-dependencies:
- dependency-name: rusqlite
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-10-04 21:22:11 +00:00
dependabot[bot]
c9a46327bc
Bump nokogiri from 1.11.7 to 1.12.5 in /docs
Bumps [nokogiri](https://github.com/sparklemotion/nokogiri) from 1.11.7 to 1.12.5.
- [Release notes](https://github.com/sparklemotion/nokogiri/releases)
- [Changelog](https://github.com/sparklemotion/nokogiri/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sparklemotion/nokogiri/compare/v1.11.7...v1.12.5)

---
updated-dependencies:
- dependency-name: nokogiri
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-28 05:59:06 +00:00
dependabot[bot]
d22bf451a4
Update dirs requirement from ~3.0 to ~4.0
Updates the requirements on [dirs](https://github.com/soc/dirs-rs) to permit the latest version.
- [Release notes](https://github.com/soc/dirs-rs/releases)
- [Commits](https://github.com/soc/dirs-rs/commits)

---
updated-dependencies:
- dependency-name: dirs
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-16 21:21:31 +00:00
dependabot[bot]
e73effb7d2
Update ordered-float requirement from ~2.7 to ~2.8
Updates the requirements on [ordered-float](https://github.com/reem/rust-ordered-float) to permit the latest version.
- [Release notes](https://github.com/reem/rust-ordered-float/releases)
- [Commits](https://github.com/reem/rust-ordered-float/compare/v2.7.0...v2.8.0)

---
updated-dependencies:
- dependency-name: ordered-float
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-02 21:14:10 +00:00
Mark Watts
eae76e6f43 cargo fmt 2021-08-23 21:31:50 -04:00
Mark Watts
bd818ba1f1
Merge pull request #24 from mwatts/feature/blobs
add blob/bytes as a type
2021-08-23 21:19:24 -04:00
Mark Watts
73feb622cd implement bytes (aka blobs) as native type 2021-08-23 17:25:10 -04:00
Mark Watts
d3821432bc fix problem parsing entities
issue with how bytes are not a collection, so bytes were not correctly treated as atoms
2021-08-23 17:23:09 -04:00
Mark Watts
179c123061 fix panic macro use 2021-08-23 17:21:51 -04:00
Mark Watts
1500d4348c add blobs via #bytes to edn 2021-08-22 17:41:50 -04:00
Mark Watts
479fbc4572
Merge pull request #22 from mwatts/dependabot/cargo/time-0.3.1
Update time requirement from 0.2.15 to 0.3.1
2021-08-22 17:17:11 -04:00
dependabot[bot]
97628a251f
Update time requirement from 0.2.15 to 0.3.1
Updates the requirements on [time](https://github.com/time-rs/time) to permit the latest version.
- [Release notes](https://github.com/time-rs/time/releases)
- [Changelog](https://github.com/time-rs/time/blob/main/CHANGELOG.md)
- [Commits](https://github.com/time-rs/time/compare/v0.2.15...v0.3.1)

---
updated-dependencies:
- dependency-name: time
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-22 21:12:22 +00:00
Mark Watts
903ac24589
Merge pull request #18 from mwatts/dependabot/cargo/ordered-float-approx-2.7
Update ordered-float requirement from ~2.5 to ~2.7
2021-08-22 17:11:01 -04:00
Mark Watts
1f6620bf87
Merge pull request #16 from mwatts/dependabot/cargo/petgraph-approx-0.6
Update petgraph requirement from ~0.5 to ~0.6
2021-08-22 17:10:17 -04:00
Mark Watts
e64e2cf2f2
Merge pull request #23 from mwatts/feature/blobs
remove warnings about Itertools::intersperse
2021-08-22 17:09:42 -04:00
Mark Watts
08694dc45a remove warnings about Itertools::intersperse 2021-08-22 16:53:29 -04:00
Mark Watts
64bb6284d0
Merge pull request #20 from mwatts/dependabot/cargo/env_logger-approx-0.9
Update env_logger requirement from ~0.8 to ~0.9
2021-07-17 17:05:56 -04:00
Mark Watts
5f376a8664
Merge pull request #19 from mwatts/dependabot/bundler/docs/addressable-2.8.0
Bump addressable from 2.7.0 to 2.8.0 in /docs
2021-07-17 16:49:04 -04:00
dependabot[bot]
ad3d7157a5
Update env_logger requirement from ~0.8 to ~0.9
Updates the requirements on [env_logger](https://github.com/env-logger-rs/env_logger) to permit the latest version.
- [Release notes](https://github.com/env-logger-rs/env_logger/releases)
- [Changelog](https://github.com/env-logger-rs/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/env-logger-rs/env_logger/compare/v0.8.0...v0.9.0)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-14 21:21:11 +00:00
dependabot[bot]
46ddac347e
Bump addressable from 2.7.0 to 2.8.0 in /docs
Bumps [addressable](https://github.com/sporkmonger/addressable) from 2.7.0 to 2.8.0.
- [Release notes](https://github.com/sporkmonger/addressable/releases)
- [Changelog](https://github.com/sporkmonger/addressable/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sporkmonger/addressable/compare/addressable-2.7.0...addressable-2.8.0)

---
updated-dependencies:
- dependency-name: addressable
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-13 18:24:45 +00:00
dependabot[bot]
fba46fb1f2
Update ordered-float requirement from ~2.5 to ~2.7
Updates the requirements on [ordered-float](https://github.com/reem/rust-ordered-float) to permit the latest version.
- [Release notes](https://github.com/reem/rust-ordered-float/releases)
- [Commits](https://github.com/reem/rust-ordered-float/compare/v2.5.0...v2.7.0)

---
updated-dependencies:
- dependency-name: ordered-float
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-12 21:20:06 +00:00
dependabot[bot]
071a916981
Update petgraph requirement from ~0.5 to ~0.6
Updates the requirements on [petgraph](https://github.com/petgraph/petgraph) to permit the latest version.
- [Release notes](https://github.com/petgraph/petgraph/releases)
- [Changelog](https://github.com/petgraph/petgraph/blob/master/RELEASES.rst)
- [Commits](https://github.com/petgraph/petgraph/compare/0.5.0...0.6.0)

---
updated-dependencies:
- dependency-name: petgraph
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-05 21:18:06 +00:00
Mark Watts
d4736a83e4 warnings cleanup 2021-07-02 20:39:02 -04:00
Mark Watts
15df38fc8f update rusqlite - all tests pass - some warnings 2021-07-02 20:29:41 -04:00
Mark Watts
614ce63e2b
Merge pull request #13 from mwatts/dependabot/cargo/itertools-approx-0.10
Update itertools requirement from ~0.9 to ~0.10
2021-07-02 19:44:34 -04:00
Mark Watts
5a7caf7488 more package updates; all tests pass 2021-07-02 18:09:07 -04:00
Mark Watts
a02570fd5e
Merge pull request #14 from mwatts/dependabot/cargo/peg-approx-0.7
Update peg requirement from ~0.6 to ~0.7
2021-07-02 17:52:18 -04:00
Mark Watts
4ec3c3cddc
Merge pull request #15 from mwatts/dependabot/cargo/tokio-approx-1.8
Update tokio requirement from ~0.2 to ~1.8
2021-07-02 17:51:58 -04:00
dependabot[bot]
8e8e7b9739
Update itertools requirement from ~0.9 to ~0.10
Updates the requirements on [itertools](https://github.com/rust-itertools/itertools) to permit the latest version.
- [Release notes](https://github.com/rust-itertools/itertools/releases)
- [Changelog](https://github.com/rust-itertools/itertools/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-itertools/itertools/compare/v0.9.0...v0.10.1)

---
updated-dependencies:
- dependency-name: itertools
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:50:01 +00:00
Mark Watts
abcdad5976
Merge pull request #11 from mwatts/dependabot/cargo/hyper-tls-approx-0.5
Update hyper-tls requirement from ~0.4 to ~0.5
2021-07-02 17:49:02 -04:00
dependabot[bot]
9d4f328af1
Update tokio requirement from ~0.2 to ~1.8
Updates the requirements on [tokio](https://github.com/tokio-rs/tokio) to permit the latest version.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-0.2.0...tokio-1.8.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:49:02 +00:00
dependabot[bot]
f918dcd915
Update peg requirement from ~0.6 to ~0.7
Updates the requirements on [peg](https://github.com/kevinmehall/rust-peg) to permit the latest version.
- [Release notes](https://github.com/kevinmehall/rust-peg/releases)
- [Commits](https://github.com/kevinmehall/rust-peg/compare/0.6.0...0.7.0)

---
updated-dependencies:
- dependency-name: peg
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:48:51 +00:00
Mark Watts
7185d5ee13
Merge pull request #12 from mwatts/dependabot/cargo/indexmap-approx-1.7
Update indexmap requirement from ~1.5 to ~1.7
2021-07-02 17:48:48 -04:00
Mark Watts
c8c7dda27a
Merge pull request #8 from mwatts/dependabot/cargo/num-approx-0.4
Update num requirement from ~0.3 to ~0.4
2021-07-02 17:47:48 -04:00
Mark Watts
2f299fde6c update gitignore to ignore doc related files 2021-07-02 17:44:41 -04:00
Mark Watts
3a62dbc122 add packages to workspace 2021-07-02 17:44:13 -04:00
Mark Watts
0d79eeed8f update Gemfile.lock 2021-07-02 17:40:14 -04:00
dependabot[bot]
ca9d8c0096
Update hyper-tls requirement from ~0.4 to ~0.5
Updates the requirements on [hyper-tls](https://github.com/hyperium/hyper-tls) to permit the latest version.
- [Release notes](https://github.com/hyperium/hyper-tls/releases)
- [Commits](https://github.com/hyperium/hyper-tls/compare/v0.4.0...v0.5.0)

---
updated-dependencies:
- dependency-name: hyper-tls
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:30:48 +00:00
dependabot[bot]
ffaba698e0
Update indexmap requirement from ~1.5 to ~1.7
Updates the requirements on [indexmap](https://github.com/bluss/indexmap) to permit the latest version.
- [Release notes](https://github.com/bluss/indexmap/releases)
- [Commits](https://github.com/bluss/indexmap/compare/1.5.0...1.7.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:30:48 +00:00
Mark Watts
8446a1bc4a
Merge pull request #7 from mwatts/dependabot/cargo/hyper-approx-0.14
Update hyper requirement from ~0.13 to ~0.14
2021-07-02 17:29:13 -04:00
dependabot[bot]
722f7fb782
Update num requirement from ~0.3 to ~0.4
Updates the requirements on [num](https://github.com/rust-num/num) to permit the latest version.
- [Release notes](https://github.com/rust-num/num/releases)
- [Changelog](https://github.com/rust-num/num/blob/master/RELEASES.md)
- [Commits](https://github.com/rust-num/num/compare/num-0.3.0...num-0.4.0)

---
updated-dependencies:
- dependency-name: num
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:29:09 +00:00
Mark Watts
75b5a66a91
Merge pull request #10 from mwatts/dependabot/cargo/env_logger-0.8
Update env_logger requirement from 0.7 to 0.8
2021-07-02 17:27:57 -04:00
Mark Watts
ac532be358
Merge pull request #9 from mwatts/dependabot/cargo/ordered-float-approx-2.5
Update ordered-float requirement from ~2.0 to ~2.5
2021-07-02 17:26:35 -04:00
dependabot[bot]
44036160d0
Update env_logger requirement from 0.7 to 0.8
Updates the requirements on [env_logger](https://github.com/env-logger-rs/env_logger) to permit the latest version.
- [Release notes](https://github.com/env-logger-rs/env_logger/releases)
- [Changelog](https://github.com/env-logger-rs/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/env-logger-rs/env_logger/commits/v0.8.4)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:21:18 +00:00
dependabot[bot]
c8c1363b14
Update ordered-float requirement from ~2.0 to ~2.5
Updates the requirements on [ordered-float](https://github.com/reem/rust-ordered-float) to permit the latest version.
- [Release notes](https://github.com/reem/rust-ordered-float/releases)
- [Commits](https://github.com/reem/rust-ordered-float/compare/v2.0.0...v2.5.1)

---
updated-dependencies:
- dependency-name: ordered-float
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:21:07 +00:00
dependabot[bot]
32ce6d2129
Update hyper requirement from ~0.13 to ~0.14
Updates the requirements on [hyper](https://github.com/hyperium/hyper) to permit the latest version.
- [Release notes](https://github.com/hyperium/hyper/releases)
- [Changelog](https://github.com/hyperium/hyper/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper/compare/v0.13.0...v0.14.9)

---
updated-dependencies:
- dependency-name: hyper
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 21:20:42 +00:00
Mark Watts
380945a655 update Gemfile.lock 2021-07-02 17:18:18 -04:00
Mark Watts
af9bb1fcfe add packages to workspace 2021-07-02 17:11:28 -04:00
Mark Watts
c295d82872
Merge pull request #5 from mwatts/dependabot/cargo/combine-approx-4.6
Update combine requirement from ~4.3 to ~4.6
2021-07-02 10:11:50 -04:00
Mark Watts
c2e39eeb5c
Merge pull request #4 from mwatts/dependabot/cargo/rustc_version-approx-0.4
Update rustc_version requirement from ~0.3 to ~0.4
2021-07-02 10:11:26 -04:00
Mark Watts
985fd0bbdf
Merge pull request #3 from mwatts/dependabot/cargo/tempfile-approx-3.2
Update tempfile requirement from ~3.1 to ~3.2
2021-07-02 10:10:59 -04:00
dependabot[bot]
c02c06ce2b
Update combine requirement from ~4.3 to ~4.6
Updates the requirements on [combine](https://github.com/Marwes/combine) to permit the latest version.
- [Release notes](https://github.com/Marwes/combine/releases)
- [Changelog](https://github.com/Marwes/combine/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Marwes/combine/compare/v4.3.0...v4.6.0)

---
updated-dependencies:
- dependency-name: combine
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 14:06:31 +00:00
dependabot[bot]
b138c7e257
Update rustc_version requirement from ~0.3 to ~0.4
Updates the requirements on [rustc_version](https://github.com/Kimundi/rustc-version-rs) to permit the latest version.
- [Release notes](https://github.com/Kimundi/rustc-version-rs/releases)
- [Commits](https://github.com/Kimundi/rustc-version-rs/compare/v0.3.0...v0.4.0)

---
updated-dependencies:
- dependency-name: rustc_version
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 14:06:12 +00:00
dependabot[bot]
88df3c4d8d
Update tempfile requirement from ~3.1 to ~3.2
Updates the requirements on [tempfile](https://github.com/Stebalien/tempfile) to permit the latest version.
- [Release notes](https://github.com/Stebalien/tempfile/releases)
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/NEWS)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.1.0...v3.2.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 14:06:01 +00:00
Mark Watts
feb9665299
Create dependabot.yml 2021-07-02 10:04:44 -04:00
Mark Watts
19cb2870da
fix invalid cron in audit 2021-07-02 08:55:38 -04:00
Mark Watts
5c2a7261a1
Merge pull request #2 from mwatts/dependabot/bundler/docs/nokogiri-1.11.7
Bump nokogiri from 1.8.3 to 1.11.7 in /docs
2021-07-02 08:50:52 -04:00
Mark Watts
0f015b2f10
Merge pull request #1 from mwatts/dependabot/bundler/docs/rubyzip-2.3.0
Bump rubyzip from 1.2.1 to 2.3.0 in /docs
2021-07-02 08:40:26 -04:00
dependabot[bot]
da89cfc797
Bump nokogiri from 1.8.3 to 1.11.7 in /docs
Bumps [nokogiri](https://github.com/sparklemotion/nokogiri) from 1.8.3 to 1.11.7.
- [Release notes](https://github.com/sparklemotion/nokogiri/releases)
- [Changelog](https://github.com/sparklemotion/nokogiri/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sparklemotion/nokogiri/compare/v1.8.3...v1.11.7)

---
updated-dependencies:
- dependency-name: nokogiri
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 12:37:59 +00:00
dependabot[bot]
9a6ae48d8e
Bump rubyzip from 1.2.1 to 2.3.0 in /docs
Bumps [rubyzip](https://github.com/rubyzip/rubyzip) from 1.2.1 to 2.3.0.
- [Release notes](https://github.com/rubyzip/rubyzip/releases)
- [Changelog](https://github.com/rubyzip/rubyzip/blob/master/Changelog.md)
- [Commits](https://github.com/rubyzip/rubyzip/compare/v1.2.1...v2.3.0)

---
updated-dependencies:
- dependency-name: rubyzip
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 12:37:57 +00:00
Mark Watts
d97e882a4a
remove windows temporarily 2020-11-17 21:15:42 -05:00
5a65cd38c9 Start integrating GitHub Actions for CI. 2020-10-29 16:02:30 -04:00
5e700133f5 Use new call for single character push. 2020-10-29 16:02:02 -04:00
4a63ca98df Add a few more versions. 2020-10-29 16:01:01 -04:00
2e28e87af8 A few minor fixes. 2020-10-29 15:59:31 -04:00
5998ef73fb Take 3, a potential fix for CI/CD issues. 2020-10-29 14:05:10 -04:00
9bcd0955ba
Merge pull request #298 from qpdb/dependabot/cargo/combine-approx-4.3
Update combine requirement from ~4.2 to ~4.3
2020-10-29 14:04:31 -04:00
39219af1ff
Merge pull request #300 from qpdb/dependabot/cargo/rustc_version-approx-0.3
Update rustc_version requirement from ~0.2 to ~0.3
2020-10-29 14:03:53 -04:00
6d88abfb44
Merge pull request #299 from qpdb/dependabot/cargo/env_logger-approx-0.8
Update env_logger requirement from ~0.7 to ~0.8
2020-10-29 14:03:05 -04:00
dependabot-preview[bot]
31ec02afd3
Update rustc_version requirement from ~0.2 to ~0.3
Updates the requirements on [rustc_version](https://github.com/Kimundi/rustc-version-rs) to permit the latest version.
- [Release notes](https://github.com/Kimundi/rustc-version-rs/releases)
- [Commits](https://github.com/Kimundi/rustc-version-rs/commits)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-10-29 10:29:13 +00:00
dependabot-preview[bot]
1622978acf
Update env_logger requirement from ~0.7 to ~0.8
Updates the requirements on [env_logger](https://github.com/env-logger-rs/env_logger) to permit the latest version.
- [Release notes](https://github.com/env-logger-rs/env_logger/releases)
- [Changelog](https://github.com/env-logger-rs/env_logger/blob/master/CHANGELOG.md)
- [Commits](https://github.com/env-logger-rs/env_logger/compare/v0.7.0...v0.8.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-10-19 10:28:38 +00:00
dependabot-preview[bot]
26cd399e3a
Update combine requirement from ~4.2 to ~4.3
Updates the requirements on [combine](https://github.com/Marwes/combine) to permit the latest version.
- [Release notes](https://github.com/Marwes/combine/releases)
- [Changelog](https://github.com/Marwes/combine/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Marwes/combine/compare/v4.2.0...v4.3.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-08-27 10:35:57 +00:00
949386a43f Hopefully fix CI/CD issue with clippy install. 2020-08-26 10:05:32 -04:00
5b0cb80b32 Fix CI/CD issue with clippy install. 2020-08-26 08:46:31 -04:00
8039183097 Update to newer tokio. 2020-08-24 21:49:16 -04:00
9c472eff41
Merge pull request #297 from qpdb/dependabot/cargo/rusqlite-approx-0.24
Update rusqlite requirement from ~0.23 to ~0.24
2020-08-24 16:46:25 -04:00
324929a02a Update all uses of rusqlite to 0.24 2020-08-24 16:41:17 -04:00
dependabot-preview[bot]
526c9c3928
Update rusqlite requirement from ~0.23 to ~0.24
Updates the requirements on [rusqlite](https://github.com/rusqlite/rusqlite) to permit the latest version.
- [Release notes](https://github.com/rusqlite/rusqlite/releases)
- [Changelog](https://github.com/rusqlite/rusqlite/blob/master/Changelog.md)
- [Commits](https://github.com/rusqlite/rusqlite/compare/0.23.0...0.23.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-08-24 10:29:49 +00:00
4b1583473e Fix more issues identified by clippy (aka: lint). 2020-08-07 09:15:36 -04:00
125306e108 Update dependencies. Lint. 2020-08-05 23:03:58 -04:00
0e63167aab Update TravisCI. 2020-05-25 11:18:42 -04:00
5899bf8624 Minor version adjustments and fixes. 2020-05-25 10:51:22 -04:00
bf1ac14d32 Update dependency versions. Fix minor warnings. 2020-05-12 10:21:51 -04:00
b428579865 Update dependencies, Rust version 1.44.0-nightly and fix warnings. 2020-04-23 12:23:12 -04:00
9eb6bc6220 Add an example, more notes. 2020-02-27 12:09:11 -05:00
41f1ff2393 Box the SelectQuery in ProjectedSelect. 2020-02-27 11:18:13 -05:00
5979fa5844 Starting points for makefile and some general notes. 2020-02-27 09:27:07 -05:00
dfb5866174 lint 2020-02-21 10:27:39 -05:00
58e06742fd lint 2020-02-21 09:53:40 -05:00
a8223d11c9 Box the ConjoiningClauses in the enum ComputedTable to lower the size of that struct. 2020-02-20 12:16:21 -05:00
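The two boxing commits above use a standard Rust space optimization: an enum is as large as its largest variant, so boxing a big variant's payload shrinks every value of the enum. A minimal sketch with illustrative types, not Mentat's actual ProjectedSelect or ComputedTable definitions:

```rust
use std::mem::size_of;

// A deliberately large payload standing in for something like ConjoiningClauses.
#[allow(dead_code)]
struct BigPayload {
    data: [u64; 32],
}

// Without boxing, every value of this enum reserves space for BigPayload.
#[allow(dead_code)]
enum Unboxed {
    Small(u8),
    Large(BigPayload),
}

// Boxing the large variant keeps the enum roughly pointer-sized plus a tag;
// the payload moves to the heap and is only paid for when it is actually used.
#[allow(dead_code)]
enum Boxed {
    Small(u8),
    Large(Box<BigPayload>),
}

fn main() {
    println!("unboxed: {} bytes", size_of::<Unboxed>()); // ~264 bytes
    println!("boxed:   {} bytes", size_of::<Boxed>());   // ~16 bytes
}
```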
b41bcf40f3 Cleanup. 2020-02-10 10:46:48 -05:00
18a0c15320
Merge pull request #3 from qpdb/gburd/learning-by-linting
lint
2020-01-31 13:59:38 -05:00
6b7343a893 Tweak CI/Travis config. 2020-01-31 13:25:00 -05:00
Greg Burd
4f81c4e15b Attempting to clean up with clippy, rustfmt, etc.
Integrate https://github.com/mozilla/mentat/pull/806
2020-01-31 10:55:45 -05:00
8aec4048b9
Merge pull request #2 from qpdb/gburd/update-peg-dep
rust-peg parser updates
2020-01-23 11:19:25 -05:00
Greg Burd
ef1c196516 Update pretty_print dependency and fix issues. 2020-01-23 11:16:14 -05:00
Greg Burd
fcb3a9182f Fix module issue found when testing all-features. 2020-01-23 11:16:14 -05:00
Greg Burd
9421a5c3bb Fixes some mistakes when updating the grammar. 2020-01-23 11:16:14 -05:00
Greg Burd
60c65033b2 Specify dependency versions without patch component unless necessary. 2020-01-23 11:15:49 -05:00
Greg Burd
fa3091d078 Update indexmap dependency. 2020-01-16 13:30:29 -05:00
286155a18a
Merge pull request #1 from qpdb/gburd/2018edition-fmt-fix-deps
Breathe life back into this project.
2020-01-16 11:27:20 -05:00
Greg Burd
d6b3d1818a Booleans should be stored as their int value, not string value. 2020-01-16 10:58:26 -05:00
Greg Burd
b2f92b8461 Update to 2018 edition of Rust (1.42). Fix and format code. Update dependencies. Fix tests. 2020-01-16 10:58:21 -05:00
Conrad Dean
c2122a210c fix compiler warnings 2019-07-23 10:38:59 -04:00
Conrad Dean
bcb56b0561 stop the docs folder from taking over every search result 2019-07-23 09:01:22 -04:00
Conrad Dean
be02b86bbe fix tolstoy tests when running "cargo test -p mentat_tolstoy" 2019-07-23 08:51:57 -04:00
Conrad Dean
e6a2af3553 fix compile errors in tests 2019-07-22 22:48:38 -04:00
Conrad Dean
2b97a90b64 not sure if this Value is needed; it started to conflict with a different return type a few days ago 2019-07-22 22:31:44 -04:00
Conrad Dean
ae9f969e59 restructure result type nesting combined with the correct .into call
this was a bit of a trip!  we will see if I actually did this correctly
later...
2019-07-22 22:30:08 -04:00
Conrad Dean
4d92e3eef9 the params macro fixes everything 2019-07-22 22:29:35 -04:00
Conrad Dean
3547cfcd16 fix weird params mis-matches with the params macro 2019-07-22 22:20:14 -04:00
Conrad Dean
4e9d6b3f58 help the compiler with an annotation on the outside of an expression instead of in the middle of it 2019-07-22 09:10:49 -04:00
Conrad Dean
76ae972e2e fix for rusqlite Result api 2019-07-22 08:58:19 -04:00
Conrad Dean
f4002f34f4 fix empty param type inference with macro and update for rusqlite Result api 2019-07-22 08:54:51 -04:00
Conrad Dean
5596873e8f fiddle with changes in borrow types since rusqlite changed their api 2019-07-22 08:47:23 -04:00
Conrad Dean
cdfd1f6b30 Fix raw get() API to use the Result-based API
rusqlite must have just returned the data itself rather than relying on
the Result type to communicate failures to callers.  Fixing that here,
albeit in a fragile way.
2019-07-22 08:36:32 -04:00
Conrad Dean
ff48f6369a fix breaking change on rusqlite changing RowIndex's signature
RowIndex must have just been an alias over i32, but now it's a trait
implemented on str and usize, so we need to change mentat's internal
type alias for it to a usize.
2019-07-22 08:33:34 -04:00
Conrad Dean
95780c0ab5 type signature on rusqlite::Row changed, only need one lifetime annotation 2019-07-22 07:41:53 -04:00
Conrad Dean
e3bd1cb77e iterator error was because it must return a rusqlite::Result. use rusqlite macro for empty params 2019-07-22 07:40:59 -04:00
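The cluster of rusqlite fixes in this stretch of the log tracks rusqlite's move to a fallible, Result-based row API. A minimal sketch against a recent rusqlite, with an illustrative table, showing the three points these commits touch: Row::get returning Result, RowIndex accepting either a usize position or a &str column name, and the params! macro for (possibly empty) parameter lists:

```rust
use rusqlite::{params, Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;

    // params![] is the macro to use when a statement takes no parameters
    // (older rusqlite had a NO_PARAMS constant for this).
    conn.execute(
        "CREATE TABLE datoms (e INTEGER NOT NULL, v TEXT NOT NULL)",
        params![],
    )?;
    conn.execute("INSERT INTO datoms (e, v) VALUES (?1, ?2)", params![1, "hello"])?;

    // query_row's closure returns rusqlite::Result, and Row::get is fallible:
    // it yields Result<T> rather than the bare value.
    let v: String = conn.query_row("SELECT v FROM datoms WHERE e = ?1", params![1], |row| {
        // RowIndex is implemented for usize (column position) and &str (column name).
        let by_index: String = row.get(0)?;
        let by_name: String = row.get("v")?;
        assert_eq!(by_index, by_name);
        Ok(by_index)
    })?;
    assert_eq!(v, "hello");
    Ok(())
}
```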
Conrad Dean
a25f476734 remove wrapper types that seem unnecessary, and wrap the result of a fn with a Result as the compiler told me 2019-07-20 13:22:46 -04:00
Conrad Dean
17112dbc4d fix bug where param types cannot be inferred (because it's an empty set of params) 2019-07-19 11:25:00 -04:00
Conrad Dean
b6b316953e seems to resolve some compiler errors 2019-07-17 11:46:47 -04:00
Conrad Dean
3d965fdf6e try fixing build by upgrading rusqlite to 0.19 2019-07-17 10:59:38 -04:00
Grisha Kruglov
e55376e98b
Updates the Sync section of the README 2018-09-10 12:52:41 -07:00
Grisha Kruglov
b22b29679b
Basic sync support (#563) r=nalexander
* Pre: remove remnants of 'open_empty'

* Pre: Cleanup 'datoms' table after a timeline move

Since timeline move operations use a transactor, they generate a
"phantom" 'tx' and a 'txInstant' assertion. It is "phantom" in a sense
that it was never present in the 'transactions' table, and is entirely
synthetic as far as our database is concerned.
It's an implementational artifact, and we were not cleaning it up.

It becomes a problem when we start inserting transactions after a move.
Once the transactor clashes with the phantom 'tx', it will retract the
phantom 'txInstant' value, leaving the transactions log in an incorrect state.

This patch adds a test for this scenario and elects the easy way out: simply
remove the offending 'txInstant' datom.

* Part 1: Sync without support for side-effects

A "side-effect" is defined here as a mutation of a remote state as part
of the sync.

If, during a sync we determine that a remote state needs to be changed, bail out.

This generally supports different variations of "baton-passing" syncing, where clients
will succeed syncing if each change is non-conflicting.

* Part 2: Support basic "side-effects" syncing

This patch introduces a concept of a follow-up sync. If a sync generated
a "merge transaction" (a regular transaction that contains assertions
necessary for local and remote transaction logs to converge), then
this transaction needs to be uploaded in a follow-up sync.

Generated SyncReport indicates if a follow-up sync is required.

Follow-up sync itself is just a regular sync. If remote state did not change,
it will result in a simple RemoteFastForward. Otherwise, we'll continue
merging and requesting a follow-up.

Schema alterations are explicitly not supported.

As local transactions are rebased on top of the remote ones, the following changes happen:
- entids are changed into tempids, letting transactor upsert :db/unique values
- entids for retractions are changed into lookup-refs if we're confident they'll succeed
-- otherwise, retractions are dropped on the floor

* Post: use a macro for more readable tests

* Tolstoy README
2018-09-07 19:18:20 -07:00
Nick Alexander
64821079c2
Update README.md to mark Mentat as unmaintained.
See https://mail.mozilla.org/pipermail/firefox-dev/2018-September/006780.html.
2018-09-07 14:37:50 -07:00
Grisha Kruglov
bcec011ca5 Make sure double retractions are not inserted. Fixes #818. (#819) r=nalexander 2018-09-07 13:12:28 -07:00
sc13-bioinf
fba9568d44 Allow plus symbol "+" in symbol names. (#821) r=nalexander 2018-09-05 09:28:32 -07:00
Emily Toop
e3113783ae Fix merge error on iOS automation patch 2018-08-22 16:44:45 +01:00
Emily Toop
cd99774e2c Adding iOS Build and Test to CI (#804)
* Add iOS SDK build and test to rust 1.25.0 version of travis CI build

* Address review comments

* Move iOS testing and document generation into post test jobs
2018-08-22 08:43:17 -07:00
Grisha Kruglov
66cc4e14ad Post: use dirs crate, avoiding compile warning about home_dir 2018-08-20 18:23:46 -07:00
Grisha Kruglov
22b17a6779 Split "mentat transaction" logic away from the main crate
Sync needs to operate over a "mentat transaction", not just a "db transaction".
This shuffle allows internal mentat crates to consume InProgress, which models
the concept of a "mentat transaction".
2018-08-20 18:23:46 -07:00
Grisha Kruglov
6160dd59f7 Pre: use 'db/syncable' feature; derive serialization for PartitionMap 2018-08-20 18:23:46 -07:00
Grisha Kruglov
b8b2aef181 Pre: Split a Db error for clarity
error_chain stack limitations no longer apply, so let's have better errors!
2018-08-20 18:23:46 -07:00
Grisha Kruglov
5bc6d76bb3 Pre: expose read_partition_map from the db crate 2018-08-20 18:23:46 -07:00
Nick Alexander
8c2245ff0b Pre: Add top-level NotYetImplemented error. 2018-08-20 18:23:46 -07:00
Nick Alexander
0b84a0802d Pre: Remove open_empty.
This was a work-around for Tolstoy, which couldn't gracefully handle
syncing a store with a bootstrap transaction.  Tolstoy now handles
that single transaction, so this is no longer necessary.
2018-08-20 18:23:46 -07:00
Grisha Kruglov
8ddbd18f5f
Add travis-ci build status badge to README. 2018-08-20 17:56:49 -07:00
Grisha Kruglov
9e8292e68b Allow 'sqlcipher' feature for all uses of rusqlite
This also patches our CI test script to only run "--feature sqlcipher"
tests on sub-crates which expose this feature (i.e. those that themselves rely on rusqlite).
2018-08-20 16:55:34 -07:00
Emily Toop
fe1a034822 Fix broken iOS tests 2018-08-20 14:40:39 -07:00
Emily Toop
d61e070e08 Get iOS tests building again. 2018-08-20 14:40:39 -07:00
Grisha Kruglov
db4350aab7 Bump version to 0.11.1 2018-08-09 13:16:05 -07:00
Grisha Kruglov
5976869b0a Post: Make tests pass on Rust 1.25.0
For some reason, the converted doc test fails on Rust 1.25.0, while
working with other Rust versions. For simplicity, just convert it into
a regular test.
2018-08-09 13:16:05 -07:00
Grisha Kruglov
bf8c2c1516 Post: Remove bunch of dependencies from query-pull 2018-08-09 13:16:05 -07:00
Grisha Kruglov
dbb4aab071 Post: Remove mentat_sql dependency from query-projector 2018-08-09 13:16:05 -07:00
Grisha Kruglov
1e488d720b Post: Use a single implementation of bail macro 2018-08-09 13:16:05 -07:00
Grisha Kruglov
e9398dd50d Part 1: Move public errors into public-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
c00e14f5ff Pre: Remove :: dependency from src/errors.rs 2018-08-09 13:16:05 -07:00
Grisha Kruglov
c8e6a511f4 Pre: Move tolstoy/errors into tolstoy-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
9381af4289 Pre: Move core/Attribute* to core-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
68d0e17824 Pre: Move sql/errors into sql_traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
05ef149545 Pre: Fold query-translator into query-projector 2018-08-09 13:16:05 -07:00
Grisha Kruglov
6312e89aba Pre: Move query-projectors/errors and aggregates into query-projector-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
ccdd17551a Pre: Move query-algebrizer/error.rs into query-algebrizer-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
9fd198f96a Pre: Move ValueTypeSet into core-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
2ae8594d20 Pre: Do not re-export EdnParseError from core 2018-08-09 13:16:05 -07:00
Grisha Kruglov
07beb68c7a Pre: Remove query/ crate 2018-08-09 13:16:05 -07:00
Grisha Kruglov
11aaa193f5 Pre: Move query-pull/errors into query-pull-traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
cebb85a7fe Pre: Move db/errors.rs into db_traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
d0214fad7d Pre: Move core/types.rs into core_traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
a57ba5d79f Pre: Move Entid and KnownEntid into core_traits 2018-08-09 13:16:05 -07:00
Grisha Kruglov
f8478835a2 Use crates.io version of the enum-set
rnewman upstreamed his changes in https://github.com/contain-rs/enum-set/pull/20
2018-08-03 15:41:19 -07:00
Nick Alexander
79113498e7 [automation] Split into generic and Mentat-specific Docker images. 2018-08-03 12:53:22 -07:00
Nick Alexander
b5d0e12a24 [automation] Re-add project-specific Mentat Docker image. 2018-08-03 12:53:01 -07:00
Nick Alexander
814ab19ecb [automation] Move project-agnostic Dockerfile into subdirectory.
Docker is directory oriented so we have to play along.
2018-08-03 12:53:01 -07:00
Nick Alexander
0cb8227750 [automation] Be project agnostic; use armv7-linux-androideabi; install Android standalone toolchains.
This is ready for Android Rust-y components: it no longer references Mentat.

The standalone toolchains are installed into
$ANDROID_NDK_TOOLCHAIN_DIR/arch-$ANDROID_NDK_API_VERSION.
2018-08-03 12:53:01 -07:00
Nick Alexander
f747e2e550 [sdks/android] Pre: Disable testCaching for frequent intermittent failures. 2018-08-03 12:53:01 -07:00
Nick Alexander
5b4f50ce1b Fix vcsTag, yet again. 2018-07-31 14:42:08 -07:00
Nick Alexander
3cd61a0c93 Fix vcsTag, again. 2018-07-31 14:05:55 -07:00
Nick Alexander
65e9822ad6 Bump to version 0.11.0. 2018-07-31 09:59:18 -07:00
Nick Alexander
4325d6c0c3 [sdks/android] Move main Mentat Android SDK tests from androidTest to test.
This leverages JNA to test the Android SDK on the host machine using
Robolectric, which is significantly faster and easier to debug than
the equivalent on-device instrumentation tests.

We'll still want instrumentation smoke tests, but they won't need to
cover the entire range of the Android SDK.
2018-07-31 09:54:29 -07:00
Nick Alexander
e06bfd1b7d [sdks/android] Workaround Android Studio JUnit test runner runtime classpath issue. 2018-07-27 10:43:53 -07:00
Nick Alexander
a7d2057bc6 [sdks/android] Post: Address most Android Studio complaints.
The only ones I cared about were unchecked access, but while I'm here,
might as well do most of them.
2018-07-27 10:43:53 -07:00
Nick Alexander
2978ad91c0 [sdks/android] Part 3: Finish conversion to Robolectric. 2018-07-27 10:43:53 -07:00
Nick Alexander
190e05e360 [sdks/android] Part 2: Replace Expectation/wait/notify with CountDownLatch.
Locally, I witnessed very slow tests.  Profiling with Visual VM
revealed a lot of time spent in `wait`.

Digging in, we were trying to be clever, with a `wait(1000)/notify`
mechanism.  However, there were never multiple threads in play, so the
waiter wasn't waiting when `notify` was invoked.  That means we always
timed out.  I think this never worked and using bare `wait()` would
have revealed that.

Anyway, `CountDownLatch` maintains the one bit of state (was I
notified) and generalizes smoothly to when we have threads.
2018-07-27 10:43:53 -07:00
Nick Alexander
d23f2b373a [sdks/android] Include vcsTag when uploading to bintray. 2018-07-27 10:43:52 -07:00
Nick Alexander
6856462f1b [sdks/android] Part 1: Move androidTest to test. 2018-07-27 10:43:52 -07:00
Grisha Kruglov
536d40ad84 Part 4: Add support for moving transactions off of main timeline 2018-07-26 17:14:05 -07:00
Grisha Kruglov
4ec780c87a Part 3: Use a view to derive parts table
Being able to derive partition map from partition definitions and current
state of the world (transactions), segmented by timelines, is useful
because it lets us not worry about keeping materialized partition maps
up-to-date - since there's no need for materialized partition maps at that point.

This comes in very handy when we start moving chunks of transactions off of our mainline.
Alternative to this work would look like materializing partition maps per timeline,
growing support for incremental "backwards update" of the materialized maps, etc.

Our core partitions are defined in 'known_parts' table during bootstrap,
and what used to be 'parts' table is a generated view that operates over
transactions to figure out partition index.

'parts' is defined for the main timeline. Querying parts for other timelines
or for particular timeline+tx combinations will look similar.
2018-07-26 17:14:05 -07:00
Grisha Kruglov
3ca5255cde Part 2: Add basic support for timelines to the transactor
This records transactions onto a default timeline (0).
2018-07-26 17:14:05 -07:00
Grisha Kruglov
0974108a52 Part 1: Allow specifying transactor's commit behaviour
Normally we want to both materialize our changes (into 'datoms')
and commit source transactions into the 'transactions' table.

However, when moving transactions from timeline to timeline
we don't want to persist artifacts (rewind assertions), just their
materializations.

This patch expands the 'db' interface to allow for this split,
and changes transactor's functions to take a crate-private 'action'
which defines desired behaviour.
2018-07-26 17:14:05 -07:00
Grisha Kruglov
5a29efa336 Part 0: Allow retractions of installed attributes
This is necessary for the timelines work ahead. When schema is being
moved off of a main timeline, we need to be able to retract it cleanly.

Retractions are only processed if the whole defining attribute set
is being retracted at once (:db/ident, :db/valueType, :db/cardinality).
2018-07-26 17:14:05 -07:00
Grisha Kruglov
9a47d8905f Pre: 'Into' implementation chaining TermWithoutTempIds -> TermWithTempIds -> TermWithTempIdsAndLookupRefs 2018-07-26 17:14:05 -07:00
Nick Alexander
e6066769ca Pre: Differentiate bad attribute retractions from unrecognized retractions. 2018-07-26 17:14:05 -07:00
Nick Alexander
fba378ee39 [sdks/android] Build Mentat Android SDK in TaskCluster; publish org.mozilla.mentat to nalexander's personal Bintray repo.
I haven't had this reviewed thoroughly, but it mostly works.
2018-07-26 13:12:20 -07:00
Nick Alexander
faef4e9ee8 Bump to version 0.10.0. 2018-07-26 13:09:18 -07:00
Nick Alexander
a8cc9cb70d [sdks/android] Don't strip Mentat library.
Help folks debugging by including symbols in our native libraries.
Yes, this makes the resulting AAR very large.  The Android ecosystem
seems to be in flux around who is in charge of stripping native
binaries, but for now let's provide symbols and see how consumers
react.
2018-07-26 13:06:03 -07:00
Nick Alexander
76d7df5548 [sdks/android] Package JNA using upstream dependency. 2018-07-26 13:05:31 -07:00
Nick Alexander
7e31ca15bc [sdks] Make store_open{_encrypted} return useful errors.
Because this was formerly a constructor, the pattern needed to change
to a factory function, but that's better than what we had.
2018-07-26 13:01:53 -07:00
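The store_open change above reflects a common pattern: fallible initialization is expressed as a factory function returning Result rather than as a constructor, so callers (including FFI callers) get a real error instead of a panic. A minimal sketch with hypothetical types, not the actual SDK bindings:

```rust
#[derive(Debug)]
struct Store {
    path: String,
}

#[derive(Debug)]
enum StoreError {
    InvalidPath(String),
}

impl Store {
    // Factory function: unlike a plain constructor, it can report failure.
    fn open(path: &str) -> Result<Store, StoreError> {
        if path.is_empty() {
            return Err(StoreError::InvalidPath(path.to_string()));
        }
        Ok(Store { path: path.to_string() })
    }
}

fn main() {
    match Store::open("") {
        Ok(store) => println!("opened {:?}", store),
        Err(e) => println!("failed to open store: {:?}", e),
    }
}
```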
Nick Alexander
67a14ca756 [sdks/android] Build Mentat Android SDK in TaskCluster; publish org.mozilla.mentat to nalexander's personal Bintray repo.
The automation parts were cribbed directly from
50add3e176.

The automation permissions were added in
https://bugzilla.mozilla.org/show_bug.cgi?id=1477311.

This uses a very rudimentary Gradle plugin, `rust-android-gradle`,
with custom fixes and extensions.  It works pretty well for what it
is!  See https://github.com/ncalexan/rust-android-gradle.
2018-07-25 20:50:44 -07:00
Nick Alexander
0955c784b7 [sdks/android] Pre: Trim unused Android bits.
We don't use UI libraries, don't require UI resources, and don't
require any permissions.
2018-07-25 20:38:56 -07:00
Nick Alexander
9e6505a930 [sdks/android] Pre: Fix unused warnings. 2018-07-25 20:38:48 -07:00
Victor Porof
89d8ac50a8
Run serde_support tests for the EDN module on CI (#792)
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2018-07-19 19:03:19 +02:00
Victor Porof
2540404b00
Generate rust documentation on CI and publish to gh-pages automatically (#793)
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2018-07-19 18:32:54 +02:00
Nick Alexander
22dad5d6ca [build] Include Gradle wrapper JAR in repository.
Presumably this was an error: `.gitignore` ignores all JAR files, but
this one really needs to be in version control.
2018-07-17 15:30:10 -07:00
Grisha Kruglov
69c9f512a0 Move entid allocation logic into Partition r=nalexander 2018-07-17 06:20:37 -07:00
Grisha Kruglov
6290cc9db2 Enforce partition integrity when setting its index r=nalexander
Timelines work starts to perform modifications on the partitions
that go beyond simple allocations. This change pre-emptively protects
partition integrity by asserting that index modifications are legal.
2018-07-17 06:20:37 -07:00
Nick Alexander
38a92229d7 Pre: Replace PartitionMapping trait with newtype. r=grisha
Generally, I think that Mentat is using too many small traits rather
than wrapping types into newtypes.  Wrapping into newtypes is cheap in
Rust, and it makes it easier to reason about the code.
2018-07-17 06:20:37 -07:00
Grisha Kruglov
675a865896
Extract and improve test macros (#787) r=nalexander
* Part 1: Extract low-level test framework into mentat_db::debug for re-use.

* Part 2: Improve assert_matches!.

This corrects an incorrect pattern: a conversion method taking &self
but returning an owned value should be named like `to_FOO(&self) -> FOO`.  (A
reference-to-reference conversion should be named like `as_FOO(&self)
-> &FOO`.  A consuming conversion should be named like `into_FOO(self)
-> FOO`.)

In addition, this pushes the conversion via `to_edn` into the
`assert_matches!` macro, which lets consumers get a real data
structure (say, `Datoms`) and use it directly before or after
`assert_matches!`.  (Currently, consumers get back `edn::Value`
instances, which aren't nearly as pleasant to use as real data
structures.)

Co-authored-by: Grisha Kruglov <gkruglov@mozilla.com>

* Part 3: Use mentat_db::debug framework in Tolstoy crate.

The advantage of this approach is that compiling Tolstoy (or anything
that's not db, really) can be quite a bit faster than compiling db.
2018-07-16 13:58:34 -07:00
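Part 2 of the commit above leans on Rust's conversion-naming conventions; a minimal sketch of the three shapes it describes, using an illustrative Datoms newtype rather than the real mentat_db type:

```rust
struct Datoms(Vec<String>);

impl Datoms {
    // to_FOO(&self) -> FOO: borrows self, returns an owned value (a clone or conversion).
    fn to_vec(&self) -> Vec<String> {
        self.0.clone()
    }

    // as_FOO(&self) -> &FOO: a cheap reference-to-reference view.
    fn as_slice(&self) -> &[String] {
        &self.0
    }

    // into_FOO(self) -> FOO: consumes self, handing over ownership.
    fn into_vec(self) -> Vec<String> {
        self.0
    }
}

fn main() {
    let d = Datoms(vec!["a".into(), "b".into()]);
    assert_eq!(d.as_slice().len(), 2);
    let copied = d.to_vec();
    let owned = d.into_vec(); // d is moved here
    assert_eq!(copied, owned);
}
```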
Nick Alexander
e9cddd63e4 [tx] Don't treat :db/doc as defining a schema attribute. (#784) r=grisha 2018-07-16 12:20:34 -07:00
Nick Alexander
9291b2a0b0 [tx] Don't treat :db/doc as defining a schema attribute. (#784) 2018-07-13 14:29:29 -07:00
Grisha Kruglov
bff24c60b7
Add a top-level "syncable" feature. (#782) r=ncalexan
* Add a top-level "syncable" feature.

Tested with:

cargo test --all
cargo test --all --no-default-features
cargo build --manifest-path tools/cli/Cargo.toml --no-default-features
cargo run --manifest-path tools/cli/Cargo.toml --no-default-features debugcli

Co-authored-by: Nick Alexander <nalexander@mozilla.com>

* Add 'syncable' feature to 'db' crate to conditionally derive serialization for Partition*

This is leading up to syncing with partition support.
2018-07-11 16:26:06 -07:00
Nick Alexander
61e6b85e6a Make Partition include end of range and allow_excision flag. r=grisha,nalexander 2018-07-06 16:12:28 -07:00
Nick Alexander
82610f17f8 Part 2: Make partition include an allow_excision flag.
This is leading up to the implementation of
https://github.com/mozilla/mentat/issues/21.
2018-07-06 16:11:42 -07:00
Grisha Kruglov
c0ddc2ca70 Part 1: Make Partition include explicit end range bound.
It's helpful to have the full range when syncing.
2018-07-06 15:23:06 -07:00
Nick Alexander
7d2fe8c625 Remove low-hanging dependency fruit. (#773) r=nalexander 2018-07-06 14:58:06 -07:00
Thom Chiovoloni
0549bbd604 Remove needless num dependency from mentat_core. 2018-07-06 14:56:42 -07:00
Thom Chiovoloni
dcc0770ca4 Remove needless num dependency from mentat_db and optimize remove_every.
This implementation of `remove_every` is O(n) and not O(n^2) like it was before.
2018-07-06 14:56:33 -07:00
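The O(n) vs O(n²) point above is the usual Vec pitfall: calling Vec::remove in a loop shifts the tail on every removal, while a single retain pass drops all matches in one sweep. A rough sketch of the shape of the fix (the real remove_every lives in mentat_db and may differ in detail):

```rust
// O(n^2): each Vec::remove shifts every element after the removed index.
fn remove_every_quadratic<T: PartialEq>(v: &mut Vec<T>, x: &T) -> usize {
    let mut removed = 0;
    let mut i = 0;
    while i < v.len() {
        if &v[i] == x {
            v.remove(i);
            removed += 1;
        } else {
            i += 1;
        }
    }
    removed
}

// O(n): a single retain pass, counting matches by the change in length.
fn remove_every_linear<T: PartialEq>(v: &mut Vec<T>, x: &T) -> usize {
    let before = v.len();
    v.retain(|e| e != x);
    before - v.len()
}

fn main() {
    let mut a = vec![1, 2, 1, 3, 1];
    let mut b = a.clone();
    assert_eq!(remove_every_quadratic(&mut a, &1), 3);
    assert_eq!(remove_every_linear(&mut b, &1), 3);
    assert_eq!(a, b);
}
```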
Thom Chiovoloni
ad2b646700 Remove regex dependency from query_sql. Fixes #771. 2018-07-06 14:56:11 -07:00
Nick Alexander
f65512158b Make kw!(foo.bar/bar.baz) work. 2018-07-06 14:19:50 -07:00
Nick Alexander
07c5d733d6 Bump to version 0.8.0.
We've made many breaking changes, especially to error handling, so
it's time to bump versions.
2018-07-05 16:48:27 -07:00
Nick Alexander
46f7db36c9 Small improvements accumulated while building the logins API on top of Mentat. (#779) r=grisha
These build on #778, and implement a variety of small fixes (related
parts are labelled as such), and one non-trivial part -- matching
tuple results with the `BindingTuple` trait. In practice, this is very
helpful, and greatly streamlined the logins API.
2018-07-05 16:46:02 -07:00
Nick Alexander
2cb7d441dc Part 2: Make it easier to match tuple results.
Right now, we write code like
```rust
match q_once(q, inputs)?.into_tuple()? {
    Some(vs) => match (vs.len(), vs.get(0), vs.get(1)) {
        (2, &Some(Binding::Scalar(TypedValue::Long(a))), &Some(Binding::Scalar(TypedValue::Instant(ref b)))) => Some((a, b.clone())),
        _ => panic!(),
    },
    None => None,
}
```
to length-check tuples coming out of the database.  It can also lead
to a lot of cloning because references are the easiest thing to hand.

This commit allows to write code like
```rust
match q_once(q, inputs)?.into_tuple()? {
    Some((Binding::Scalar(TypedValue::Long(a)), Binding::Scalar(TypedValue::Instant(b)))) => Some((a, b)),
    Some(_) => panic!(),
    None => None,
}
```
which is generally much easier to reason about.
2018-07-05 16:45:42 -07:00
Nick Alexander
e362ca6213 Part 1: Allow to clone useful query structures. 2018-07-05 16:45:42 -07:00
Nick Alexander
2ab481f83e Part 2: Expose time related things at top-level.
Perhaps we actually want to subdivide the top-level namespace so that
there is a `mentat::time` module, but I'd prefer to make part of the
process of fixing the public API as we get ready to christen version
1.0.
2018-07-05 16:45:42 -07:00
Nick Alexander
1c0602fa00 Part 1: Add {From,To}Millis.
I think this is just an oversight.  Generally, we should anticipate what
our consumers need to do to interact with Mentat, and producing milli-
and micro-second timestamps is part of that need.
2018-07-05 16:45:42 -07:00
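A rough sketch of what producing milli- and micro-second timestamps amounts to, assuming chrono DateTime<Utc> instants (the actual {From,To}Millis traits in Mentat may be shaped differently):

```rust
use chrono::{DateTime, TimeZone, Utc};

// Milliseconds since the Unix epoch.
fn to_millis(t: &DateTime<Utc>) -> i64 {
    t.timestamp_millis()
}

// Microseconds since the epoch: whole seconds plus the sub-second part.
fn to_micros(t: &DateTime<Utc>) -> i64 {
    t.timestamp() * 1_000_000 + i64::from(t.timestamp_subsec_micros())
}

fn main() {
    let t = Utc.timestamp_opt(1_531_000_000, 123_456_000).unwrap();
    assert_eq!(to_millis(&t), 1_531_000_000_123);
    assert_eq!(to_micros(&t), 1_531_000_000_123_456);
}
```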
Nick Alexander
3744982cd9 Add last_tx_id. 2018-07-05 16:45:42 -07:00
Nick Alexander
b9f3681728 Part 2: Allow to Deref StructuredMap to the underlying IndexMap.
Again, this is a fundamental Rust pattern for newtypes.  It's awfully
hard to actually use `StructuredMap` without it!
2018-07-05 16:45:42 -07:00
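The Deref-to-inner pattern named above (and used again for InternSet a few commits further down), sketched with a stand-in newtype; BTreeMap stands in for the wrapped IndexMap so the example has no external dependencies:

```rust
use std::collections::BTreeMap;
use std::ops::{Deref, DerefMut};

// A newtype wrapper around a map, in the spirit of StructuredMap.
struct StructuredMap(BTreeMap<String, i64>);

impl Deref for StructuredMap {
    type Target = BTreeMap<String, i64>;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

impl DerefMut for StructuredMap {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.0
    }
}

fn main() {
    let mut m = StructuredMap(BTreeMap::new());
    m.insert("a".to_string(), 1); // DerefMut lets mutating map methods be called directly
    assert_eq!(m.get("a"), Some(&1)); // Deref covers the read-only methods
    assert_eq!(m.len(), 1);
}
```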
Nick Alexander
d49f702512 Part 1: Expand Binding::val() into Binding::{into_*,as_*}.
This is simply for completeness: we should provide fundamental
conversion patterns even when they are mostly unused in our code base.
2018-07-05 16:45:42 -07:00
Nick Alexander
99deb87b9f Build Entity instances, not Term* instances. Fixes #674. (#778) r=grisha 2018-07-05 16:42:02 -07:00
Nick Alexander
eb1df31ac4 Part 7: Improve TermBuilder interface; expose lookup refs and tx functions.
These are functions on `TermBuilder` itself to prevent mixing mutable
and immutable references in the most natural style.  That is,
```
builder.add(e, a, builder.lookup_ref(...))
```
fails because `add` borrows `builder` mutably and `lookup_ref` borrows
`builder` immutably.  There's nothing here that requires a specific
builder (since we're not interning lookup refs on the builder, like we
are tempids) so we don't need an instance.
2018-07-05 16:33:51 -07:00
Nick Alexander
06056a8468 Part 6: Lift TxReport to core crate.
The `core` create didn't exist when the `db` was started, but this
type is clearly part of the public interface of Mentat.
2018-07-05 16:33:51 -07:00
Nick Alexander
1cb1847aa6 Part 5: Make existing TermBuilder actually build Entity instances.
There are a few tricky details to call out here.  The first is the
`TransactableValueMarker` trait.  This is strictly a marker (like
`Sized`, for example) to give some control over what types can be used
as value types in `Entity` instances.  This expression is needed due
to the network of `Into` and `From` relations between the parts of
valid `Entity` instances.  This allows us to drop the `IntoThing`
work-around trait and use the established patterns.  (Observe that
`KnownEntid` makes this a little harder, due to the cross-crate
consistency restrictions.)

The second is that we can get rid of `{add,retract}_kw`, since the
network of relations expresses the coercions directly.

The third is that this commit doesn't change the name `TermBuilder`,
even though it is now building `Entity` instances.  This is because
there's _already_ an `EntityBuilder` which fixes the `EntityPlace`.
It's not clear whether the existing entity building interface should
be removed or whether both should be renamed.  That can be follow-up.
2018-07-05 16:33:51 -07:00
Nick Alexander
76507623ac Part 4: Prepare EDN Entity type for interning tempids during parsing.
This is all part of moving the entity builder away from building term
instances and toward building entity instances.  One of the nice
things that the existing term interface does is allow consumers to use
lightweight reference counted tempid handles; I don't want to lose
that, so we'll build it into the entity data structures directly.
2018-07-05 11:17:20 -07:00
Nick Alexander
106d6fae11 Part 3: Implement Deref and DerefMut for InternSet.
This pattern is generally how newtype wrappers (like `struct
Foo(Bar)`) are implemented in Rust.
2018-07-05 11:16:55 -07:00
Nick Alexander
02a163a10f Part 2: Use ValueRc in InternSet.
We haven't observed performance issues using `Arc` instead of `Rc`,
and we want to be able to include things that are interned (including,
soon, `TempId` instances) in errors coming out of the
transactor.  (And `Rc` isn't `Sync`, so it can't be included in errors
directly.)
2018-07-05 11:16:53 -07:00
Nick Alexander
87f850a44e Part 1: Move intern_set into edn crate.
It's not great to keep lifting functionality higher and higher up the
crate hierarchy, but we really do want to intern while we parse.
Eventually, I expect that we will split the `edn` crate into `types`
and `parsing`, and the `types` crate can depend on a more efficient
interning dependency.
2018-07-05 11:16:48 -07:00
Nick Alexander
d82c7f8ef2 Cull unused mentat_parser_utils crate.
With the transition toward parsing with `rust-peg` and away from
`combine`, we're not using some of the many helpers we built to
support our unusual `combine` usage.  They can just go!
2018-06-30 16:21:50 -07:00
Nick Alexander
8725bad18c Pre: Fix error printing rusqlite::Error. 2018-06-30 14:58:23 -07:00
Emily Toop
da599c3a78 Fix broken documentation links. (#775) (#767) r=nalexander
* Fix broken API doc links

Create symlink for latest to point to v0.7.
Group APIs by top version number rather than by individual version

* Update swift and android version numbers to match Mentat's

* Update documentation

* Update top level .gitignore to ignore docs site & metadata

* Add README to help with building documentation site

* Address review comments @ncalexan
2018-06-29 10:28:44 -07:00
Grisha Kruglov
8af5288a60 Use TolstoyError for tolstoy's Results; wrap tolstoy's dependency errors r=nalexander
This is in line with the rest of mentat, and helps with upcoming tolstoy work.
2018-06-29 00:47:19 -04:00
Nick Alexander
5fe4f12d32 Use concrete Mentat error types rather than failure::Error. (#769) r=grisha
In the language of
868273409c/book/src/error-errorkind.md
Mentat is a mid-level API, not an application, and therefore we should
prefer our own error types.  Writing an initial consumer of Mentat (a
Rust logins API targeting Mozilla Lockbox), I have found this to be
true: my consumer wants to consume concrete Mentat error types.

This doesn't go "all the way" and convert all sub-crates to the
Error/ErrorKind, but it does go part of the way.
2018-06-27 15:30:55 -07:00
Nick Alexander
ae427849d5 Expose sub-crate *Error types at top-level.
We're not exposing a uniform API with `mentat::Result` yet, meaning
that early consumers (e.g., the logins work for Mozilla Lockbox) need
to wrap errors from all over the Mentat crate hierarchy.
2018-06-27 15:05:43 -07:00
Nick Alexander
d31ec28aa8 Patch it all together: use MentatError at top-level.
I elected to keep Tolstoy using `failure::Error`, because Tolstoy
looks rather more like a high-level application (and will continue to
do so for a while) than a production-ready mid- or low-level API.
2018-06-27 15:05:43 -07:00
Nick Alexander
ac1b0b15fe Convert query-translator/ to query-projector's ProjectorError. 2018-06-27 15:05:43 -07:00
Nick Alexander
b2249f189d Convert query-projector/ to ProjectorError. 2018-06-27 15:05:43 -07:00
Nick Alexander
af005a7669 Convert query-algebrizer/ to AlgebrizerError. 2018-06-27 15:05:43 -07:00
Nick Alexander
d6569a6a22 Convert query-pull/ to PullError. 2018-06-27 15:05:43 -07:00
Nick Alexander
0e4991fa26 Make db/ use DbErrorKind. 2018-06-27 15:05:43 -07:00
Thom
72a9b302f9
Rename or delete things so that there is only one type named Entid (#768)
* Delete the (apparently unused) EntId

* Rename edn's Entid to EntidOrIdent to avoid confusion with the Entid that's actually an i64

* Fix travis beta bustage (This is actually unrelated to entids, but is a trivial fix nonetheless)
2018-06-26 16:34:18 -07:00
Thom
f335253d4c
Fix known leaks and memory safety issues in both swift and android SDKs (#745) 2018-06-25 12:10:11 -07:00
Emily Toop
605c3d938c
Remove duplicated header (#764) 2018-06-25 12:11:58 +01:00
Emily Toop
7232e6ef33
Setting baseurl (#763) 2018-06-25 12:01:16 +01:00
Emily Toop
b323448630
Attempting to get minima theme building on github (#762) 2018-06-25 11:56:59 +01:00
Emily Toop
08b83abe21 Set theme jekyll-theme-tactile 2018-06-25 11:21:42 +01:00
Emily Toop
c5180656cc
Mentat documentation website using Jekyll (#754)
Steps to building docs locally:

    1. Install Jekyll
    2. cd docs
    3. bundle exec jekyll serve --incremental
    4. open local docs site at http://127.0.0.1:4000/


* basic Jekyll site

* Add docs to documentation site

* Update javadoc to allow for error free builds

* Remove docs for rust dependencies

* Better display examples, about and contributing documentation for Mentat

* Version docs
2018-06-25 11:20:36 +01:00
Nick Alexander
7f76d53612 Load and save CLI history. (#758, #760) r=grisha 2018-06-22 15:39:46 -07:00
Nick Alexander
4ea9c78c50 [cli] Part 3: {load,save}_history as appropriate.
It's possible that we should be saving more aggressively -- perhaps
after each entered command -- but we can add that later.
2018-06-22 15:39:29 -07:00
Nick Alexander
c41d728d1d [cli] Part 2: Don't use exit() to terminate the CLI.
It's not possible to do meaningful clean-up (such as saving history)
if we use exit() to quit.  Instead, each handled command returns a
boolean requesting exit.  I elected not to allow ".exit" when
processing commands from the command line; it might be useful to
accept that.  In general, though, REPLs that accept "-c
'commands'" on the command line exit after processing those commands,
so I'd rather think more deeply about that model than build in ".exit"
with our existing system.
2018-06-22 15:36:09 -07:00
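A minimal sketch of the control flow the commit above describes: instead of calling `std::process::exit()`, each handled command reports whether the REPL should stop, so cleanup (such as saving history) can run afterwards. The command names and the loop are illustrative, not the CLI's actual code.

```rust
fn handle_command(cmd: &str) -> bool {
    match cmd.trim() {
        ".exit" => true,            // request exit; don't terminate the process here
        other => {
            println!("handled: {}", other);
            false
        }
    }
}

fn main() {
    let commands = [".help", "[:find ?x :where [?x :db/ident]]", ".exit"];
    for cmd in &commands {
        if handle_command(cmd) {
            break;
        }
    }
    // Meaningful cleanup can happen here; exit() would have skipped it.
    println!("saving history before quitting");
}
```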
Nick Alexander
c19337c8bf [cli] Part 1: Bump linefeed; use linefeed::Interface; add "--no-tty" argument.
I don't really understand why we were using `linefeed::Reader`
directly, but reading is not the full set of linefeed features we want
to access.  I think the `linefeed::Interface` should be owned by the
`Repl`, not the `InputReader`, but it's a little awkward to share
access with that configuration, so I'm not going to lift the ownership
until I have a reason to.

I think the "--no-tty" argument might be useful for running inside
Emacs.  Along the way, I made read_stdin() strip the trailing newline,
which agrees with InputReader::read_line().
2018-06-22 15:36:09 -07:00
Nick Alexander
8e2d795778
[cli] Handle line comments in EDN input. (#759) (#761) r=grisha
What was happening is that ["[;", "]"] would get glued to "[; ]",
which of course can never complete.

It would be good to add tests of this, but the existing multi-line
`InputReader` makes that challenging and I don't want to invest the
time to improve it: I expect it to be overhauled as part of a
transition away from parsing with `combine` and toward parsing with
`rust-peg`.
2018-06-22 14:34:16 -07:00
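The gluing bug the commit above describes can be seen with plain string joining; this sketch is not the `InputReader` code, just the effect it had on a line comment followed by a closing bracket.

```rust
fn main() {
    let lines = ["[;", "]"];
    let glued = lines.join(" ");   // "[; ]"  -- the ';' comments out the ']' forever
    let joined = lines.join("\n"); // "[;\n]" -- the ']' survives on its own line
    assert_eq!(glued, "[; ]");
    assert_eq!(joined, "[;\n]");
}
```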
Nick Alexander
60a57ea493 Use failure instead of error_chain. (#586) r=nalexander 2018-06-20 14:56:08 -07:00
Nick Alexander
3760f84da8 Post: Fix comment referring to error-chain. 2018-06-20 14:42:39 -07:00
Grisha Kruglov
4e46adeba1 Convert tolstoy/ to failure. 2018-06-20 14:42:36 -07:00
Grisha Kruglov
31de5be64f Convert db/ to failure. 2018-06-20 14:42:34 -07:00
Grisha Kruglov
0adfa6aae6 Convert tools/cli to failure. 2018-06-20 14:42:30 -07:00
Grisha Kruglov
800f404a23 Convert ffi/ to failure.
This is neat, because currently at the FFI boundary we're primarily concerned
with verbalizing our errors. It doesn't matter what 'error' that's wrapped by
Result is then, as long as it can be displayed.

Once we're past the prototyping stage, it might be a good idea to formalize this.
2018-06-20 14:42:21 -07:00
Grisha Kruglov
4e01929334 Convert src/ to failure. 2018-06-20 14:42:18 -07:00
Grisha Kruglov
836fdb3a35 Convert query_translator/ to failure. 2018-06-20 14:42:14 -07:00
Grisha Kruglov
c075434f84 Convert query_projector/ to failure. 2018-06-20 14:42:10 -07:00
Grisha Kruglov
061967f268 Convert query-pull/ to failure. 2018-06-20 14:42:05 -07:00
Grisha Kruglov
326fe881a0 Convert query-algebrizer/ to failure. 2018-06-20 14:41:59 -07:00
Grisha Kruglov
ce3ce1ccbf Convert sql/ and query-sql/ to failure.
sql/query-sql
2018-06-20 14:41:52 -07:00
Mario Wenzel
fb7d2357de Don't try to match "key" when not using sqlcipher. (#752, #753) r=nalexander
This causes a runtime error, since `opt_str("key")` isn't recognized.
2018-06-20 13:26:36 -07:00
Emily Toop
aae50f40ac Set theme jekyll-theme-tactile 2018-06-20 15:59:50 +01:00
Emily Toop
4282b2d332 Set theme jekyll-theme-cayman 2018-06-20 15:53:34 +01:00
Nick Alexander
91fa34e462
[website] Create placeholder index.html. 2018-06-19 11:39:08 -07:00
Thom
88c6a4b05c
Fix typo nit accidentally missed in #743 (#744) 2018-06-14 13:30:03 -07:00
Thom
87fb505c56
Make travis test sqlcipher by running the tests on macos. Fixes #738 (#743) 2018-06-14 13:23:17 -07:00
Thom
54d592df29
Merge pull request #742 from thomcc/fix-bustage 2018-06-14 07:58:09 -07:00
Thom Chiovoloni
99a73ccb03 Avoid using 1.26.0-only features when using sqlcipher, and move the sqlcipher Store support to the correct file 2018-06-13 22:48:28 -07:00
Emily Toop
8e918949fb Separate Store from Conn.
This is a Pre: part extracted from #660.
2018-06-13 15:29:11 -07:00
Nick Alexander
f041dfe509 Bustage fix: Build against Rust 1.25. 2018-06-13 15:28:44 -07:00
Nick Alexander
8cc0e5a64e Make Travis test against multiple Rust versions, including Rust 1.25. 2018-06-13 15:28:44 -07:00
Thom
6a1a265894
Add support for using sqlcipher (#737). Fixes #118 2018-06-13 08:49:40 -07:00
Emily Toop
c0d4568970 Support consumption by Carthage; bump iOS minimum version to 11. r=nalexander
Principally, this adds producing libmentat_ffi.a for consumption by Carthage.

To achieve this, we disable bitcode and bump the minimum iOS version
to 11.  In addition, we make things open and public so that consumers
can, well, consume.
2018-06-08 13:53:36 -07:00
Nick Alexander
3d5ae797b2 Parse queries with rust-peg. r=rnewman,grisha (#728) 2018-06-04 15:46:13 -07:00
Nick Alexander
cfed968514 Review comments. 2018-06-04 15:21:27 -07:00
Nick Alexander
e68cc4016c Part 7: Remove tx entirely.
This was left over from #681.
2018-06-04 15:04:39 -07:00
Nick Alexander
d4166cc67c Part 6: Remove query-parser entirely. 2018-06-04 15:04:39 -07:00
Nick Alexander
47441f56dc Part 5: Push FindQuery into query-algebrizer; structure errors.
This is a big deck-chair re-arrangement.  This puts FindQuery into
query-algebrizer and puts the validation from ParsedFindQuery ->
FindQuery there as well.

Some tests were re-homed for this.

In addition, the little-used maplit crate dependency was replaced with
inline expressions.
2018-06-04 15:04:39 -07:00
Nick Alexander
09f1d633b5 Part 4: Parse queries with rust-peg.
There's an unfortunate conflation here between implementing the query
parser in `rust-peg` and moving some validation that now happens at
parse time to happen later.  The result is that we introduce
`ParsedFindQuery` as a less-processed `FindQuery`, and that we only
use string errors (which is all `rust-peg` supports) instead of the
structured errors in query-parser's errors module.  The next commit
will address this, on the road to removing the `query-parser` module
entirely.
2018-06-04 15:04:39 -07:00
Nick Alexander
a8073056f2 Part 3: Move query into edn.
It's unfortunate to squash two crates together like this, but it's the
best option.
2018-06-04 15:04:37 -07:00
Nick Alexander
a4a8892309 Part 3a: Move file to preserve blame. 2018-06-04 14:56:56 -07:00
Nick Alexander
1d8d94f887 Part 2: Turn (type-function ?var) into (type ?var type-keyword).
This is more general (the parser doesn't encode the set of known
types), and avoids a dependency on `ValueType`.
2018-06-04 14:52:51 -07:00
Nick Alexander
ad9a1394a3 Part 1: Push ValueRc and friends into edn crate.
This is a pre-requisite for moving the existing `combine`-based parser
to use `rust-peg` -- part of the push to use `rust-peg` for all parsing
started in https://github.com/mozilla/mentat/pull/681.  We need the
types for the parsed structure "very early", and the `edn` crate is
the earliest such crate.

This is an unfortunate destruction of boundaries between parts of the
system, but it's the best way we have to achieve this right now.
2018-06-04 14:52:51 -07:00
Nick Alexander
f1fc9f1846 Part 0: Extract query-parser errors. 2018-06-04 14:52:51 -07:00
Nick Alexander
3cc8b4fd24 Pre: Prefer [(pred ...)] to [[pred ...]] syntax.
This is a style choice.  We supported both, perhaps for Datomic
compliance, but it's not the standard we use in our code base.  In
addition, it doesn't read like lisp (which is what EDN is copying),
since [] is not function application in most lisps.

It's also a convenience: I don't want to parse brackets that have to
agree with `rust-peg`.  It's not hard but it's also not worth doing.
2018-06-04 14:52:51 -07:00
Nick Alexander
47a0f40cce Pre: Fix warnings. 2018-06-04 14:52:51 -07:00
Nick Alexander
729fe59578 [edn] Pre: Rename keyword to namespaced_keyword.
The `Keyword` type evolved to become more general: we now use the one
type for both :regular and :name/spaced keywords.  This change
reflects the new generality.
2018-06-04 14:52:51 -07:00
Nick Alexander
d8d18a1731
[query] Handle SQL NULL for aggregates over 0 rows. (#684) (#688) r=rnewman
This uses a `SELECT *` from an inner subselect to filter potentially `NULL` aggregates.

The alternative is to handle `NULL` values throughout the projector, which is simple but loses a valuable invariant: Mentat SQL queries produce values that are not `NULL`.
2018-06-01 14:17:31 -07:00
Grisha Kruglov
2a025916fe
Android SDK basic sample project and symlinked SDK Mentat binaries (#729) r=nalexander
* Add an IntelliJ section to gitignore
* Add Android SDK sample project which exercises mentat SDK
* Symlink libmentat_ffi.so in Android SDK to the generated --release files
* README files for Android SDK and mentat_ffi
2018-06-01 12:44:31 -07:00
Grisha Kruglov
93b7d25446
Android build script which supports target specification (#727) r=self 2018-05-31 12:25:24 -07:00
Grisha Kruglov
250e35b726
Gradle support for publishing to bintray (#720) r=ncalexan
* Rename SDK package name from com.mozilla.mentat to org.mozilla.mentat
* Gradle configuration for publishing to a bintray repository
2018-05-30 13:38:45 -07:00
Grisha Kruglov
b0421c61b4
Min SDK 16, bump dependency versions, update gradle & wrapper, fix linter error (#717) r= fluffyemily
Bump versions, update gradle wrapper, fix linter error (log tag too long)
2018-05-29 10:16:32 -07:00
Richard Newman
01db9232b4
Include namespace-separating solidus in NamespaceableName; improve type handling around ground (#713) r=nalexander
* Include the namespace-separating solidus in NamespaceableName.
* Use type annotations when deciding how to process ambiguous ground input.
* Include simple patterns in the type extraction phase of pattern application. (#705)
* Review comment.
* Add a test.
2018-05-29 16:45:53 +02:00
Chris Foster
e4447927c7 Update README.md
Grammar -- the subjunctive is appropriate here.
2018-05-24 10:00:01 -07:00
Emily Toop
6121da3592 Address review comments @nalexander 2018-05-15 15:39:15 +01:00
Emily Toop
2c0f755632 Address review comments @nalexander 2018-05-15 15:39:15 +01:00
Emily Toop
35467c1b24 Wrap caching FFI functions in Android Java library.
`CacheDirection` enum is used only on the Android side to provide a usable interface. FFI calls are more explicit.

Tests ensure that a cached query is faster than the uncached one.
2018-05-15 15:39:15 +01:00
Emily Toop
8add073001 Wrap caching FFI functions in Swift library.
`CacheDirection` enum is used only on the Swift side to provide a usable interface. FFI calls are more explicit.

Tests ensure that a cached query is faster than the uncached one.
2018-05-15 15:39:15 +01:00
Emily Toop
b4b558e196 Expose cache over the FFI.
This exposes an FFI function for each direction of caching, `Forward`, `Reverse` and `Both`. This is to make it as clear as possible to consumers which direction they are caching their attributes in. The original implementation exposed the `CacheDirection` enum over FFI and it made mistakes very easy to make. This is more explicit and therefore less prone to error.
2018-05-15 15:39:15 +01:00
Emily Toop
e0cd9b6b20 Implement InProgress transactions and InProgress and Entity builders on Android.
Rename some of the functions in TypedValue, TupleResult and QueryBuilder to make them more Javay and less Swifty
2018-05-15 15:39:15 +01:00
Emily Toop
38c1a93712 Implement InProgress transactions and InProgress and Entity builders on iOS 2018-05-15 15:39:15 +01:00
Emily Toop
ed5427253b Expose InProgress, InProgressBuilder and EntityBuilder over the FFI.
There are two ways to create each builder, directly from a `Store` or from an `InProgress`. Creating from `Store` will perform two actions, creating a new `InProgress` and then returning a builder from that `InProgress`. In the case of `store_entity_builder_with_entid` and `store_entity_builder_from_tempid`, the function goes a step further and calls `describe` or `describe_tempid` from the created `InProgressBuilder` and returning the `EntityBuilder` that results. These two functions are replicated on `InProgress`. This has been done to reduce the overhead of objects being passed over the FFI boundary.

The decision to do this enables us to go from something like

```
in_progress  = store_begin_transaction(store);
builder = in_progress_builder(in_progress);
entity_builder = in_progress_builder_describe(builder, entid);
```
to
```
entity_builder = store_entity_builder_from_entid(store);
```

There is an `add_*` and `retract_*` function specified for each `TypedValue` type for both `InProgressBuilder` and `EntityBuilder`.

To enable `transact` on `EntityBuilder` and `InProgressBuilder`, a new `repr(C)` struct has been created that contains a pointer to an `InProgress` and a pointer to a `Result<TxReport>` to allow passing the tuple result returned from `transact` on those types over the FFI.

Commit is possible from both builders and `InProgress`.
2018-05-15 15:39:15 +01:00
Richard Newman
b2e98f44f6
Generalize Entity by value type. (#701) (#691) r=rnewman
* Part 3: Parameterize Entity by value type.

This isn't quite right, because after parsing, we shouldn't care
about `edn::ValueAndSpan`; we should care only about `edn::Value`.
However, I think we can drop `ValueAndSpan` entirely if we just use
`rust-peg` (and its simpler error messages) rather than a mix of
`rust-peg` and `combine`.

In any case, this paves the way to transacting `Entity<TypedValue>`,
which is a nice step towards building general entities.

* Part 1: Add AttributePlace.

* Part 2: Name other places EntityPlace and ValuePlace.

Now we're consistent and closer to self-documenting.  Both matter more
as we expose `Entity` as the thing to build for programmatic usage.

* Part 4: Allow Ident and TempId in ValuePlace.

The parser will never produce these, since determining whether an
integer/keyword or string is an ident or a tempid, respectively, in
the value place requires the schema.

But a builder that produces `Entity` instances directly will want to
produce these.
2018-05-15 00:43:07 -07:00
Nick Alexander
46c2a0801f Add type checking and constraint checking to the transactor. (#663, #532, #679)
This should address #663, by re-inserting type checking in the
transactor stack after the entry point used by the term builder.

Before this commit, we were using an SQLite UNIQUE index to assert
that no `[e a]` pair, with `a` a cardinality one attribute, was
asserted more than once.  However, that's not in line with Datomic,
which treats transaction inputs as a set and allows a single datom
like `[e a v]` to appear multiple times.  It's both awkward and not
particularly efficient to look for _distinct_ repetitions in SQL, so
we accept some runtime cost in order to check for repetitions in the
transactor.  This will allow us to address #532, which is really about
whether we treat inputs as sets.  A side benefit is that we can
provide more helpful error messages when the transactor does detect
that the input truly violates the cardinality constraints of the
schema.

This commit builds a trie while error checking and collecting final
terms, which should be fairly efficient.  It also allows a simpler
expression of input-provided :db/txInstant datoms, which in turn
uncovered a small issue with the transaction watcher, whereby the
watcher would not see non-input-provided :db/txInstant datoms.

This transition to Datomic-like input-as-set semantics allows us to
address #532.  Previously, two tempids that upserted to the same entid
would produce duplicate datoms, and that would have been rejected by
the transactor -- correctly, since we did not allow duplicate datoms
under the input-as-list semantics.  With input-as-set semantics,
duplicate datoms are allowed; and that means that we must allow
tempids to be equivalent, i.e., to resolve to the same tempid.

To achieve this, we:
- index the set of tempids
- identify tempid indices that share an upsert
- map tempids to a dense set of contiguous integer labels

We use the well-known union-find algorithm, as implemented by
petgraph, to efficiently manage the set of equivalent tempids.

Along the way, I've fixed and added tests for two small errors in the
transactor.  First, don't drop datoms resolved by upsert (#679).
Second, ensure that complex upserts are allocated.

I don't know quite what happened here.  The Clojure implementation
correctly kept complex upserts that hadn't resolved as complex
upserts (see
9a9dfb502a/src/common/datomish/transact.cljc (L436))
and then allocated complex upserts if they didn't resolve (see
9a9dfb502a/src/common/datomish/transact.cljc (L509)).

Based on the code comments, I think the Rust implementation must have
incorrectly tried to optimize by handling all complex upserts in at
most a single generation of evolution, and that's just not correct.
We're effectively implementing a topological sort, using very specific
domain knowledge, and it's not true that a node in a topological sort
can be considered only once!
2018-05-14 15:22:45 -07:00
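A minimal sketch of the union-find step the commit above describes, using petgraph's `UnionFind` to label equivalent tempids. The tempid names, the surrounding types, and the pinned version are illustrative; only the technique (dense indexing plus union-find from petgraph) comes from the commit.

```rust
// [dependencies] petgraph = "0.6"   (the version Mentat used at the time differs)
use std::collections::HashMap;

use petgraph::unionfind::UnionFind;

fn main() {
    // 1. Index the set of tempids densely.
    let tempids = ["a", "b", "c", "d"];
    let index: HashMap<&str, usize> =
        tempids.iter().enumerate().map(|(i, t)| (*t, i)).collect();

    // 2. Tempid pairs observed to share an upsert.
    let shared_upserts = [("a", "b"), ("b", "c")];

    // 3. Union-find groups them into equivalence classes.
    let mut uf: UnionFind<usize> = UnionFind::new(tempids.len());
    for (x, y) in &shared_upserts {
        uf.union(index[x], index[y]);
    }

    // 4. Map each tempid to a representative label.
    for t in &tempids {
        println!("{} -> class {}", t, uf.find(index[t]));
    }
    // "a", "b", and "c" share a class; "d" stands alone.
}
```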
Nick Alexander
e5e37178af Pre: Remove ancient Clojure code comments. 2018-05-14 15:22:45 -07:00
Richard Newman
3cba87c74b Allow pull aliases to be non-namespaced. (#694) r=nalexander 2018-05-14 10:45:48 -07:00
Emily Toop
013629dec6
iOS and Android (Java) sdk framework (#643)
Documents the FFI layer for Mentat, and provides transaction functionality via an EDN string. Creates two native libraries for iOS (Swift) and Android (Java) and fully tests the FFI for both platforms.

Closes #619 #614 #611
2018-05-14 16:20:36 +01:00
Richard Newman
60cb5d2432
Pull improvements (#682) r=nalexander
* Parse and handle aliased pull attributes.
* Allow :db/id to be mentioned as a pull attribute.
* Clean up comment.
* Remove unused function.
2018-05-13 14:15:36 -07:00
Nick Alexander
4fde4fe0a6 Bustage fixes: compile on stable; avoid unused variable warning. 2018-05-11 10:22:57 -07:00
Richard Newman
3dc68bcd38 Combine NamespacedKeyword and Keyword. (#689) r=nalexander
* Make properties on NamespacedKeyword/NamespacedSymbol private

* Use only a single String for NamespacedKeyword/NamespacedSymbol

* Review comments.

* Remove unsafe code in namespaced_name.

Benchmarking shows approximately zero change.

* Allow the types of ns and name to differ when constructing a NamespacedName.

* Make symbol namespaces optional.

* Normalize names of keyword/symbol constructors.

This will make the subsequent refactor much less painful.

* Use expect not unwrap.

* Merge Keyword and NamespacedKeyword.
2018-05-11 09:52:17 -07:00
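A plausible reading of the "single String" and "optional namespace" bullets above, sketched with hypothetical field names; this is not Mentat's actual `NamespaceableName`, just the representation those bullets suggest, with the namespace-separating solidus stored in the string itself.

```rust
// Store "namespace/name" in one allocation; `boundary` records where the
// namespace ends, and 0 means the symbol has no namespace at all.
#[derive(Debug, Clone, PartialEq)]
struct NamespaceableName {
    components: String,
    boundary: usize,
}

impl NamespaceableName {
    fn plain(name: &str) -> Self {
        NamespaceableName { components: name.into(), boundary: 0 }
    }

    fn namespaced(ns: &str, name: &str) -> Self {
        NamespaceableName {
            components: format!("{}/{}", ns, name),
            boundary: ns.len(),
        }
    }

    fn namespace(&self) -> Option<&str> {
        if self.boundary == 0 { None } else { Some(&self.components[..self.boundary]) }
    }

    fn name(&self) -> &str {
        if self.boundary == 0 { &self.components } else { &self.components[self.boundary + 1..] }
    }
}

fn main() {
    let kw = NamespaceableName::namespaced("db", "ident");
    assert_eq!(kw.namespace(), Some("db"));
    assert_eq!(kw.name(), "ident");
    assert_eq!(NamespaceableName::plain("regular").namespace(), None);
}
```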
Nick Alexander
c8f74fa41b [edn] Round-trip instants. (#686) (#687) r=rnewman
First, the parser had a small grouping bug whereby it wouldn't parse
Z as timezone correctly.  Second, we weren't printing instants in the format
that we parse.
2018-05-11 02:11:04 -07:00
Thom
37a6f7be28 Use Cell instead of AtomicUsize in RcCounter. (#646) r=rnewman 2018-05-11 02:03:09 -07:00
Nick Alexander
9a4bd0de4f Use rust-peg for tx parsing. r=rnewman 2018-05-10 10:32:27 -07:00
Nick Alexander
7a8c9d90c2 Post: Remove tx-parser crate entirely. 2018-05-10 10:24:05 -07:00
Nick Alexander
cbffe5e545 Use rust-peg for tx parsing.
There are few reasons to do this:

- it's difficult to add symbol interning to combine-based parsers like
  tx-parser -- literally every type changes to reflect the interner,
  and that means every convenience macro we've built needs to change.
  It's trivial to add interning to rust-peg-based parsers.

- combine has rolled forward to 3.2, and I spent a similar amount of
  time investigating how to upgrade tx-parser (to take advantage of
  the new parser! macros in combine that I think are necessary for
  adapting to changing types) as I did just converting to rust-peg.

- it's easy to improve the error messages in rust-peg, whereas I have
  tried twice to improve the nested error messages in combine and am
  stumped.

- it's roughly 4x faster to parse strings directly as opposed to
  edn::ValueAndSpan, and it'll be even better when we intern directly.
2018-05-10 10:24:05 -07:00
Nick Alexander
e437944d94 Pre: Don't use tx-parser for destructuring map notation.
This was always a choice, but we've outgrown it: now we want to accept
value types that don't come from EDN and/or tx-parser.
2018-05-10 10:19:54 -07:00
Nick Alexander
4c4af46315 Add TransactableValue abstracting value places that can be transacted.
This is a stepping stone to transacting entities that are not based on
`edn::ValueAndSpan`.  We need to turn some value places (general) into
entity places (restricted), and those restrictions are captured in
tx-parser right now.  But for `TypedValue` value places, those
restrictions are encoded in the type itself.  This lays the track to
accept other value types in value places, which is good for
programmatic builder interfaces.
2018-05-10 10:19:54 -07:00
Emily Toop
e1e7cbaa44
Closes #634 - Fix variables in predicates (#635) r=rnewman
We were forgetting to check for bound variables when resolving types other than ref types during inequality handling. This patch adds in the binding checks and `bails` if the bound variable is of the wrong type. #634
2018-05-09 16:24:12 +01:00
Richard Newman
e21156a754
Implement simple pull expressions (#638) r=nalexander
* Refactor AttributeCache populator code for use from pull.

* Pre: add to_value_rc to Cloned.

* Pre: add From<StructuredMap> for Binding.

* Pre: clarify Store::open_empty.

* Pre: StructuredMap cleanup.

* Pre: clean up a doc test.

* Split projector crate. Pass schema to projector.

* CLI support for printing bindings.

* Add and use ConjoiningClauses::derive_types_from_find_spec.

* Define pull types.

* Implement pull on top of the attribute cache layer.

* Add pull support to the projector.

* Parse pull expressions.

* Add simple pull support to connection objects.

* Tests for pull.

* Compile with Rust 1.25.

The only choice involved in this commit is that of replacing the
anonymous lifetime '_ with a named lifetime for the cache; since we're
accepting a Known, which includes the cache in question, I think it's
clear that we expect the function to apply to any given cache
lifetime.

* Review comments.

* Bail on unnamed attribute.

* Make assert_parse_failure_contains safe to use.

* Rework query parser to report better errors for pull.

* Test for mixed wildcard and simple attribute.
2018-05-04 12:56:00 -07:00
Nick Alexander
90465ae74a Flip ValueRc to Arc in order to allow TypedValue in errors. (#677) (#678) r=rnewman
@mmacedoeu did a good deal of work to show that Arc instead of Rc
wasn't too difficult in #656, and @rnewman pushed the refactoring
across the line in #659. However, we didn't flip the switch at that
time. For #673, we'd like to include TypedValue instances in errors,
and with error-chain (and failure) error types need to be 'Sync +
'Send, so we need Arc.

This builds on #659 and should also finish #656.
2018-05-03 16:46:49 -07:00
Nick Alexander
1b66818ac9 Post: Fix CLI bustage. 2018-05-01 16:10:06 -07:00
Nick Alexander
9513012aa5 [tx] Fail transactions where complex upserts resolve to multiple entids. (#670) r=rnewman 2018-05-01 15:35:44 -07:00
Nick Alexander
2b82ffb2e5 [tx] Fail transactions where complex upserts resolve to multiple entids. (#670)
This innocuous looking change (upserts_ev -> upserts_e -> resolved in
all situations, rather than upserts_ev -> resolved in some situations)
is a significant change in semantics and assumptions in the
transactor.  Witness the large comment being removed about the same
tempid resolving in different generations!

To support this change, we provide more holistic errors for
conflicting upserts, which entails collecting some (relatively
expensive) diagnostic data.

I left in some debug logging, simply since it shouldn't hurt in
general, and will likely be useful for the next bug we see in the
transactor.
2018-05-01 15:34:44 -07:00
Nick Alexander
7960b4ccd2 Pre: Get ready to use log in mentat_db.
We don't yet have a logging system for production use, but I'd like to
start experimenting with log, which seems to be (close to) a Rust
standard.  We're already using it in mentat_cli.
2018-05-01 13:46:03 -07:00
Nick Alexander
d4a635f4e7 (tx) Replace :db/tx with (current-tx) transaction function and broaden support. (#664) r=rnewman 2018-04-26 19:33:16 -07:00
Nick Alexander
32ed56685e (tx) Replace :db/tx with (transaction-tx) transaction function and broaden support. (#664)
:db/tx (and Datomic's version, :datomic/tx) suffer from the same
ambiguities that [a v] lookup references do -- determining the type of
the result is context sensitive.  (In this case, is :db/tx a reference
to the current transaction ID, or is it a valid keyword?)  This commit
addresses the ambiguity by introducing a notion of transaction
functions, and provides a little scaffolding for adding more (should
the need arise).  I left the scaffolding in place rather than handling
just (transaction-tx) because I started trying to
implement (transaction-instant) as well, which is more difficult --
see the comments.

It's worth noting that this approach generalizes more or less directly
to ?input variables, since those can be eagerly bound like the
implemented transaction function (transaction-tx).
2018-04-26 19:32:14 -07:00
Richard Newman
f979044ba1
Refactor value type boxing. (#659) r=nalexander
* Pre: eliminate some occurrences of Rc, largely through the magic of Into.
* Pre: introduce FromRc to convert between refcounted types.
* Introduce ValueRc as an abstraction over Rc/Arc choice.
* Move Cloned to core.
* Move CString-creation methods to TypedValue.
* Finish transition.
2018-04-25 14:23:27 -07:00
Richard Newman
a2e13f624c
Add 'Binding', a structured value type to return from queries. (#657) r=nalexander
Bump to 0.7: breaking change.
2018-04-24 15:08:38 -07:00
Richard Newman
1818e0b98e Split mentat_core TypedValue code into separate files for clarity. 2018-04-24 15:05:04 -07:00
Richard Newman
a74a2deffc
Introduce RelResult rather than Vec<Vec<TypedValue>>. (#639) r=nalexander
* Pre: clean up core/src/lib.rs.
* Pre: use indexmap 1.0 in db and query-projector.
* Change rel results to be a RelResult instance, not a Vec<Vec<TypedValue>>.

This avoids memory fragmentation and improves locality by using a single
heap-allocated vector for all bindings, rather than a separate
heap-allocated vector for each row.

We hide this abstraction behind the `RelResult` type, which tracks the
stride length (width) of each row.

* Don't allocate temporary vectors when projecting RelResults.
2018-04-24 15:04:00 -07:00
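A minimal sketch of the idea in the commit above: one flat, heap-allocated vector for every binding, with a stride recording the width of each row. The type and method names here are illustrative, not Mentat's exact API.

```rust
#[derive(Debug)]
struct RelResult<T> {
    width: usize,
    values: Vec<T>,
}

impl<T> RelResult<T> {
    // Each row is a `width`-sized window into the single backing vector.
    fn rows(&self) -> impl Iterator<Item = &[T]> {
        self.values.chunks(self.width)
    }
}

fn main() {
    // Two rows of three bindings each, in a single allocation.
    let rel = RelResult { width: 3, values: vec![1, 2, 3, 4, 5, 6] };
    for row in rel.rows() {
        println!("{:?}", row);
    }
}
```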
Nick Alexander
0c31fc7875 (query) Implement tx-log API: (tx-ids ...) and (tx-data ...) functions. r=rnewman 2018-04-19 09:59:05 -07:00
Nick Alexander
c8da4be38f (query) Implement tx-log API: (tx-ids ...) and (tx-data ...) functions.
`tx-ids` allows us to enumerate transaction IDs efficiently.

`tx-data` allows us to extract transaction log data efficiently.

We might eventually allow filtering by impacted attribute sets as well.
2018-04-19 09:58:41 -07:00
Nick Alexander
e532614908 (query) Pre: Model columns that don't have type tags closer to Column. 2018-04-19 09:58:41 -07:00
Nick Alexander
36eca0bfb0 (chore) Pre: Use the same features of uuid throughout the project. 2018-04-19 09:58:41 -07:00
Richard Newman
1f1818448a
Begin adding worked examples. (#629) r=nalexander 2018-04-17 10:39:36 -07:00
Richard Newman
8ca657ec03 Simplify vocabulary migration test.
We don't need to explicitly retract if the transactor will do it
for us -- which it will if an attribute is cardinality-one.
2018-04-12 11:53:19 -07:00
Richard Newman
1509d16c3e
Fix (the ?foo) (#633) r=nalexander
Don't group by ?var when processing (the ?var).

This PR also finishes error generation in the projector.
2018-04-10 11:58:58 -07:00
Richard Newman
909b2a8be5 Refactoring: split up the projector crate. No other code changes. 2018-04-09 10:26:09 -07:00
Richard Newman
39f1d61175 Bump version to 0.6.2. 2018-04-09 09:47:49 -07:00
Richard Newman
3f8464e8ed Implement vocabulary-driven schema upgrades. (#595) r=emily 2018-04-09 09:47:49 -07:00
Richard Newman
a5cda7c3e9 Allow passing a TermBuilder to be transacted by InProgress; add TermBuilder::is_empty. r=emily 2018-04-09 09:47:49 -07:00
Richard Newman
27dde378e0 Allow retraction of some schema attributes. (#379) r=nalexander 2018-04-09 09:47:49 -07:00
Emily Toop
175958e754 Address review comments @rnewman 2018-04-06 10:46:15 +01:00
Emily Toop
19ddf9c384 Spacing 2018-04-06 10:46:15 +01:00
Emily Toop
fa7dd2ceab Add FFI for query building 2018-04-06 10:46:15 +01:00
Emily Toop
9741435026 Add helper functions for FFI. Many of these will go away as we expose the entity builder 2018-04-06 10:46:15 +01:00
Emily Toop
7382e3297d Add QueryBuilder to make querying over FFI easier 2018-04-06 10:46:15 +01:00
Emily Toop
b3e27d86a9 Add converter functions from TypedValue to underlying type 2018-04-06 10:46:15 +01:00
Richard Newman
a57f7aff99
Add specialized tx-before and tx-after predicates. (#599) r=emily 2018-04-05 10:49:06 -07:00
Richard Newman
8607ecb745 Fix merge conflict. 2018-04-03 14:54:46 -07:00
Richard Newman
4d8e179a59
Expose component_attributes on Schema. (#623) r=nalexander
Some parts of the query engine and transactor need to know whether an
attribute is a component attribute, and sometimes want to do so in
a generated SQL query. This is one way to do that.
2018-04-03 14:25:53 -07:00
Richard Newman
558906df4f
Fix: db/component should be db/isComponent. (#624) r=nalexander 2018-04-03 14:25:28 -07:00
Richard Newman
6c54e1d370
Support :db/noHistory for attributes. (#622) r=nalexander
At this point we never discard history, but this completes the API support for doing so.
2018-04-03 14:23:46 -07:00
Richard Newman
9cc5cbf288
Rename the helpful variant, AttributeBuilder::new, to AttributeBuilder::helpful. (#625) r=nalexander 2018-04-03 14:23:20 -07:00
Richard Newman
ca451a7c9c Silence a warning in Tolstoy. 2018-04-03 14:01:29 -07:00
Richard Newman
66b892572c
Don't create a CommandExecutor if there are no observers. (#603) (#604) r=emily
* Don't create a CommandExecutor if there are no observers. (#603)
* Don't log if our executor channel goes away. This is routine.
2018-04-03 09:18:22 -07:00
Emily Toop
9f30fe6295 Create Mentat FFI and expose observers (#574)
* Tidy up and add txid at beginning of transaction

* Add ffi crate and new_store function

* Add register and unregister observer FFI, Store and Conn functions.
Also add android logging facilities

* Add function for fetching entids for attribute strings

* Add functions for iterating through TxReports

* Add sync to ffi boundary

* Move Extern types from submodule to lib in FFI.
For some reason, if these types are in a submodule, even if they are publicly used, the functions inside the FFI are not found in
Android. Works for iOS though. To be investigated later....

* Return to passing TxReports to observer function.
Also, remove some debug

* Expose DateTime and Utc publicly

* Use Store in observer tests
2018-03-20 19:16:32 +00:00
Emily Toop
ab957948b4 Move to using watcher.
Simplify.

This has a watcher collect txid -> AttributeSet mappings each time a
transact occurs. On commit we retrieve those mappings and hand them over
to the observer service, which filters them and packages them up for
dispatch.

Tidy up
2018-03-20 16:27:35 +00:00
Emily Toop
d4365fa4cd Execute commands in a separate thread
Command Queue Executor to watch for new commands and execute them on a longer-running background thread
2018-03-20 16:27:35 +00:00
Emily Toop
ecc4a7a35a Add tests 2018-03-20 16:27:35 +00:00
Emily Toop
c2e5052877 Allow registration and unregistration of transaction observers from Conn 2018-03-20 16:27:35 +00:00
Emily Toop
9f3d2c08b2 Populate changeset of attributes inside TxReport during transact.
Batch up TxReports for entire transaction.
Notify observers about committed transaction.
Store transaction observer service inside Conn
2018-03-20 16:27:35 +00:00
Emily Toop
8d60f2b3d1 Expose tx observation publicly 2018-03-20 16:27:35 +00:00
Emily Toop
dfa0d3e321 Add TxObservers, Commands and TxObservationService.
These are the base types upon which we will build our transaction observation
2018-03-20 16:27:35 +00:00
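A sketch of what observer base types like those named in the commit above might look like: observers registered under a key, each interested in a set of attributes, and notified only with the transaction reports that touch them. Every name and shape here is illustrative, not Mentat's actual definitions.

```rust
use std::collections::{BTreeSet, HashMap};

type Entid = i64;

struct TxReport {
    tx_id: Entid,
    changed_attributes: BTreeSet<Entid>,
}

struct TxObserver {
    attributes: BTreeSet<Entid>,
    callback: Box<dyn Fn(&[&TxReport])>,
}

#[derive(Default)]
struct TxObservationService {
    observers: HashMap<String, TxObserver>,
}

impl TxObservationService {
    fn register(&mut self, key: &str, observer: TxObserver) {
        self.observers.insert(key.to_string(), observer);
    }

    // Hand each observer only the reports touching attributes it cares about.
    fn notify(&self, reports: &[TxReport]) {
        for observer in self.observers.values() {
            let relevant: Vec<&TxReport> = reports
                .iter()
                .filter(|r| !r.changed_attributes.is_disjoint(&observer.attributes))
                .collect();
            if !relevant.is_empty() {
                (observer.callback)(relevant.as_slice());
            }
        }
    }
}

fn main() {
    let mut service = TxObservationService::default();
    service.register("logins", TxObserver {
        attributes: [65].iter().cloned().collect(),
        callback: Box::new(|reports| {
            for r in reports {
                println!("observed tx {}", r.tx_id);
            }
        }),
    });
    service.notify(&[TxReport {
        tx_id: 268435457,
        changed_attributes: [65].iter().cloned().collect(),
    }]);
}
```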
Richard Newman
f71b2b207e Expand the README to give a guide to the crates in the repo. 2018-03-19 14:35:18 -07:00
Richard Newman
16a66517e4
Err, don't panic, on unbound variable (#518) (#590) r=emily
* Pre: switch a 'panic' to an 'unreachable'.
* Make candidate_type_column fallible. (#518)
2018-03-15 11:39:24 -07:00
Richard Newman
994a3e65e2
Tests and fixes for aggregates over different or unknown types. (#588) r=emily 2018-03-15 07:14:06 -07:00
Richard Newman
df58de52f4
Correctly parse and unescape quotes etc. inside EDN strings. (#434) (#589) 2018-03-15 07:13:27 -07:00
Richard Newman
ea52e214af
Small README tweaks. 2018-03-13 04:01:19 +00:00
Richard Newman
833ff92436
Simple aggregates. (#584) r=emily
* Pre: use debugcli in VSCode.
* Pre: wrap subqueries in parentheses in output SQL.
* Pre: add ExistingColumn.

This lets us make reference to columns by name, rather than only
pointing to qualified aliases.

* Pre: add Into for &str to TypedValue.
* Pre: add Store.transact.
* Pre: cleanup.
* Parse and algebrize simple aggregates. (#312)
* Follow-up: print aggregate columns more neatly in the CLI.
* Useful ValueTypeSet helpers.
* Allow for entity inequalities.
* Add 'differ', which is a ref-specialized not-equals.
* Add 'unpermute', a function for getting unique, distinct pairs from bindings.
* Review comments.
* Add 'the' pseudo-aggregation operator.

This allows for a corresponding value to be returned when a query
includes one 'min' or 'max' aggregate.
2018-03-12 15:18:50 -07:00
Richard Newman
46835885e4 Add help for timer command. 2018-03-06 10:49:41 -08:00
Richard Newman
3e615becd8 Fix printing of fractional millisecond timestamps. (#582) r=emily 2018-03-06 10:49:41 -08:00
Richard Newman
1817ce7c0b Performance and cleanup. r=emily
* Use fixed-size arrays for bootstrap datoms, not vecs.
* Wide-ranging cleanup.

    This commit:
    - Deletes some dead code.
    - Marks some functions only used by tests as cfg(test).
    - Adds pub(crate) to a bunch of functions.
    - Cleans up a few other nits.
2018-03-06 09:03:00 -08:00
Richard Newman
a0c70a7cd9 Add caching example to Seattle fixture. 2018-03-06 09:02:11 -08:00
Richard Newman
f42ae35b70 Update cache on write. (#566) r=emily
* Use the cache to make constant queries super fast.
* Fix translate tests to match: we no longer generate SQL for many of them!
* Accumulate additions and removals into the cache.
    * Make attribute cache clone-on-write; store it in Metadata.
    * Allow caching of fulltext attributes, interning strings.
2018-03-06 09:01:20 -08:00
Richard Newman
bead9752bd Add InProgress.import to load transaction data from a file. r=emily
It's not streaming, but it'll do.
2018-03-06 08:59:30 -08:00
Richard Newman
0ca61017a1 Use LTO for release builds. 2018-03-06 08:19:17 -08:00
Richard Newman
4a0b67ab50 Require Rust 1.24 or higher. 2018-03-06 08:19:17 -08:00
Richard Newman
9b23cf3945
Speed up EDN parser (fixes #445) (#581) r=nalexander
Fixes from @kevinmehall.

* Prefer character sets over backtracking in the EDN parser.
* Avoid duplicate effort when parsing floats in the EDN parser.
* Clean up duplicate position tracking code.

This turns out to have little performance impact, but makes the grammar
much cleaner.

* Fix EDN work to pass tests with correct numeric precedence.
2018-03-05 20:33:51 -08:00
Richard Newman
30bf827d16
CLI improvements (#577) r=grisha
* Add a prepared query command to CLI.
* Print nanoseconds in the REPL. This is a good problem to have.
* Better CLI timing.
* Use release for 'cargo cli', debug for 'cargo debugcli'.
* Don't enable debug symbols in release builds.
* Clean up CLI code. Fixed order for help.
* Column-align help output.
2018-03-05 12:52:20 -08:00
Richard Newman
d46535a7c2
When an attribute is known-fulltext, don't hit AllDatoms. (#576) r=nalexander 2018-03-05 10:09:53 -08:00
Grisha Kruglov
36d455150d
Disable TLS support, add links to issues for TODOs (#573) r=grisha/self
Landing this now to unblock Android builds of mentat until the cross-compilation of dependencies is figured out.
2018-02-28 15:54:46 -08:00
Richard Newman
5e50d2a9b4 Update to rusqlite 0.13. 2018-02-22 11:41:57 -08:00
Richard Newman
39cf26aa76 Add import command. (#456) r=emily 2018-02-21 11:51:45 -08:00
Richard Newman
c53da08a00 Add VSCode command to attach to running CLI. 2018-02-21 11:51:45 -08:00
Richard Newman
23ebc3a5fc Add Datomic's Seattle neighborhood test data. 2018-02-21 11:51:45 -08:00
Richard Newman
20d1b45293 Follow-up: replace println_stderr with eprintln. 2018-02-21 11:51:45 -08:00
Richard Newman
54bd883c65 Follow-up: remove logging and such elsewhere in the codebase. 2018-02-21 11:51:45 -08:00
Richard Newman
e33fe71c47 Rework caching and use it inside the query engine. (#553) r=emily
This puts caching in mentat_db, adds a reverse lookup capability for
unique attributes, and populates bidirectional caches with a single
SQL cursor walk.

Differentiate between begin_read and begin_uncached_read.

Note that we still allow toggling within InProgress, because there might be
transient local state that makes starting a new transaction impossible.
2018-02-21 11:51:45 -08:00
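A minimal sketch of the bidirectional cache shape the commit above describes for a unique attribute: a single pass over the datoms populates both the forward (entity to value) and reverse (value to entity) directions, mirroring the single SQL cursor walk. Types and names are illustrative.

```rust
use std::collections::BTreeMap;

type Entid = i64;
type Value = String;

#[derive(Default)]
struct BidirectionalCache {
    forward: BTreeMap<Entid, Value>,
    reverse: BTreeMap<Value, Entid>,
}

impl BidirectionalCache {
    // One walk over (entity, value) rows fills both directions.
    fn populate<I: IntoIterator<Item = (Entid, Value)>>(&mut self, rows: I) {
        for (e, v) in rows {
            self.forward.insert(e, v.clone());
            self.reverse.insert(v, e);
        }
    }
}

fn main() {
    let mut cache = BidirectionalCache::default();
    cache.populate(vec![(65536, "alice@example.com".to_string())]);
    assert_eq!(cache.reverse.get("alice@example.com"), Some(&65536));
    assert_eq!(cache.forward.get(&65536).map(String::as_str), Some("alice@example.com"));
}
```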
Richard Newman
df3cdb5db6 Allow two datoms in the same transaction to have the same fulltext string. (#565) r=emily 2018-02-21 11:51:45 -08:00
Richard Newman
ae91603bd0 Provide an API for creating truly empty stores (#561) r=grisha
* Part 1: split create_current_version.

* Part 2: add Store::create_empty and Conn::empty.

* Part 3 - Expose 'open_empty' command via CLI
2018-02-16 02:01:00 -08:00
Grisha Kruglov
93e5dff9c8
Revised uploader flow (battle-tested); CLI support for sync (#557) r=rnewman 2018-02-16 01:44:28 -08:00
Emily Toop
c84db82575
Merge pull request #560 from mozilla/fluffyemily/entity-builder-update
Add retract_kw
2018-02-15 18:48:39 +00:00
Emily Toop
48ffa20d4c Add retract_kw 2018-02-15 18:47:58 +00:00
Emily Toop
9655c4b85d Add retract to entity builder. (#559) r=rnewman 2018-02-15 09:44:06 -08:00
Richard Newman
01d9b83a9b
Add EntityBuilder.add_kw. (#558) r=emily
* Add EntityBuilder.add_kw.

This allows you to skip your own attribute lookups, at the cost of
potentially doing the work more than once.

Also does value type checking.
2018-02-15 08:22:34 -08:00
Richard Newman
0dcb7df1c7
Timing and colorizing in the CLI. (#552) r=grisha
* Add basic coloring of CLI output.
* Add a timer to the CLI.

Toggle it on or off with 'timer on' and 'timer off'.
Output is colorized.

* Add VSCode configuration files.

These allow you to build and run the CLI, build Mentat, or run all tests.
2018-02-15 07:36:18 -08:00
Emily Toop
dec86bb2c5
publicly expose KnownEntid (#554) 2018-02-14 19:06:50 +00:00
Richard Newman
23e11fabe6
Add a var! macro. (#548)(#550) r=emily 2018-02-14 09:32:37 -08:00
Richard Newman
2ac7a1b1de Add a feature flag to control the use of rusqlite's bundled SQLite. r=emily
You can use this in conjunction with setting SQLITE3_LIB_DIR to control which SQLite is used.

See https://github.com/jgallagher/rusqlite for more.

Also add recent contributors to the authors array.
2018-02-13 08:25:58 -08:00
Grisha Kruglov
84f29676e8
"Unchanged server" uploader flow (#543) r=rnewman
* Remove unused struct from tx_processor

* Derive serialize & deserialize for TypedValue

* First pass of uploader flow + feedback
2018-02-09 09:55:19 -08:00
Richard Newman
d11810dca7 Fix warning in tx.rs. 2018-02-09 08:51:13 -08:00
Kit Cambridge
a6341f6fd6 Implement q_prepare with pre-bound variables. r=rnewman 2018-02-07 21:48:05 -08:00
Emily Toop
715d434945
Create generalized in-memory cache for attributes (#525)
* Nit: Alphabetical ordering of imports

* Create Cache and provide functions for calling it

* Get tests working. Move to using NamespacedKeyword over KnownEntid in function signature

* Add is_cached check to caching tests

* Move lazy and add/remove boolean flags to enums

* Move function definitions into generic trait and implement trait for AttributeCache

* Remove lazy cache and generalize cache

* Update tests

* Eager cache becomes simple key value store. AttributeMap handles attribute storing specifics

* Update tests to test presence of correct values in cache

* Move EagerCache, AttributeValueProvider and ValueProvider into mentat_db

* Add test for get_for_entid

* Add test for lookup attribute

* Make caches cloneable. Add value_for alongside values_for

* Use cache in attribute lookups

* Split test for values and value and add cardinality

* address review feedback r=rnewman
2018-02-07 10:56:12 -08:00
Grisha Kruglov
d848d954cf Issue 508 - Iterating transaction processor r=rnewman
Review comments
2018-02-06 12:24:12 -08:00
Richard Newman
e9ed103723 Revert "Make debug structs and functions non-public. r=grisha"
This reverts commit e817f67470.
2018-02-06 11:06:06 -08:00
Richard Newman
e817f67470 Make debug structs and functions non-public. r=grisha 2018-02-06 10:25:42 -08:00
Richard Newman
66e6fef75e Define Store, use TabWriter in the CLI for aligning columnar output. (#540) r=emily
* Define Store, which is a simple container for a SQLite connection and a Conn.
  This is a breaking change.
* Return the FindSpec as part of QueryOutput, not just results.
* Switch to using stderr in appropriate places in CLI.
* Print columns in CLI output.
2018-02-01 09:29:07 -08:00
Richard Newman
37a7c9ea48 Validate attributes installed after open. (#538) r=emily
Make AttributeBuilder optionally helpful, fix tests.
2018-02-01 09:29:04 -08:00
Richard Newman
2614f498be Ergonomics improvements, including a kw macro. (#537) r=emily
* Add TypedValue::instant(micros).
* Add From<f64> for TypedValue.
* Add lookup_values_for_attribute to Conn.
* Add q_explain to Queryable.
* Expose an iterator over FindSpec's columns.
* Export edn from mentat crate. Export QueryExecutionResult.
* Implement Display for Variable and Element.
* Introduce a `kw` macro.

    This allows you to write:

    ```rust
    kw!(:foo/bar)
    ```

    instead of

    ```rust
    NamespacedKeyword::new("foo", "bar")
    ```

    … and it's more efficient, too.

Add `mentat::open`, eliminate use of `mentat_db` in some places.
2018-02-01 09:27:23 -08:00
Edouard Oger
3bf7459315 Allow customers to assert facts about the current transaction. (#225) r=rnewman
Also move `now` into core, implement microsecond truncation.

This is so we don't return a more granular -- and thus subtly different --
timestamp in a `TxReport` than we put into the store.
2018-01-29 16:46:04 -08:00
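A minimal sketch of the microsecond truncation the commit above mentions, assuming chrono: drop the sub-microsecond part of a timestamp so that what we report matches the microsecond precision actually stored. The helper name is hypothetical.

```rust
// [dependencies] chrono = "0.4"
use chrono::{DateTime, TimeZone, Utc};

// Hypothetical helper: rebuild the timestamp from whole seconds plus whole
// microseconds, discarding any finer-grained nanoseconds.
fn truncate_to_micros(t: DateTime<Utc>) -> DateTime<Utc> {
    let micros = t.timestamp_subsec_micros();
    Utc.timestamp_opt(t.timestamp(), micros * 1_000).unwrap()
}

fn main() {
    let now = Utc::now();
    let truncated = truncate_to_micros(now);
    assert_eq!(truncated.timestamp_subsec_nanos() % 1_000, 0);
    println!("{} -> {}", now, truncated);
}
```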
Thom Chiovoloni
98502eb68f Implement type annotations in queries. (#526) r=rnewman 2018-01-29 14:37:53 -08:00
Richard Newman
ef9f2d9c51 Don't allow violation of cardinality-one restrictions within a single tx. (#531) r=nalexander 2018-01-23 15:11:38 -08:00
Richard Newman
6ed5413cd4 Implement simple vocabulary management. (#504) r=emily,nalexander
Bump version to 0.5.1 to reflect this change.
2018-01-23 08:52:15 -08:00
Richard Newman
812f10b3e4 Add an EntityBuilder abstraction. r=nalexander,emily
This includes two other changes:

* Split transact to expose an interface for TermWithTempIds.
* Return TxReport from each InProgress operation, not from commit.
2018-01-23 08:52:09 -08:00
Richard Newman
3d28949add Describe the default core schema, v1 (:db.schema/core). r=nalexander 2018-01-23 08:51:58 -08:00
Richard Newman
4acc6d0658 InProgressRead, KnownEntid. r=nalexander,emily
Improve naming of read-only transactions.
    Implement entid_for_type.
    Simplify get_attribute.
    Name ignored var in algebrizer.
    Comment attribute_for_ident.
    Make KnownEntid a core concept.
    Expose lookup_value_for_attribute.
    Implement HasSchema and a new query encapsulation on Conn.
    Pre: export Queryable.
2018-01-23 08:40:18 -08:00
Richard Newman
6797a606b5 Preliminary work for vocabulary management. r=emily,nalexander
Pre: export AttributeBuilder from mentat_db.
Pre: fix module-level comment for tx/src/entities.rs.
Pre: rename some `to_` conversions to `into_`.
Pre: make AttributeBuilder::unique less verbose.
Pre: split out a HasSchema trait to abstract over Schema.
Pre: rename SchemaMap/schema_map to AttributeMap/attribute_map.
Pre: TypedValue/NamespacedKeyword conversions.
Pre: turn Unique and ValueType into TypedValue::Keyword.
Pre: export IntoResult.
Pre: export NamespacedKeyword from mentat_core.
Pre: use intern_set in tx.
Pre: add InternSet::len.
Pre: comment gardening.
Pre: remove inaccurate TODO from TxReport comment.
2018-01-23 08:25:32 -08:00
Richard Newman
224570fb45 Switch InProgress to be mutated in place. r=nalexander,emily
This is a breaking change, and involves a very small additional cost
in managing the partition map, but it makes it much more feasible to
implement traits on InProgress: now they don't need to chain back a
new InProgress each time.

Bump version to 0.5 to reflect the change in InProgress.
2018-01-23 08:14:13 -08:00
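The API shift the commit above describes, in spirit: instead of each operation consuming the in-progress transaction and returning a fresh one (which makes implementing traits awkward), operations take `&mut self` and mutate in place. The struct and methods here are illustrative, not Mentat's actual signatures.

```rust
struct InProgress {
    pending: Vec<String>,
}

impl InProgress {
    // Before (conceptually): fn transact(self, tx: &str) -> (InProgress, TxReport)
    // After: mutate in place, so traits over InProgress don't need to chain.
    fn transact(&mut self, tx: &str) {
        self.pending.push(tx.to_string());
    }

    fn commit(self) -> usize {
        self.pending.len()
    }
}

fn main() {
    let mut in_progress = InProgress { pending: Vec::new() };
    in_progress.transact("[{:person/name \"Alice\"}]");
    in_progress.transact("[{:person/name \"Bob\"}]");
    println!("committed {} transactions", in_progress.commit());
}
```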
Fernando Jiménez Moreno
50a9e0c21f Add link to CQRS resources. (#534) r=rnewman
Not everyone knows what CQRS is (I had to Google it) :)
2018-01-23 09:49:49 -06:00
Richard Newman
ab5b67ecf7
Tweak README to address some feedback from zbraniecki. 2018-01-22 17:35:36 +00:00
Emily Toop
8cc7f0b72e
Expose Uuid from the top level crate (#529) 2018-01-22 16:29:25 +00:00
Thom
1fcc3d7e1b
Add support for Ctrl-C aborting the current command on the CLI (#528) r=rnewman 2018-01-21 09:45:40 -05:00
Thom
9740cafdbd Automatically remove trailing whitespace from text files. (#527) r=rnewman
This was done using the following shell script:

```
find . -type f -not -path "*target*" \
       '(' -name '*.rs' -o -name '*.md' -o -name '*.toml' ')' -print0 | \
    xargs -0 sed -i '' -E 's/[[:space:]]*$//'
```

Which is admittedly imperfect, but manages to hit everything that was a problem in this repo.
2018-01-19 21:21:04 -06:00
Thom
023fd9b70b
Add a command to the CLI that displays the SQL and query plan for a query. Fixes #428. (#523) r=rnewman 2018-01-19 19:44:39 -05:00
Richard Newman
ebb77d59bc
Ensure that DateTime values are truncated to microsecond precision. (#522) r=emily 2018-01-18 09:06:23 -06:00
Thom
579829d091 CLI quality-of-life fixes. (#521) r=rnewman
* CLI: Update linefeed library to latest version
* CLI: Don't store incomplete commands in history
* CLI: add curly braces to word separators
* Review comment: clean up CLI add_history.

Signed-off-by: Thom Chiovoloni <tchiovoloni@mozilla.com>
2018-01-17 22:33:22 -06:00
Richard Newman
95e95d735e
Correct an assert relating Datalog projection and SQL column counts. (#519) r=tcsc
* Correct an assert relating Datalog projection and SQL column counts. (#517)
* Fix a comment that shouldn't be a doc comment.
2018-01-17 16:36:15 -06:00
Grisha Kruglov
1589104841
Force SQLite temp files to be stored in memory (#505) r=rnewman (#506)
* Force SQLite temp files to be stored in memory (#505) r=rnewman
2017-12-20 01:52:09 -05:00
Grisha Kruglov
c61bc79b99 Sync metadata schema and SyncMetadataClient. (#502) r=rnewman 2017-12-13 14:19:05 -06:00
Richard Newman
e8ec59e464
Implement a simple direct lookup API. Fixes #111 (#503) r=grisha
* Add some helpers and refactor how queries are run (once).
* Implement lookup_value_for_attribute.
* Add a multi-value test for lookup_value_for_attribute.
2017-12-11 11:08:10 -08:00
Richard Newman
b7fb44a5a6 Add a comment to InProgress. 2017-12-07 12:19:59 -08:00
Richard Newman
a3b8fd3022 Add results type unwrapping helpers. 2017-12-06 14:47:29 -08:00
Richard Newman
75bcb76dd5 Be slightly more specific about mentat_core exporting Uuid. 2017-12-06 12:42:59 -08:00
Richard Newman
1f0c4e3107 Add common From coercions for TypedValue. 2017-12-06 12:29:11 -08:00
Emily Toop
2fc4cb5a2d Clear the CLI buffer when an incorrect command is added. (#500) r=rnewman
* Fix issue whereby when an incorrect command was entered, the buffer wasn't cleared and the next command was appended on the end of the incorrect one.

* Cleanup.
2017-12-05 14:10:10 -08:00
Richard Newman
95b9c7f7f5
Atomic multi-tx (#489). r=emily,nalexander
* Pre: rename begin_transaction to begin_tx_application.

* Take an EXCLUSIVE transaction when bootstrapping, and an IMMEDIATE transaction when writing.

This avoids the remote possibility of another write sneaking in the door
while we're preparing to write, avoids us needing to upgrade locks, etc.

  After a BEGIN IMMEDIATE, no other database connection will be able to write
  to the database or do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes
  can continue to read from the database, however.

  An exclusive transaction causes EXCLUSIVE locks to be acquired on all
  databases. After a BEGIN EXCLUSIVE, no other database connection except for
  read_uncommitted connections will be able to read the database and no other
  connection without exception will be able to write the database until the
  transaction is complete.

* Hacky implementation of atomic multi-tx.

* Hold the last report, returning the InProgress from each operation.

* Rewrite transact in terms of InProgress.

* Test rollback.

* Remove unused imports.

* Don't use Rc for transaction reports.

* Pre: break out USER0 as a part boundary constant.

* Export TX0 and USER0 from mentat_db. This is for testing.

* Review comments: commenting.

* Test tempid allocation and rollback.
2017-12-05 07:58:24 -08:00
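A minimal sketch of the locking choice described above, assuming rusqlite: take an IMMEDIATE transaction when we intend to write, so no other writer can sneak in between our reads and our writes. The table and values are illustrative, not Mentat's schema.

```rust
// [dependencies] rusqlite = "0.29"
use rusqlite::{params, Connection, Result, TransactionBehavior};

fn main() -> Result<()> {
    let mut conn = Connection::open_in_memory()?;
    conn.execute("CREATE TABLE datoms (e INTEGER, a INTEGER, v TEXT)", [])?;

    // BEGIN IMMEDIATE: readers may proceed, but no other writer can start.
    let tx = conn.transaction_with_behavior(TransactionBehavior::Immediate)?;
    tx.execute(
        "INSERT INTO datoms (e, a, v) VALUES (?1, ?2, ?3)",
        params![65536, 65, "value"],
    )?;
    tx.commit()?;
    Ok(())
}
```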
Richard Newman
1b72f5bbb6
Example databases (#499). r=emily
* Update fixtures to match the current storage schema.

The existing files were a little misleading. This commit moves them into an 'old'
directory, and creates a new 'v1empty.db'.

* Add empty example Toodle database.

* Rename :item/labels to :item/label in Toodle schema.

* Define 'cargo cli' as an alias to run the Mentat CLI.
2017-12-05 07:42:19 -08:00
Richard Newman
03fee722e9 Add etoop as an author. 2017-12-04 13:10:05 -08:00
Richard Newman
df90c366af
Partial work from simple aggregates work (#497) r=nalexander
* Pre: make FindQuery, FindSpec, and Element non-Clone.
* Pre: make query translator return a Result.
* Pre: make projection return a Result.
* Pre: refactor query parser in preparation for parsing aggregates.
* Pre: rename PredicateFn -> QueryFunction.
* Pre: expose more about bound variables from CC.
* Pre: move ValueTypeSet to core.
2017-11-30 15:02:07 -08:00
Emily Toop
55588209c2
CLI (#493)
* Create mentat command line.
* Create tools directory containing new crate for mentat_cli.
* Add simple cli with mentat prompt.

* Remove rustc-serialize dependency

* Open DB inside CLI (#452) (#463)

* Open named database OR default to in memory database if no name provided

Rearrange workspace to allow import of mentat crate in cli crate

Create store object inside repl when started for connecting to mentat

Use provided DB name to open connection in store

Accept DB name as command line arg.

Open on CLI start

Implement '.open' command to open desired DB from inside CLI

* Implement Close command to close current DB.
* Closes existing open db and opens new in memory db

* Review comment: Use `combine` to parse arguments.

Move over to using Result rather than enums with err

* Accept and parse EDN Query and Transact commands (#453) (#465)

* Parse query and transact commands

* Implement is_complete for transactions and queries

* Improve query parser. Am still not happy with it though.

There must be some way to retain the eof() after the `then` so that I don't have to move the skip on spaces and eof

Make in process command storing clearer.

Add comments around in process commands.
Add alternative commands for transact/t and query/q

* Address review comments r=nalexander.

* Bump rust version number.
* Use `bail` when throwing errors.
* Improve edn parser.
* Remove references to unused `more` flag.
* Improve naming of query and transact commands.

* Send queries and transactions to mentat and output the results (#466)

* Send queries and transactions to mentat and output the results

move outputting query and transaction results out of store and into repl

* Add query and transact commands to help

* Execute queries and transacts passed in at startup

* Address review comments =nalexander.

* Bump rust version number.
* Use `bail` when throwing errors.
* Improve edn parser.
* Remove references to unused `more` flag.
* Improve naming of query and transact commands.

* Execute command line args in order

* Addressing rebase issues

* Exit CLI (#457) (#484) r=rnewman

* Implement exit command for cli tool

* Address review comments r=rnewman

* Include exit commands in help

* Show schema of current DB (#487)

* Fixing rebase issues

* addressing nit

* Match updated dependencies on CLI crate and remove unused import
2017-11-21 16:56:16 +00:00
Richard Newman
c600152d78
Update some dependencies. (#492) r=etoop
* Update some dependencies.

* Update rusqlite to 0.12.

* Update error-chain to a forked version that implements Sync.

* Fix some compiler warnings.

* Remove unused imports in tests.

* Parse errors no longer naturally print with the expected symbol.
2017-11-21 16:24:08 +00:00
Emily Toop
c15973f269 Support tx places in queries (#485) r=rnewman
* Support tx places in queries
2017-06-28 18:20:16 +01:00
Richard Newman
d1ad3c47f7 Follow-up: clean up imports. 2017-06-16 13:32:23 -07:00
Richard Newman
eaf3e7fc4b Extend inequalities to Instants. (#439) r=fluffyemily,nalexander 2017-06-16 11:57:44 -07:00
Richard Newman
ea0e9d4c7b Allow instants to pass through schema validation. (#481) r=fluffyemily
* Allow instants to pass through schema validation.
* Expand cases in SchemaTypeChecking to catch enum bugs.
2017-06-16 09:15:29 -07:00
Richard Newman
aa5f569df5 There are one million microseconds in a second, not one hundred thousand. (#480) r=fluffyemily 2017-06-16 08:00:14 -07:00
Richard Newman
20aa11dcbd Support variable fulltext searches. (#479) r=nalexander 2017-06-15 10:32:46 -07:00
Richard Newman
dd39f6df5b Implement fulltext. (#477) r=nalexander 2017-06-15 10:32:40 -07:00
Richard Newman
3f264e9eb2 Implement fulltext. (#477) r=nalexander
* You can't use fulltext search on a non-fulltext attribute.
* Allow for implicit placeholder bindings in fulltext.
2017-06-15 10:28:11 -07:00
Richard Newman
565a0e9ff9 Implement MATCHES throughout SQL machinery. 2017-06-15 10:28:10 -07:00
Richard Newman
17c59bbff6 Apply newly bound values to existing columns.
This commit lifts some logic out of the scalar ground handler to apply
elsewhere.

When a new value binding is encountered for a variable to which column
bindings have already been established, we do two things:

- We apply a new constraint to the primary column. This ensures that the
  behavior for ground-first and ground-second is equivalent.
- We eliminate any existing column type extraction: it won't be
  necessary now that a constant value and constant type are known.
2017-06-15 10:28:09 -07:00
Richard Newman
f7a3fd5b17 Refactor arg conversion and ground into separate files. 2017-06-15 10:28:07 -07:00
Richard Newman
54bdd382fb Add a test that late inputs aren't allowed in ground. 2017-06-15 10:28:05 -07:00
Richard Newman
c2ec1a6bdf Pre: move Either to mentat_core::util. 2017-06-15 10:28:02 -07:00
Richard Newman
03c0930285 Pre: implement IntoIterator for ValueTypeSet. 2017-06-15 10:27:51 -07:00
Richard Newman
5d5e85bcba Pre: ensure that constant floats end up as floats in SQL, never integers. 2017-06-15 10:27:16 -07:00
Richard Newman
8ec24f01f6 Handle ground. (#469) r=nalexander 2017-06-09 20:20:16 -07:00
Richard Newman
e1e549440f Expand type code when applying ground. (#475) 2017-06-09 20:18:53 -07:00
Nick Alexander
79fa0994b3 Part 3: Handle ground. (#469) r=nalexander,rnewman
This version removes nalexander's lovely matrix code. It turned out
that scalar and tuple bindings are sufficiently different from coll
and rel -- they can directly apply as values in the query -- that
there was no point in jumping through hoops to turn those single
values into a matrix.

Furthermore, I've standardized us on a Vec<TypedValue>
representation for rectangular matrices, which should be much
more efficient, but would have required rewriting that code.

Finally, coll and rel are sufficiently different from each other
-- coll doesn't require processing nested collections -- that
my attempts to share code between them fell somewhat flat. I had
lots of nice ideas about zipping together cycles and such, but
ultimately I ended up with relatively straightforward, if a bit
repetitive, code.

The next commit will demonstrate the value of this work -- tests
that exercised scalar and tuple grounding now collapse down to
the simplest possible SQL.
2017-06-09 20:18:31 -07:00
Nick Alexander
d04d22a6a6 Part 2: refactor projector to be reusable from translator.
This allows the translator to also use bound values in nested queries.
2017-06-09 20:16:39 -07:00
Nick Alexander
b9cbf92205 Part 1: Parse functions in where clauses. 2017-06-09 20:16:39 -07:00
Richard Newman
c6e933c396 Pre: make rule_vars return unique vars. 2017-06-09 20:16:39 -07:00
Richard Newman
d30ad428e8 Pre: take a dependency on maplit to allow BTreeSet literals. 2017-06-09 20:16:39 -07:00
Richard Newman
4a886aae17 Pre: derive Debug. 2017-06-09 20:16:38 -07:00
Richard Newman
9ac2b8c680 Pre: add ConjoiningClauses::known_type_set. 2017-06-09 20:16:38 -07:00
Richard Newman
899e5d0971 Pre: add ConjoiningClauses::bind_value. 2017-06-09 20:16:38 -07:00
Nick Alexander
13e27c83e2 Pre: Modify predicate implementation in preparation for functions that bind. 2017-06-09 20:16:38 -07:00
Nick Alexander
4d2eb7222e Pre: Generalize NonNumericArgument to InvalidArgument. 2017-06-09 20:16:37 -07:00
Nick Alexander
2f38f1e73e Pre: Make it easier to debug binding errors. 2017-06-09 20:16:37 -07:00
Nick Alexander
002c918c96 Pre: Move PushComputed up module hierarchy; make it public. 2017-06-09 20:16:37 -07:00
Richard Newman
70c5bcfa99 Pre: simplify values SQL expansion.
This uses `interpose` instead of manual looping.
2017-06-09 20:16:37 -07:00
Richard Newman
63574af7ac Pre: flatten the representation of VALUES.
A single vec that's traversed in chunks is more efficient than multiple
vecs… and this ensures that each sub-vec is the same size.
2017-06-09 20:16:37 -07:00
Nick Alexander
06bb8e99a7 Pre: Add Values to query-sql. 2017-06-09 20:16:36 -07:00
Nick Alexander
9fe31d443d Pre: Accept EDN vectors in FnArg arguments.
Datomic accepts mostly-arbitrary EDN, and it is actually used: for
example, the following are all valid, and all mean different things:
* `(ground 1 ?x)`
* `(ground [1 2 3] [?x ?y ?z])`
* `(ground [[1 2 3] [4 5 6]] [[?x ?y ?z]])`

We could probably introduce new syntax that expresses these patterns
while avoiding collection arguments, but I don't see one right now.
I've elected to support only vectors for simplicity; I'm hoping to
avoid parsing edn::Value in the query-algebrizer.
2017-06-09 20:16:36 -07:00
Nick Alexander
08534a1a3a Pre: Handle SrcVar. 2017-06-09 20:16:36 -07:00
Richard Newman
a10c6fc67a Pre: make ValueTypeSet Copy, as it only newtypes EnumSet, which is Copy. 2017-06-09 20:16:36 -07:00
Richard Newman
dbbbd220f9 Pre: add helpers to ValueTypeSet. 2017-06-09 20:16:35 -07:00
Richard Newman
9a12ced317 Don't allow callers to specify arbitrary new entity IDs. (#447) r=nalexander
This commit adds a check to the partition map that a provided entity ID
has been mentioned (i.e., is present in the start:index range of one of
our partitions).

We introduce a newtype for known entity IDs, using this internally in
the tx expander to track user-provided entids that have passed the above
check (and IDs that we allocate as part of tempid processing). This
newtype is stripped prior to tx assertion.

In order that DB tests can continue to write

  [:db/add 111 :foo/bar 222]

we add an additional fake partition to our test connections, ranging
from 100 to 1000.
2017-06-09 15:45:26 -07:00
Nick Alexander
5c5818069f Handle :attribute/_reverse in transactor. Fixes #187. r=rnewman 2017-06-08 10:33:09 -07:00
Nick Alexander
c165972684 Post: Reject at parse-time reversed attributes in direct notation with bad values.
This is an optimization that rejects inputs earlier at the cost of
less expressive error messages.  It should be possible to recover the
error messages, however.

This will reject input like `[:db/{add,retract} v :attribute/_reversed NOT-AN-ENTITY]`.
2017-06-08 10:30:31 -07:00
Nick Alexander
59a710f80f Review comments: another test, add unreversed(). 2017-06-08 10:30:31 -07:00
Nick Alexander
eb220528bf Post: Indent. 2017-06-08 10:30:31 -07:00
Nick Alexander
d88823e7c4 Handle :attribute/_reverse in transactor. Fixes #187
There are two broad approaches:

1) Handle reverse attribute notation dynamically, in the style that
   Datomic does.  This is the most flexible, but it's not a good fit
   given that we produce strongly typed output from the parser.
   Strongly typed input to the transactor has had many benefits, so I
   don't want to roll it back for a relatively unimportant feature
   like reverse notation -- especially not since Mentat does not
   require :db.install/_attribute to modify schema attributes.

2) Handle reverse attribute in the parser itself, so that we can
   produce strongly typed parser output while restricting the input.
   I implemented this first and discovered that it's very difficult to
   give sensible error messages in common cases.

In any case, the bulk of the code is the same between the two
approaches, and I wrote the tests for the dynamic version (with error
output), so that's what I'm rolling with.

This patch preserves the existing indentation, to highlight the
differences.  The next patch will indent.
2017-06-08 10:30:31 -07:00
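To illustrate the reverse notation handled above (attribute name hypothetical, following the Datomic-style semantics the commit describes): a reversed attribute swaps the entity and value positions, and the value in the reversed form must itself be an entity.
```edn
;; Hypothetical attribute :person/friend. The reversed form below is
;; equivalent to [:db/add 111 :person/friend 222].
[:db/add 222 :person/_friend 111]
```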
Nick Alexander
0be78cf956 Pre: Extract entity_*_into_term_* helpers. 2017-06-08 10:30:31 -07:00
Nick Alexander
4b0881a957 Pre: Push bookkeeping into an InProcess struct. 2017-06-08 10:30:31 -07:00
Nick Alexander
05129cefbb Pre: Use ValueType rather than Attribute to convert edn::Value to TypedValue.
This is expedient now, but might require work in the future to achieve
better error messages.
2017-06-08 10:30:31 -07:00
Nick Alexander
2650fe163d Pre: Intern lookup_ref by reference. 2017-06-08 10:30:31 -07:00
Nick Alexander
a4fc04ea86 Pre: Crib map_{left,right} for Either. 2017-06-08 10:30:31 -07:00
Richard Newman
634b7a816b Dedupe SQL arguments. (#471) r=nalexander 2017-06-07 11:55:42 -07:00
Richard Newman
2c52346999 Review comment: generalize from Uuid SQL arguments to byte arrays. 2017-06-07 11:55:05 -07:00
Richard Newman
ed04083ceb Dedupe SQL arguments.
This isn't perfect -- we still need to clone in a couple of cases -- but it avoids us
passing duplicate strings down into SQLite whenever the same value is mentioned more
than once in a query.
2017-06-07 11:46:18 -07:00
Richard Newman
7fc0848cb0 Fix typo in README. 2017-06-06 19:01:27 +00:00
Richard Newman
a88375fc15 Update README for master switchover. 2017-06-06 11:10:49 -07:00
Nick Alexander
8a8fcedd1c Parse without copying streams. Fixes #436. #444. r=rnewman
We were accidentally quadratic, copying the tails of owned Vec
instances around.  This brings us down to the expected linear runtime.
2017-05-18 10:20:06 -07:00
Nick Alexander
409a2ea78f Post: Use choice instead of or. 2017-05-18 10:17:13 -07:00
Nick Alexander
d1ac752de6 Parse without copying; parse keyword maps using macros.
This is a big commit, but it breaks into two conceptual pieces.  The
first is to "parse without copying".  We replace a stream of an owned
collection of edn::ValueAndSpan and instead have a stream of a
borrowed collection of &edn::ValueAndSpan references.  (Generally,
this is represented as an iterator over a slice, but it can be over
other things too.)  Cloning such iterators is constant time, which
improves on cloning an owned collection of edn::ValueAndSpan, which is
linear time in the length of the collection and additional time
depending on the complexity of the EDN values.

The second conceptual piece is to parse keyword maps using a special
parser and a macro to build the parser implementations.  Before, we
created a new edn::ValueAndSpan::Map to represent a keyword map in
vector form; since we're working with &edn::ValueAndSpan references
now, we can't create an &edn::ValueAndSpan reference with an
appropriate lifetime.  Therefore we generalize the concept of
iteration slightly and turn keyword maps in map form into linear
iterators by flattening the value maps.  This is a potentially
obscuring transformation, so we have to take care to protect against
some failure cases.  (See the comments and the tests in the code.)

After these changes, parsing using `combine` is linear time (and
reasonably fast).
2017-05-18 10:17:13 -07:00
Nick Alexander
4fa57942d3 Pre: Move macros out of lib.rs.
It seems very subtle to use macros in tests: I needed to separate the
modules in order to control load order to get everything to work.
2017-05-18 10:17:13 -07:00
Richard Newman
953f9f7734 Remove server instructions from README. 2017-05-15 13:05:24 +00:00
Richard Newman
c95ec13ffe Begin moving web server to a separate crate. (#448) r=bgrins
This doesn't yet introduce a working Cargo.toml for 'mentatweb', but it
does allow RLS to build correctly without errors, and it reduces the
core library's dependency space, which is more important in the short
term.
2017-05-10 02:25:59 -07:00
Richard Newman
3d4615fb8c Allow opening a DB. (#462) r=fluffyemily 2017-05-09 09:42:35 -07:00
Richard Newman
1dc8a3eaa0 Add a test for exported symbols. 2017-05-03 15:57:09 -07:00
Richard Newman
059b9d1182 Expose mentat_core::{TypedValue,ValueType} and conn::{Conn,Metadata}. (#443) 2017-05-03 15:49:29 -07:00
Richard Newman
523d5ea5f1 Bump dependency versions. r=bgrins. (#441) 2017-05-03 12:53:16 -07:00
Richard Newman
daca8def57 UUIDs and instants. Fixes #44, #45, #426, #427. (#438) r=nalexander
* Pre: unused import in translate.rs.

* Part 2: take a dependency on rusqlite for query arguments.

* Part 1: flatten V2 schema into V1. Add UUID and URI.

Bump expected ident and bootstrap datom count in tests.

* Part 5: parse edn::Value::Uuid.

* Part 3: extend ValueType and TypedValue to include Uuid.

* Part 4: add Uuid to query arguments.

* Part 6: extend db to support Uuid.

* Part 8: add a tx-parser test for #f NaN and #uuid.

* Part 7: parse and algebrize UUIDs in queries.

* Part 1: parse #inst in EDN and throughout query engine.

* Part 3: handle instants in db.

* Part 2: instants never matches integers in queries.

* Part 4: use DateTime for tx_instants.

* Add a test for adding and querying UUIDs and instants.

* Review comments.
2017-04-28 20:11:55 -07:00
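For reference, `#uuid` and `#inst` are standard EDN tagged literals; a hedged sketch of a transaction exercising the new value types (entity ID and attribute names hypothetical):
```edn
[[:db/add 111 :device/uuid     #uuid "550e8400-e29b-41d4-a716-446655440000"]
 [:db/add 111 :device/lastSeen #inst "2017-04-28T20:11:55.000Z"]]
```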
Emily Toop
bd389d2f0d Parse and Algebrize not & not-join. (#302) (Closes #303, #389, #422 ) r=rnewman
* Part 1 - Parse `not` and `not-join`

* Part 2 - Validate `not` and `not-join` pre-algebrization

* Address review comments rnewman.
* Remove `WhereNotClause` and populate `NotJoin` with `WhereClause`.
* Fix validation for `not` and `not-join`, removing tests that were invalid.
* Address rustification comments.

* Rebase against `rust` branch.

* Part 3 - Add required types for NotJoin.
* Implement `PartialEq` for
`ConjoiningClauses` so `ComputedTable` can be included inside `ColumnConstraint::NotExists`

* Part 4 - Implement `apply_not_join`

* Part 5 - Call `apply_not_join` from inside `apply_clause`

* Part 6 - Translate `not-join` into `NOT EXISTS` SQL

* Address review comments.

* Rename `projected` to `unified` to better describe the fact that we are not projecting any variables.
* Check for presence of each unified var in either `column_bindings` or `input_bindings` and bail if not there.
* Copy over `input_bindings` for each var in `unified`.
* Only copy over the first `column_binding` for each variable in `unified` rather than the whole list.
* Update tests.

* Address review comments.

* Make output from Debug for NotExists more useful

* Clear up misunderstanding. Any single failing clause in the not will cause the entire not to be considered empty

* Address review comments.

* Remove Limit requirement from cc_to_exists.
* Use Entry.or_insert instead of matching on the entry to add to column_bindings.
* Move addition of value_bindings to before apply_clauses on template.
* Tidy up tests with some variable reuse.
* Addressed nits.

* Address review comments.

* Move addition of column_bindings to above apply_clause.
* Update tests.

* Add test to ensure that unbound vars fail

* Improve test for unbound variable to check for correct variable and error

* address nits
2017-04-28 10:44:11 +01:00
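A hedged sketch of the `not-join` form this work parses, algebrizes, and translates to `NOT EXISTS` (attribute names hypothetical): the clause keeps entities that have an email but filters out those that also have a name.
```edn
[:find ?x
 :where
 [?x :person/email _]
 (not-join [?x]
   [?x :person/name _])]
```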
Richard Newman
e64ee5864e Force newline at end of file in VSCode config. 2017-04-26 10:09:50 -07:00
Richard Newman
19fc7cddf1 [query] Widen known_types correctly in complex or. (#424) r=nalexander
* Part 1: define ValueTypeSet.

We're going to use this instead of `HashSet<ValueType>` so that we can clearly express
the empty set and the set of all types, and also to encapsulate a switch to `EnumSet`.

* Part 2: use ValueTypeSet.

* Part 3: fix type expansion.

* Part 4: add a test for type extraction from nested `or`.

* Review comments.

* Review comments: simplify ValueTypeSet.
2017-04-24 14:15:26 -07:00
Richard Newman
bc63744aba Add :limit to queries (#420) r=nalexander
* Pre: put query parts in alphabetical order.
* Pre: rename 'input' to 'query' in translate tests.
* Part 1: parse :limit.
* Part 2: validate and escape variable parameters in SQL.
* Part 3: algebrize and translate limits.
2017-04-19 16:16:19 -07:00
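A minimal sketch of the resulting syntax, assuming a constant limit (attribute name hypothetical); per Part 2, a variable supplied as an input can also serve as the limit, and it is validated and escaped before reaching SQL.
```edn
[:find ?x
 :where [?x :page/visited _]
 :limit 10]
```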
Brian Grinstead
cd860ae68d Add an initial benchmark for the tx-parser crate. (#406) (#413) r=nalexander 2017-04-19 13:54:24 -07:00
Richard Newman
bffefe7e6b Review comments for #418. 2017-04-18 13:50:58 -07:00
Richard Newman
aa14a71019 Parse :in, pass inputs through to querying. (#418) r=nalexander
This commit downgrades error_chain to 0.8.1 in order to fix trait bounds
on errors.
2017-04-18 13:20:00 -07:00
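A hedged sketch of an `:in` clause (attribute name hypothetical): the caller supplies a value for `?email` when running the query, and per the follow-up commit an `UnboundVariables` error is raised if a variable mentioned in `:in` is left unbound.
```edn
[:find ?x
 :in ?email
 :where [?x :person/email ?email]]
```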
Nick Alexander
ff0147e89c Review comments: downgrade to error-chain 0.8.1 for Send + Sync bound; use combine::primitive::Error. 2017-04-18 13:19:50 -07:00
Richard Newman
60c082b61e Part 4: pass inputs through algebrizing and execution. (#418)
This also adds a test that an `UnboundVariables` error is raised if a
variable mentioned in the `:in` clause isn't bound.
2017-04-18 13:19:50 -07:00
Richard Newman
dfc846e483 Part 3: define keep_intersected_keys.
We'll use this to drop unneeded values from input maps, if lazy callers
reuse a general-purpose map for multiple queries.
2017-04-18 13:19:50 -07:00
Richard Newman
651308f721 Part 2: define a type to encapsulate query inputs.
This is for two reasons.

Firstly, we need to track the types of inputs, their values, and also
the input variables; adding a struct gives us a little more clarity.

Secondly, when we come to implement prepared statements, we'll be
algebrizing queries without having the values available. We'll be able
to do a better job of algebrizing, and also do more validating, if we
allow callers to specify the types of variables in advance, even if the
values aren't known.
2017-04-18 13:19:50 -07:00
Richard Newman
a9a82ea1a7 Part 1: parse :in.
We also at this point switch from using `Vec<Variable>` to
`BTreeSet<Variable>`. This allows us to guarantee no duplicates later;
we'll reject duplicates at parse time.
2017-04-18 13:19:50 -07:00
Richard Newman
01af45ab3f Pre: define Display for ValueType. 2017-04-18 13:19:50 -07:00
Richard Newman
8ddbc834ae Pre: take Variables instead of Strings in public API, for now. 2017-04-18 13:19:50 -07:00
Richard Newman
5718ce0155 Pre: add two checks to translate tests to fix unused var warning. 2017-04-18 13:19:50 -07:00
Richard Newman
5cd53aff44 Pre: unused imports. 2017-04-18 13:19:50 -07:00
Brian Grinstead
99b7e89116 Make struct Conn public. (#419) r=rnewman 2017-04-18 11:04:44 -07:00
Richard Newman
35d73d5541 Implement :order. (#415) (#416) r=nalexander
This adds an `:order` keyword to `:find`.

If present, the results of the query will be an ordered set, rather than
an unordered set; rows will appear in an order defined by each
`:order` entry.

Each can be one of three things:

- A var, `?x`, meaning "order by ?x ascending".
- A pair, `(asc ?x)`, meaning "order by ?x ascending".
- A pair, `(desc ?x)`, meaning "order by ?x descending".

Values will be ordered in this sequence for asc, and in reverse for desc:

1. Entity IDs, in ascending numerical order.
2. Booleans, false then true.
3. Timestamps, in ascending numerical order.
4. Longs and doubles, intermixed, in ascending numerical order.
5. Strings, in ascending lexicographic order.
6. Keywords, in ascending lexicographic order, considering the entire
   ns/name pair as a single string separated by '/'.

Subcommits:

Pre: make bound_value public.
Pre: generalize ErrorKind::UnboundVariable for use in order.
Part 1: parse (direction, var) pairs.
Part 2: parse :order clause into FindQuery.
Part 3: include order variables in algebrized query.

We add order variables to :with, so we can reuse its type tag projection
logic, and so that we can phrase ordering in terms of variables rather
than datoms columns.

Part 4: produce SQL for order clauses.
2017-04-17 11:30:31 -07:00
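A hedged illustration of the `:order` syntax described above (attribute name hypothetical): a bare variable means ascending, and `(desc ?when)` sorts newest-first.
```edn
[:find ?x ?when
 :where [?x :event/timestamp ?when]
 :order (desc ?when) ?x]
```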
Richard Newman
64acc6a7ee Support :with (#311) (#414) r=nalexander
* Pre: refactor projector code.
* Part 1: maintain 'with' variables in AlgebrizedQuery.
* Part 2: include necessary 'with' variables in SQL projection list.

The test produces projection elements for `:with`, even though there are
no aggregates in the query. This test will need to be adjusted when we
optimize this away!
2017-04-17 09:23:55 -07:00
Richard Newman
d8f761993d Implement complex or joins. (#410) r=nalexander 2017-04-12 19:23:40 -07:00
Richard Newman
758ab8b476 Part 5: add more tests for complex or. 2017-04-12 19:21:56 -07:00
Richard Newman
bca8b7e322 Part 4: correct projection of type tags in the outermost projector. 2017-04-12 19:21:56 -07:00
Richard Newman
d8075aa07d Part 3: finish expansion and translation of complex or.
This commit turns complex `or` -- `or`s in which not all variables are
unified, or in which not all arms are the same shape -- into a
computed table.

We do this by building a template CC that shares some state with the
destination CC, applying each arm of the `or` to a copy of the template
as if it were a standalone query, then building a projection list and
creating a `ComputedTable::Union`. This is pushed into the destination
CC's `computed_tables` list.

Finally, the variables projected from the UNION are bound in the
destination CC, so that unification occurs, and projection of the
outermost query can use bindings established by the `or-join`.

This commit includes projection of type codes from heterogeneous `UNION`
arms: we compute a list of variables for which a definite type is
unknown in at least one arm, and force all arms to project either a type
tag column or a fixed type. It's important that each branch of a UNION
project the same columns in the same order, hence the projection of
fixed values.

The translator is similarly extended to project the type tag column name
or the known value_type_tag to support this.

Review comment: clarify union type extraction.
2017-04-12 19:21:45 -07:00
Richard Newman
08d2c613a4 Part 2: expand the definition of a table to include computed tables.
This commit:

- Defines a new kind of column, distinct from the eavt columns in
  `DatomsColumn`, to model the rows projected from subqueries. These
  always name one of two things: a variable, or a variable's type tag.
  Naturally the two cases are thus `Variable` and `VariableTypeTag`.
  These are cheap to clone, given that `Variable` is an `Rc<String>`.
- Defines `Column` as a wrapper around `DatomsColumn` and
  `VariableColumn`. Everywhere we used to use `DatomsColumn` we now
  allow `Column`: particularly in constraints and projections.
- Broadens the definition of a table list in the intermediate
  "query-sql" representation to include a SQL UNION. A UNION is
  represented as a list of queries and an alias.
- Implements translation from a `ComputedTable` to the query-sql
  representation. In this commit we only project vars, not type tags.

Review comment: discuss bind_column_to_var for ValueTypeTag.
Review comment: implement From<Vec<T>> for ConsumableVec<T>.
2017-04-12 19:21:33 -07:00
Richard Newman
7948788936 Part 1: define ComputedTable.
Complex `or`s are translated to SQL as a subquery -- in particular, a
subquery that's a UNION. Conceptually, that subquery is a computed
table: `all_datoms` and `datoms` yield rows of e/a/v/tx, and each
computed table yields rows of variable bindings.

The table itself is a type, `ComputedTable`. Its `Union` case contains
everything a subquery needs: a `ConjoiningClauses` and a projection
list, which together allow us to build a SQL subquery, and a list of
variables that need type code extraction. (This is discussed further in
a later commit.)

Naturally we also need a way to refer to columns in a computed table.
We model this by a new enum case in `DatomsTable`, `Computed`, which
maintains an integer value that uniquely identifies a computed table.
2017-04-12 11:13:58 -07:00
Richard Newman
79ccd818f3 Pre: use ..Default approach for use_as_template and make_receptacle.
I decided this was more efficient (no temporary attributes and
mutability) and less confusing.
2017-04-12 11:12:49 -07:00
Richard Newman
98ac559894 Pre: allow initialization of a CC with an arbitrary counter value. Useful for testing. 2017-04-12 11:12:48 -07:00
Richard Newman
33fa1261b8 Pre: clone alias_counter into concretes.
This ensures that concrete CC clones don't have overlapping counts.
2017-04-12 11:11:56 -07:00
Richard Newman
b9f9b4ff58 Pre: make extracted_types pub so the projector and translator can use it. 2017-04-12 11:11:56 -07:00
Richard Newman
e984e02529 Pre: comment RcCounter. 2017-04-12 11:11:54 -07:00
Richard Newman
1636134a72 Algebrize simple or joins. (#304) r=nalexander 2017-04-07 12:47:02 -07:00
Richard Newman
e280811243 Part 7: use RcCounter to implement aliasing in ConjoiningClauses.
This allows us to share a counter between templates produced from a CC.
2017-04-07 12:46:34 -07:00
Richard Newman
2b61944f09 Part 6: track why an empty or-join failed. 2017-04-07 12:46:30 -07:00
Richard Newman
b693385495 Part 5: eliminate is_known_empty in favor of empty_because and an accessor. 2017-04-07 12:46:26 -07:00
Richard Newman
a07efc0a9e Part 4: look up attributes for bound variables when making type determinations. 2017-04-07 12:46:26 -07:00
Richard Newman
72977f52e4 Part 3: reinstate extracted type pruning.
When we started expanding and narrowing type sets, it became impossible
to conclusively know during pattern application whether a type was
known. We now figure that out at the end: if a variable has only a
single known type, we don't need to extract its type tag.
2017-04-07 12:46:26 -07:00
Richard Newman
0639c94468 Part 2: implement simple or. 2017-04-07 12:46:25 -07:00
Richard Newman
9df18e4286 Part 1: implement type narrowing and broadening. 2017-04-07 12:44:03 -07:00
Richard Newman
b117e2c463 Implement a cloneable shared counter. (#407) r=nalexander 2017-04-07 12:43:50 -07:00
Nick Alexander
5369f03464 Improve parsing of nested edn::ValueAndSpan streams. r=rnewman (#393)
* Pre: Expose more in edn.

* Pre: Make it easier to work with ValueAndSpan.

with_spans() is a temporary hack, needed only because I don't care to
parse the bootstrap assertions from text right now.

* Part 1a: Add `value_and_span` for parsing nested `edn::ValueAndSpan` instances.

I wasn't able to abstract over `edn::Value` and `edn::ValueAndSpan`;
there are multiple obstacles.  I chose to roll with
`edn::ValueAndSpan` since it exposes the additional span information
that we will want to form good error messages in the future.

* Part 1b: Add keyword_map() parsing an `edn::Value::Vector` into an `edn::Value::Map`.

* Part 1c: Add `Log`/`.log(...)` for logging parser progress.

This is a terrible hack, but it sure helps to debug complicated nested
parsers.  I don't even know what a principled approach would look
like; since our parser combinators are so frequently expressed in
code, it's hard to imagine a data-driven interpreter that can help
debug things.

* Part 2: Use `value_and_span` apparatus in tx-parser/.

I break an abstraction boundary by returning a value column
`edn::ValueAndSpan` rather than just an `edn::Value`.  That is, the
transaction processor shouldn't care where the `edn::Value` it is
processing arose -- even if we care to track that information we should
bake it into the `Entity` type.  We do this because we need to
dynamically parse the value column to support nested maps, and parsing
requires a full `edn::ValueAndSpan`.  Alternately, we could cheat and
fake the spans when parsing nested maps, but that's potentially
expensive.

* Part 3: Use `value_and_span` apparatus in query-parser/.

* Part 4: Use `value_and_span` apparatus in root crate.

* Review comment: Make Span and SpanPosition Copy.

* Review comment: nits.

* Review comment: Make `or` be `or_exactly`.

I baked the eof checking directly into the parser, rather than using
the skip and eof parsers.  I also took the time to restore some tests
that were mistakenly commented out.

* Review comment: Extract and use def_matches_* macros.

* Review comment: .map() as late as possible.
2017-04-06 10:06:28 -07:00
Richard Newman
a5023c70cb Use Rc for TypedValue, Variable, and query Ident keywords. (#395) r=nalexander
Part 1, core: use Rc for String and Keyword.
Part 2, query: use Rc for Variable.
Part 3, sql: use Rc for args in SQLiteQueryBuilder.
Part 4, query-algebrizer: use Rc.
Part 5, db: use Rc.
Part 6, query-parser: use Rc.
Part 7, query-projector: use Rc.
Part 8, query-translator: use Rc.
Part 9, top level: use Rc.
Part 10: intern Ident and IdentOrKeyword.
2017-04-02 21:38:36 -07:00
Richard Newman
8ae8466cf9 Make InternSet::intern accept Into<Rc<T>>. Add a test. r=nalexander 2017-03-31 09:59:58 -07:00
Richard Newman
92cdd72500 Diagnose and prepare simple and complex or joins. (#396) r=nalexander 2017-03-30 19:14:05 -07:00
Richard Newman
2b2b5cf696 Part 6: implement decision tree for processing simple alternation. 2017-03-30 19:13:40 -07:00
Richard Newman
74f188df9b Part 5b: rename also/instead to add_intersection and add_alternate. 2017-03-30 19:13:20 -07:00
Richard Newman
9e5c735460 Part 5: split cc.rs into a 'clauses' module.
mod.rs defines the module and ConjoiningClauses itself, complete with
methods to record facts and ask it questions.

pattern.rs, predicate.rs, resolve.rs, and or.rs include particular
functionality around accumulating certain kinds of patterns.

Only `or.rs` includes significant new code; the rest is just split.
2017-03-30 19:13:20 -07:00
Richard Newman
72eeedec74 Part 4: add OrJoin::is_fully_unified.
This allows us to tell if all the variables in a valid `or` join are to
be unified, which is necessary for simple joins.
2017-03-30 19:13:20 -07:00
Richard Newman
ce3c4f0dca Part 3: have table_for_places return a Result, not an Option. 2017-03-30 19:13:20 -07:00
Richard Newman
01ca0ae5c1 Part 2: add an EmptyBecause case for fulltext/non-string type mismatch. 2017-03-30 19:13:19 -07:00
Richard Newman
997df0b776 Part 1: introduce ColumnIntersection and ColumnAlternation.
This provides a limited form of OR and AND for column constraints, allowing
simple 'or-join' queries to be expressed on a single table alias.
2017-03-30 19:13:19 -07:00
Richard Newman
460fdac252 Pre: add Variable::from_valid_name, TypedValue::{typed_string,typed_ns_keyword}. 2017-03-30 19:13:19 -07:00
Richard Newman
439f3a2283 Pre: add some 'am I a pattern?' helper predicates to clause types. 2017-03-30 19:06:33 -07:00
Richard Newman
d2e6b767c6 Pre: add mentat_core::utils::{ResultEffect,OptionEffect}. 2017-03-30 19:06:28 -07:00
Richard Newman
95a5326e23 Pre: move EmptyBecause into types.rs. 2017-03-30 18:03:03 -07:00
Emily Toop
8e6f37e709 #260 Convert Schema into edn::Value (#384) r=nalexander, r=rnewman
* Part 1 - Create as_edn_value function.

* Do not include defaults inside output.
* Pretty-printed by default. Do we want to make that a flag?
* Includes simple test just to make sure it works.

* Part 2 - only include ident if available.

* Part 3 - Remove spacing and newlines as unnecessary.

* Update function to build edn::Value directly rather than parsing from string

* Update test to actually test the functionality.

* Address review comments ncalexan.

 * Rename `as_edn_value` to `to_edn_value`.
 * Move `db/src/values.rs` to `core/src/values.rs` so we can reference inside `core/src/lib.rs`.
 * Add `lazy-static` crate to core `Cargo.toml`
 * Expose `values` as a public module from `core`.
 * Update references to values in `db/src/bootstrap.rs` & `db/src/lib.rs`.
 * Add new static vars for `DB_FULLTEXT`, `DB_INDEX` & `DB_IS_COMPONENT`.
 * Use static vars exposed in `values` inside `to_edn_value`.
 * Remove `db/id` as key in attribute output and use `entid` as `db/ident` if no `ident` is found for that `entid`.
 * Update test to match new expected output.

* Add doc comment for function

* Address review comments ncalexan.

* Update function docstring to give clearer description of function.
* Do not add the entid at all to the output.
* Clean up code fetching ident (make it rustier).

* Address review comments rnewman.

* Extract the code for creating `edn::Value`'s for `ValueType` and `Attribute` out to new `to_edn_value` functions.
* Use `map()` to create schema edn value rather than a loop.

* Address review comments rnewman.

* pass cloned instance of ident to `Attribute::get_edn_value`.
* update `use` import for `edn`.
* remove unnecessary call when using ident as key on `associate_ident`.

* Fixed bug whereby we didn't differentiate between `db.unique/value` and `db.unique/identity` when generating `edn::Value`

* Add extra assert at the end to ensure we get the same output when we convert the same schema to edn multiple times

* Move check for type of uniqueness to `match` statement.

* Also use `iter` instead of `into_iter` when iterating schema map.
2017-03-30 11:08:36 +01:00
Emily Toop
b24db01744 Add tests for validate_schema_map (#391) r=rnewman
* Add tests for `validate_schema_map`

* Update test to ensure we get the right error out
2017-03-30 11:07:49 +01:00
Richard Newman
8adb6d97fd Add validation for or-join. r=nalexander 2017-03-27 16:32:45 -07:00
Richard Newman
0d15381e11 Crudely parse or and or-join. (#388) r=nalexander 2017-03-27 16:32:01 -07:00
Nick Alexander
4b874deae1 Lookup refs, nested vector values, map notation. Fixes #180, fixes #183, fixes #284. (#382) r=rnewman
* Pre: Fix error in parser macros.

* Pre: Make test unwrapping more verbose.

* Pre: Make lookup refs be (lookup-ref a v) in the entity position.

This has the advantage of being explicit in all situations and
unambiguous at parse-time.  This choice agrees with the Clojure
implementation but not with Datomic.  Datomic treats [a v] as a lookup
ref, is ambiguous at parse-time, and is disambiguated in ways I do not
understand at transaction time.  We mooted making lookup refs [[a v]]
and outlawing nested value vectors in transactions, but after
implementing that approach I decided it was better to handle lookup
refs at parse time and therefore outlawing nested value vectors is not
necessary.

* Handle lookup refs in the entity and value columns. Fixes #183.

* Pre 0a: Use a stack instead of into_iter.

* Pre 0b: Dedent.

* Pre 0c: Handle `e` after `v`.

This allows to use the original `e` while handling `v`.

* Explode value lists for :db.cardinality/many attributes. Fixes #284.

* Parse and accept map notation. Fixes #180.

* Pre: Modernize add() and retract() into one add_or_retract().

* Pre: Add is_collection and is_atom to edn::Value.

* Pre: Differentiate atoms from lookup-refs in value position.

Initially, I expected to accept arbitrary edn::Value instances in the
value position, and to differentiate in the transactor.  However, the
implementation quickly became a two-stage parser, since we always
wanted to parse the resulting value position into some other known
thing using the tx-parser.  To save calls into the parser and to allow
the parser to move forward with a smaller API surface, I push as much
of this parsing as possible into the initial parse.

* Pre: Modernize entities().

* Pre: Quote edn::Value::Text in Display.

* Review comment: Add and use edn::Value::into_atom.

* Review comment: Use skip(eof()) throughout.

* Review comment: VecDeque instead of Vec.

* Review comment: Part 0: Rename TempId to TempIdHandle.

* Review comment: Part 1: Differentiate internal and external tempids.

This breaks an abstraction boundary by pushing the Internal/External
split up to the Entity level in tx/ and tx-parser/.  This just makes
it easier to explode Entity map notation instances into Entity
instances, taking an existing External tempid :db/id or generating a
new Internal tempid as appropriate.  To do this without breaking the
abstraction boundary would require adding flexibility to the
transaction processor: we'd need to be able to turn Entity instances
into some internal enum and handle the two cases independently.  It
wouldn't be too hard, but this reduces the combinatorial type
explosion.
2017-03-27 16:30:04 -07:00
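A hedged sketch pulling together the notations this lands (attribute names and tempid hypothetical): a `(lookup-ref a v)` in the entity position, map notation with a string tempid, and a nested value vector that is exploded for a `:db.cardinality/many` attribute.
```edn
[[:db/add (lookup-ref :person/email "alice@example.com") :person/nick "alice"]
 {:db/id          "tempid-bob"
  :person/email   "bob@example.com"
  :person/aliases ["bob" "bobby"]}]
```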
Richard Newman
88df7b3b33 Correctly generate DISTINCT and LIMIT. (#386) r=nalexander 2017-03-22 14:02:00 -07:00
Richard Newman
5e971f3b22 Post: simplify type set narrowing. 2017-03-22 11:32:32 -07:00
Richard Newman
cb4ba9e68f Post: reorganize cc.rs. 2017-03-22 11:32:32 -07:00
Richard Newman
7024978517 Track ever-shrinking sets of types for variables, not a single type. (#381) r=nalexander 2017-03-22 11:30:16 -07:00
Richard Newman
97749833d0 Algebrize and translate numeric constraints. (#306) r=nalexander 2017-03-22 10:19:47 -07:00
Richard Newman
d83c8620cd Implement parsing of query predicates. (#380) r=nalexander 2017-03-22 10:19:44 -07:00
Richard Newman
1c4e30a906 Pre: switch to taking Patterns by move, not by reference, when algebrizing. 2017-03-22 10:14:15 -07:00
Richard Newman
f5aa6b2c2c Pre: add mentat_query_algebrizer::errors. 2017-03-22 10:14:15 -07:00
Richard Newman
d8d36140a9 Pre: add tests for CC constraint intersection.
Also add a failing test for #373.
2017-03-22 10:14:15 -07:00
Richard Newman
11a9a30d35 Pre: reformat query parser code. 2017-03-22 10:14:05 -07:00
Richard Newman
fe307f8b7a Pre: remove dead code in cc.rs. 2017-03-22 10:13:58 -07:00
Richard Newman
3d66cb5d0f Pre: move query algebrizer types to their own file. 2017-03-22 10:13:45 -07:00
Nick Alexander
2129514e86 Support transacting :db/fulltext true attributes. Fixes #189. (#375) r=rnewman
These tests are direct translations of the Clojure tests.
2017-03-21 13:12:10 -07:00
Emily Toop
55291b4d30 Check sqlite version. Fixes #366. (#376) r=rnewman
Checks whether current SQLite version is at least the minimum required version and panics if not.
2017-03-21 16:50:31 +00:00
Nick Alexander
15b4195a6e Schema alteration. Fixes #294 and #295. (#370) r=rnewman
* Pre: Don't retract :db/ident in test.

Datomic (and eventually Mentat) don't allow retracting :db/ident in
this way, so this runs afoul of future work to support mutating
metadata.

* Pre: s/VALUETYPE/VALUE_TYPE/.

This is consistent with the capitalization (which is "valueType") and
the other identifier.

* Pre: Remove some single quotes from error output.

* Part 1: Make materialized views be uniform [e a v value_type_tag].

This looks ahead to a time when we could support arbitrary
user-defined materialized views.  For now, the "idents" materialized
view is those datoms of the form [e :db/ident :namespaced/keyword] and
the "schema" materialized view is those datoms of the form [e a v]
where a is in a particular set of attributes that will become clear in
the following commits.

This change is not backwards compatible, so I'm removing the open
current (really, v2) test.  It'll be re-instated when we get to
https://github.com/mozilla/mentat/issues/194.

* Pre: Map TypedValue::Ref to TypedValue::Keyword in debug output.

* Part 3: Separate `schema_to_mutate` from the `schema` used to interpret.

This is just to keep track of the expected changes during
bootstrapping.  I want bootstrap metadata mutations to flow through
the same code path as metadata mutations during regular transactions;
by differentiating the schema used for interpretation from the schema
that will be updated I expect to be able to apply bootstrap metadata
mutations to an empty schema and have things like materialized views
created (using the regular code paths).

This commit has been re-ordered for conceptual clarity, but it won't
compile because it references the metadata module.  It's possible to
make it compile -- the functionality is there in the schema module --
but it's not worth the rebasing effort until after review (and
possibly not even then, since we'll squash down to a single commit to
land).

* Part 2: Maintain entids separately from idents.

In order to support historical idents, we need to distinguish the
"current" map from entid -> ident from the "complete historical" map
ident -> entid.  This is what Datomic does; in Datomic, an ident is
never retracted (although it can be replaced).  This approach is an
important part of allowing multiple consumers to share a schema
fragment as it migrates forward.

This fixes a limitation of the Clojure implementation, which did not
handle historical idents across knowledge base close and re-open.

The "entids" materialized view is naturally a slice of the "datoms"
table.  The "idents" materialized view is a slice of the
"transactions" table.  I hope that representing in this way, and
casting the problem in this light, might generalize to future
materialized views.

* Pre: Add DiffSet.

* Part 4: Collect mutations to a `Schema`.

I haven't taken your review comment about consuming AttributeBuilder
during each fluent function.  If you read my response and still want
this, I'm happy to do it in review.

* Part 5: Handle :db/ident and :db.{install,alter}/attribute.

This "loops" the committed datoms out of the SQL store and back
through the metadata (schema, but in future also partition map)
processor.  The metadata processor updates the schema and produces a
report of what changed; that report is then used to update the SQL
store.  That update includes:
- the materialized views ("entids", "idents", and "schema");
- if needed, a subset of the datoms themselves (as flags change).

I've left a TODO for handling attribute retraction in the cases that
it makes sense.  I expect that to be straight-forward.

* Review comment: Rename DiffSet to AddRetractAlterSet.

Also adds a little more commentary and a simple test.

* Review comment: Use ToIdent trait.

* Review comment: partially revert "Part 2: Maintain entids separately from idents."

This reverts commit 23a91df9c35e14398f2ddbd1ba25315821e67401.

Following our discussion, this removes the "entids" materialized
view.  The next commit will remove historical idents from the "idents"
materialized view.

* Post: Use custom Either rather than std::result::Result.

This is not necessary, but it was suggested that we might be paying an
overhead creating Err instances while using error_chain.  That seems
not to be the case, but this change shows that we don't actually use
any of the Result helper methods, so there's no reason to overload
Result.  This change might avoid some future confusion, so I'm going
to land it anyway.

Signed-off-by: Nick Alexander <nalexander@mozilla.com>

* Review comment: Don't preserve historical idents.

* Review comment: More prepared statements when updating materialized views.

* Post: Test altering :db/cardinality and :db/unique.

These tests fail due to a Datomic limitation, namely that the marker
flag :db.alter/attribute can only be asserted once for an attribute!
That is, [:db.part/db :db.alter/attribute :attribute] will only be
transacted at most once.  Since older versions of Datomic required the
:db.alter/attribute flag, I can only imagine they either never wrote
:db.alter/attribute to the store, or they handled it specially.  I'll
need to remove the marker flag system from Mentat in order to address
this fundamental limitation.

* Post: Remove some more single quotes from error output.

* Post: Add assert_transact! macro to unwrap safely.

I was finding it very difficult to track unwrapping errors while
making changes, due to an underlying Mac OS X symbolication issue that
makes running tests with RUST_BACKTRACE=1 so slow that they all time
out.

* Post: Don't expect or recognize :db.{install,alter}/attribute.

I had this all working... except we will never see a repeated
`[:db.part/db :db.alter/attribute :attribute]` assertion in the store!
That means my approach would let you alter an attribute at most one
time.  It's not worth hacking around this; it's better to just stop
expecting (and recognizing) the marker flags.  (We have all the data
to distinguish the various cases that we need without the marker
flags.)

This brings Mentat in line with the thrust of newer Datomic versions,
but isn't compatible with Datomic, because (if I understand correctly)
Datomic automatically adds :db.{install,alter}/attribute assertions to
transactions.

I haven't purged the corresponding :db/ident and schema fragments just
yet:
- we might want them back
- we might want them in order to upgrade v1 and v2 databases to the
  new on-disk layout we're fleshing out (v3?).

* Post: Don't make :db/unique :db.unique/* imply :db/index true.

This patch avoids a potential bug with the "schema" materialized view.
If :db/unique :db.unique/value implies :db/index true, then what
happens when you _retract_ :db.unique/value?  I think Datomic defines
this in some way, but I really want the "schema" materialized view to
be a slice of "datoms" and not have these sort of ambiguities and
persistent effects.  Therefore, to ensure that we don't retract a
schema characteristic and accidentally change more than we intended
to, this patch stops having any schema characteristic imply any other
schema characteristic(s).  To achieve that, I added an
Option<Unique::{Value,Identity}> type to Attribute; this helps with
this patch, and also looks ahead to when we allow to retract
:db/unique attributes.

* Post: Allow to retract :db/ident.

* Post: Include more details about invalid schema changes.

The tests use strings, so they hide the chained errors which do in
fact provide more detail.

* Review comment: Fix outdated comment.

* Review comment: s/_SET/_SQL_LIST/.

* Review comment: Use a sub-select for checking cardinality.

This might be faster in practice.

* Review comment: Put `attribute::Unique` into its own namespace.
2017-03-20 13:18:59 -07:00
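A hedged sketch of the kind of alteration this enables (attribute name hypothetical; whether the attribute is addressed by ident or entid is an illustrative detail): after an attribute is installed, a later transaction simply asserts the new schema characteristic, and the metadata processor updates the schema and materialized views accordingly.
```edn
;; Widen a previously installed attribute from cardinality-one to cardinality-many.
[[:db/add :person/nickname :db/cardinality :db.cardinality/many]]
```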
Nick Alexander
8beea55e39 Collect tempids after upsert resolution. Fixes #299. (#365) r=rnewman
* Test collecting tempids after upsert resolution. Fixes #299.

I just didn't finish and expose the tempid collection when I
implemented upsert resolution.  Here it is!

* Review comment: Take ownership of temp_id_map; avoid contains_key().
2017-03-20 11:34:38 -07:00
Nick Alexander
1801db2a77 Convert EDN transaction tests to Rust code. Fixes #271. (#364) r=rnewman
* Pre: Order datoms deterministically in debug output.

This makes comparison much easier, and avoids a whole class of
difficult problems when introducing pattern matching with placeholder
values.

* Pre: Don't rewrite ?txN and ?msN in debug module into_edn() methods.

* Convert EDN transaction tests to Rust code. Fixes #271.

This implements
https://github.com/mozilla/mentat/issues/271#issuecomment-283125963.
I'm using the EDN pattern matching functionality
internally (extensively!), but specifically working around the tricky
edges we encountered.  This should let us implement tests quickly (and
hopefully legibly) while not requiring us to encode as much behaviour
into non-standard EDN notations.
2017-03-20 11:29:17 -07:00
Richard Newman
70e5759b5f Ensure that variable bindings are used when selecting a table. r=nalexander,etoop
For queries like

```edn
[:find ?x :where [?x _ "hello"]]
[:find [?v ...] :where [_ ?a ?v]]
```

we'll query `all_datoms` to handle fulltext strings, which is expensive.

If `?a` is bound, we can avoid this — resolve any keyword binding,
ensure that the value is an attribute, and use the appropriate table.
2017-03-14 13:47:22 +00:00
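A hedged sketch of the bound case described above: if `?a` arrives as an input bound to a keyword that resolves to a known attribute, the algebrizer can pick the appropriate table instead of `all_datoms`. The `:in` syntax shown here landed later; at this point the binding arrives through the ConjoiningClauses input bindings.
```edn
[:find ?x
 :in ?a
 :where [?x ?a "hello"]]
```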
Richard Newman
dc6a7a4128 Add a VSCode test configuration for cargo test --all. 2017-03-13 16:56:23 +00:00
Emily Toop
fddc57d548 Use sqlite3_limit instead of hard-coded SQLITE_MAX_VARIABLE_NUMBER (#371) (#288) r=rnewman
* Part 1: added limits feature to rusqlite dependencies.
* Part 2: replace references to SQLITE_MAX_VARIABLE_NUMBER with sqlite3_limit.
* Move assertion check for correct number of variables in repeat_values to before call as this is where the variable is defined.
* Part 3: add tests
2017-03-13 09:39:19 -07:00
Richard Newman
6109a63249 Support input bindings in ConjoiningClauses. r=nalexander 2017-03-10 19:01:56 -08:00
Richard Newman
914902cf9e Ignore build output in VSCode. 2017-03-09 16:36:06 -08:00
Richard Newman
30804e033a Use rusqlite 0.10.1. (#367) r=nalexander 2017-03-09 12:30:36 -08:00
Richard Newman
39440895ab Add basic VSCode file ignore settings. 2017-03-09 11:31:04 -08:00
Richard Newman
77f7fab525 Tweak testing commands. 2017-03-09 09:00:46 -08:00
Richard Newman
bf38105fef (#362) Part 4: handle unknown attributes by expanding type codes. r=nalexander
Also, don't run any SQL at all if an algebrized query is known to return no results.
2017-03-08 17:44:27 -08:00
Richard Newman
b5867e9131 (#362) Part 3: implement querying against simple keywords. r=nalexander 2017-03-08 17:44:19 -08:00
Richard Newman
ce3a9bdf87 (#362) Part 2: use constrain_attribute. r=nalexander 2017-03-08 17:44:11 -08:00
Richard Newman
8935d6a8a5 (#362) Part 1: if a variable's type becomes known, don't extract it. r=nalexander
This is necessary because we process patterns sequentially; a later
pattern might tell us the type of a variable (e.g., by having a
constant attribute), at which point we can do less work.
2017-03-08 17:44:00 -08:00
Richard Newman
1961815acd Pre: add an interpose macro for SQL output. 2017-03-08 17:41:50 -08:00
Richard Newman
7bcf311db9 Pre: move SQLValueType to core, because it's so central.
Yes, this isn't tidy... but in order to be really tidy we'd need to
split up db into parts that don't depend on a particular SQLite library.
2017-03-08 17:41:49 -08:00
Richard Newman
e898df8842 Implement basic query limits. (#361) r=nalexander 2017-03-08 17:41:42 -08:00
Richard Newman
85f3b79f75 Support a limited set of '.'-prefixed non-keyword symbols. (#352) r=nalexander
This commit allows `.` and `...` to parse correctly as `PlainSymbol`.

Tests in edn, query-translator, and the top level have been added.
2017-03-06 15:01:19 -08:00
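For reference, the `...` symbol this allows appears in collection find specs such as the following (attribute name hypothetical):
```edn
[:find [?name ...]
 :where [_ :person/name ?name]]
```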
Richard Newman
70b112801c Implement projection and querying. (#353) r=nalexander
* Add a failing test for EDN parsing '…'.
* Expose a SQLValueType trait to get value_type_tag values out of a ValueType.
* Add accessors to FindSpec.
* Implement querying.
* Implement rudimentary projection.
* Export mentat_db::new_connection.
* Export symbols from mentat.
* Add rudimentary end-to-end query tests.
2017-03-06 14:40:10 -08:00
Nick Alexander
f86b24001f Add top-level Conn. Fixes #296. (#342) r=rnewman
* Add top-level `Conn`. Fixes #296.

This is a little different than the API rnewman and I originally
discussed in https://public.etherpad-mozilla.org/p/db-conn-thoughts.
A few notes:

- I was led to make a `Schema` instance the thing that is shared,
  rather than a `db::DB`.  It's possible that queries will want to
  know the current transaction at some point (to prevent races, or to
  query historical data), but that can be a future consideration.

- The generation number just allows for a cheap comparison.  I don't
  care to handle races to transact just yet; the long term plan might
  be to make embedding applications responsible for avoiding races, or
  we might handle queuing transactions and yielding report futures in
  Mentat itself.

- The sharing of the partition maps is a little more subtle than
  expected.  Partition maps are volatile: a successful Mentat
  transaction always advances the :db.part/tx partition, so it's not
  worth passing references around.  This means that consumers must
  clone in order to maintain just a single clone per transaction.

Clean some cruft.

* Review comments.
2017-03-03 15:03:59 -08:00
Richard Newman
ecf56395b9 Add discussion of storage difficulties. r=nalexander (#344)
* Add discussion of storage difficulties.

* Replace mention of MVP with discussion of initial requirements.
2017-02-27 16:19:23 -08:00
Richard Newman
d7f323d15d Wire in the start of querying and error_chain at top level. (#349) r=nalexander 2017-02-27 16:17:25 -08:00
Richard Newman
48312e1ff0 Rebased conversion of mentat_query_parser to use error-chain. r=nalexander
This is a tiny bit simpler and more consistent.
2017-02-27 16:16:54 -08:00
Richard Newman
b2f22952c1 Convert mentat_sql to use error-chain. r=nalexander 2017-02-27 16:16:49 -08:00
Nick Alexander
dcd9bcb1ce Extract partial storage abstraction; use error-chain throughout. Fixes #328. r=rnewman (#341)
* Pre: Drop unneeded tx0 from search results.

* Pre: Don't require a schema in some of the DB code.

The idea is to separate the transaction applying code, which is
schema-aware, from the concrete storage code, which is just concerned
with getting bits onto disk.

* Pre: Only reference Schema, not DB, in debug module.

This is part of a larger separation of the volatile PartitionMap,
which is modified every transaction, from the stable Schema, which is
infrequently modified.

* Pre: Fix indentation.

* Extract part of DB to new SchemaTypeChecking trait.

* Extract part of DB to new PartitionMapping trait.

* Pre: Don't expect :db.part/tx partition to advance when tx fails.

This fails right now, because we allocate tx IDs even when we shouldn't.

* Sketch a db interface without DB.

* Add ValueParseError; use error-chain in tx-parser.

This can be simplified when
https://github.com/Marwes/combine/issues/86 makes it to a published
release, but this unblocks us for now.  This converts the `combine`
error type `ParseError<&'a [edn::Value]>` to a type with owned
`Vec<edn::Value>` collections, re-using `edn::Value::Vector` for
making them `Display`.

* Pre: Accept Borrow<Schema> instead of just &Schema in debug module.

This makes it easy to use Rc<Schema> or Arc<Schema> without inserting
&* sigils throughout the code.

* Use error-chain in query-parser.

There are a few things to point out here:

- the fine grained error types have been flattened into one crate-wide
  error type; it's pretty easy to regain the granularity as needed.

- edn::ParseError is automatically lifted to
  mentat_query_parser::errors::Error;

- we use mentat_parser_utils::ValueParser to maintain parsing error
  information from `combine`.

* Patch up top-level.

* Review comment: Only `borrow()` once.
2017-02-24 15:33:48 -08:00
Richard Newman
5e3cdd1fc2 Implement query-translator. (#301) r=nalexander 2017-02-23 18:39:49 -08:00
Richard Newman
91c75f26c8 Expand query algebrizer. r=nalexander 2017-02-23 18:39:49 -08:00
Richard Newman
76f51015d9 Support accumulating TypedValue instances into a SQL query. (#339) r=nalexander
These expand into a collection of named variables that should be
passed via bind parameters when the query is executed.

Bind parameters are now only named.
2017-02-23 18:39:43 -08:00
Richard Newman
b0120aa446 Pre: fix query/Cargo.toml indenting. 2017-02-23 18:31:57 -08:00
Richard Newman
14972fa6d7 Pre: use Option::cloned() instead of a cloning closure. 2017-02-23 18:31:54 -08:00
Joe Walker
40bca2df6d Remove most uses of use foo::* 2017-02-23 14:09:54 +00:00
Joe Walker
60b2e6f885 Improve Debug output for ConjoiningClauses; r=rnewman
Fixes #317, and also removes the '::' exploration of rust style
2017-02-23 14:05:56 +00:00
Victor Porof
7fc2a22d68 Implement a basic edn matcher, r=ncalexan (#271) (#338)
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2017-02-23 09:11:34 +01:00
Victor Porof
bf707acbc3 Lint for the clippy gods in the edn crate (#340)
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2017-02-22 18:11:05 +01:00
Victor Porof
0d3b8e4b29 Avoid code duplication for common Value trait implementations, r=ncalexan
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2017-02-22 08:36:45 +01:00
Victor Porof
1b26e23d02 Implement edn pretty printing using pretty.rs. Fixes #195. (#245)
* Implement pretty printing

Signed-off-by: Victor Porof <victor.porof@gmail.com>

* Rewrite pretty printing.

This does a few things.  First, it use pretty.rs directly, without the
layer of macro obfuscation.  The code is significantly simpler as a
result.

Second, it tightens the layout, using pretty.rs to group nested
layouts that fit on a single line.  This is Clojure's EDN style, more
or less.

Third, it drops "special format" support for queries.  This wasn't
completely implemented; if we want it, we can newtype
Query(edn::Value) and figure out how to really implement this idea.

* Rename to reflect functionality.

* Make write interface more Rust-like.

There isn't a clear standard in the stdlib, but a function that takes
ownership of a writer and then returns it back is definitely not
Rust-like.  That's what a (mutable) reference is for.

* Review comment: Use as_ref to avoid cloning strings.

* Post: Fix tests to use `without_spans()`.
2017-02-21 11:48:08 -08:00
Richard Newman
7476d0c0c8 More Rust notes. 2017-02-21 10:45:21 -08:00
Richard Newman
e18a0e9fdc Discuss Rust contributions. 2017-02-21 10:43:36 -08:00
Richard Newman
6fa907d2df Simplify .travis.yml to use cargo test --all. 2017-02-20 11:04:18 -08:00
Richard Newman
a10f68fdb7 Mark every project as being part of the workspace. r=nalexander
This allows `cargo test --all` to work.
2017-02-20 11:04:08 -08:00
Richard Newman
42ae26ab46 Add new stuff to Travis. 2017-02-17 17:54:35 -08:00
Richard Newman
9ecd02ef95 Begin serializing queries to SQL. r=nalexander 2017-02-17 17:54:07 -08:00
Richard Newman
a9cd9b1e87 Export symbols and string helpers from mentat_query_algebrizer. 2017-02-17 17:54:07 -08:00
Richard Newman
f890995202 Add a rudimentary SQL builder, based on parts of Diesel. (#273) r=nalexander
https://github.com/diesel-rs/diesel/
2017-02-17 17:53:50 -08:00
Jordan Santell
bc2b2ec4c8 Change to_namespaced_keyword(s) to return a Result rather than Option to (#333)
reduce error handling throughout db code. Fixes #331. r=nalexander
2017-02-17 16:10:34 -08:00
Jordan Santell
6f67f8563b TypedValue::Keyword now wraps a NamespacedKeyword rather than a String. Fixes #203. r=nalexander (#329) 2017-02-17 14:07:57 -08:00
Jordan Santell
a59f9583ac Store Idents as NamespacedKeywords, rather than Strings. Fixes #291. (#300)
r=ncalexander
2017-02-17 13:55:36 -08:00
Jordan Santell
ec2bbb8e83 Ensure minimum rustc version in a build script. r=nalexander (#326)
Printing a clear message when the installed rustc is too old helps
users during setup.
Rust version checking from http://stackoverflow.com/a/36607492.
2017-02-17 12:04:45 -08:00
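One hedged way to express such a check in a `build.rs`, using the `rustc_version` crate (which appears in the project's build-dependencies) rather than the Stack Overflow probe linked above; the minimum version shown is illustrative, not Mentat's actual requirement:

```rust
// build.rs (sketch): fail the build early with a readable message when rustc is too old.
use rustc_version::{version, Version};

fn main() {
    // Illustrative minimum version only.
    let minimum = Version::parse("1.69.0").expect("valid semver");
    let found = version().expect("unable to determine rustc version");
    if found < minimum {
        panic!("this crate requires rustc {} or newer; found {}", minimum, found);
    }
}
```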
Victor Porof
896d7f8f88 Add a span component to edn::Value, r=ncalexan
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2017-02-17 18:31:26 +01:00
Joe Walker
d9b699b588 Fix the authors entry in Cargo.toml (#322) 2017-02-17 08:03:48 +01:00
Joe Walker
89949fb451 Update README for edn; r=me 2017-02-16 18:32:36 +00:00
Richard Newman
5af7082165 Partly flesh out query algebrizer. (#243) r=nalexander 2017-02-15 16:10:59 -08:00
Richard Newman
f36a78e61e Test mentat_core and mentat_query_algebrizer on Travis. 2017-02-15 16:01:22 -08:00
Richard Newman
42f03f55a2 Stub out query algebrizer. 2017-02-15 16:01:22 -08:00
Richard Newman
b165a0b2ad Implement Schema::attribute_for_ident. 2017-02-15 16:01:22 -08:00
Nick Alexander
16e9740d8a Implement upsert resolution algorithm. (#186, #283). r=rnewman, f=jsantell
* Pre: Implement batch [a v] pair lookup.

* Pre: Add InternSet for sharing ref-counted handles to large values.

* Pre: Derive more for Entity.

* Pre: Return DB from creating; return TxReport from transact.

I explicitly am not supporting opening existing databases yet, let
alone upgrading databases from earlier versions.  That can follow fast
once basic transactions are supported.

* Pre: Parse string temporary ID entities; remove ValueOrLookupRef.

This adds TempId entities, but we can't disambiguate String temporary
IDs from values without the use of the schema, so there's no new value
branch.  Similarly, we can't disambiguate lookup-ref values from two
element list values without a schema, so we remove this entirely.
We'll handle the ambiguity later in the transactor.

* Persist partitions to SQL store; allocate transaction ID. (#186)

* Post: Test upserting with vectors.

This converts an existing test to EDN:
84a80f40f5/test/datomish/db_test.cljc (L193).

* Implement tempid upsert resolution algorithm. (#184)

* Post: Separate Tx out of DB.

This is very preliminary, since we don't have a real connection type
to manage transactions and their metadata yet.

* Post: Comment on implementation choices in the transactor.

* Review comment: Put long use lists on separate lines.

* Review comment: Accept String: Borrow<S> instead of just String.

* Review comment: Address nits.
2017-02-14 16:50:40 -08:00
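The `InternSet` mentioned in the bullets above can be sketched as a set of shared `Rc` handles, so equal large values are allocated once. This is illustrative only, not the real `mentat_core` type:

```rust
use std::collections::HashSet;
use std::hash::Hash;
use std::rc::Rc;

// Every intern() call for an equal value returns another handle to the same allocation.
struct InternSet<T: Eq + Hash> {
    inner: HashSet<Rc<T>>,
}

impl<T: Eq + Hash> InternSet<T> {
    fn new() -> Self {
        InternSet { inner: HashSet::new() }
    }

    fn intern(&mut self, value: T) -> Rc<T> {
        if let Some(existing) = self.inner.get(&value) {
            return Rc::clone(existing);
        }
        let handle = Rc::new(value);
        self.inner.insert(Rc::clone(&handle));
        handle
    }
}

fn main() {
    let mut strings: InternSet<String> = InternSet::new();
    let a = strings.intern("my-temp-id".to_string());
    let b = strings.intern("my-temp-id".to_string());
    assert!(Rc::ptr_eq(&a, &b)); // one allocation, two handles
}
```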
Richard Newman
bfb62302cb Add docstrings for Schema methods. 2017-02-13 19:39:47 -08:00
Richard Newman
a87e5a3ec7 Move Schema from mentat_db to mentat_core, improve API. (#290)
* Move Schema from mentat_db to mentat_core.
* Define SchemaMap in terms of Entid, not i64.
* Add Schema::{is_attribute,identifies_attribute}.
* Add pointer to #291.
* Don't pass around 64-bit pointers to 64-bit integers.
2017-02-13 19:20:20 -08:00
Richard Newman
2e303f4837 Stub out mentat::q_once. (#289) r=nalexander
* Leave a pointer to issue 288.
* Re-export mentat_db::types::DB from mentat_db.
* Parse EDN strings in the query parser.
* Export 'public' API from mentat_query_parser's top level.
* Stub out mentat::q_once.
2017-02-13 10:30:02 -08:00
Jordan Santell
4e81733eed Change expecteddatoms and expectedtransaction to their kebab-case counterparts, for valid EDN style. Fixes #270. r=nalexander (#287) 2017-02-11 12:06:09 -08:00
Jordan Santell
4f5c94891a Add octal, hexadecimal, and arbitrary base integers to the EDN parser. Fixes #277. r=rnewman (#286) 2017-02-10 16:03:35 -08:00
Jordan Santell
6ce5d526a3 Store a bitfield in temporary search tables and expand to bit flags in the datoms table to investigate performance difference. Fixes #226. r=nalexander (#242) 2017-02-10 12:03:49 -08:00
Joe Walker
f591c90738 Use mentat-parser-utils in tx-parser. Fixes #235; r=rnewman,victorporof
Move macros query-parser/…/parser_utils.rs → parser-utils/…/query.rs

Signed-off-by: Joe Walker <jwalker@mozilla.com>
2017-02-10 18:30:03 +00:00
Jordan Santell
1deed24f42 Remove usage of global imports in db module. Fixes #251. r=nalexander (#253) 2017-02-10 09:09:13 -08:00
Victor Porof
49f91b05b0 Expose EDN into_ methods that consume the edn value, r=jwalker. Fixes 256
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2017-02-10 14:58:38 +01:00
Joe Walker
cb40a0f581 Expose EDN as_ methods that return copied values, not references; r=ncalexan,jwalker 2017-02-10 12:09:13 +00:00
Richard Newman
8e2359d3ab Implement TypedValue::is_congruent_with and ::matches_type. r=jsantell 2017-02-09 18:45:50 -08:00
Richard Newman
b56b7c2a3f ValueType is Copy. r=jsantell 2017-02-09 18:45:46 -08:00
Richard Newman
fc73bfce75 Implement NonIntegerConstant::into_typed_value. r=jsantell 2017-02-09 18:45:46 -08:00
Richard Newman
1aa33423dc Derive PartialOrd and Ord for Variable. r=jsantell 2017-02-09 18:45:36 -08:00
Richard Newman
1122960462 Re-export NamespacedKeyword and PlainSymbol out of query. r=jsantell 2017-02-09 18:45:30 -08:00
Victor Porof
9be487ca7d Run pragmas to configure SQLite store and connection when we open a DB, r=ncalexan (#275)
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2017-02-09 21:23:04 +01:00
Richard Newman
4366f6d61f Expose EDN as_ methods that return copied values, not references. 2017-02-09 10:21:35 -08:00
Victor Porof
42580539b8 Properly handle whitespace for Infinity and NaN, r=rnewman (#246)
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-09 18:13:44 +01:00
Richard Newman
7db74953d6 mentat_core doesn't need rusqlite. r=nalexander 2017-02-08 17:34:17 -08:00
Jordan Santell
21f7bdf493 Consolidate Entity::{Add, Retract} to Entity::AddOrRetract. Fixes #255. r=nalexander (#265) 2017-02-08 15:45:09 -08:00
Jordan Santell
688a644bd9 Ensure :db/index true for :db/unique _. Fixes #254. r=nalexander (#267) 2017-02-08 15:26:45 -08:00
Jordan Santell
9fcf9f3318 Remove Entity::{RetractEntity, RetractAttribute} for now. Fixes #257. r=nalexander (#266) 2017-02-08 15:19:47 -08:00
Richard Newman
c111d4daff Move TypedValue into mentat_core. r=jsantell,nalexander 2017-02-08 14:20:10 -08:00
Nick Alexander
afafcd64a0 [tx] Start implementing bulk SQL insertion algorithms (#214). r=rnewman,jsantell
* Pre: Add some value conversion tests.

This is a follow-up to earlier work.  Turn TypedValue::Keyword into
edn::Value::NamespacedKeyword.  Don't take a reference to
value_type_tag.

* Pre: Add repeat_values.

Requires itertools, so this commit is not stand-alone.

* Pre: Expose the first transaction ID as bootstrap::TX0.

This is handy for testing.

* Pre: Improve debug module.

* Pre: Bump rusqlite version for https://github.com/jgallagher/rusqlite/issues/211.

* Pre: Use itertools.

* Start implementing bulk SQL insertion algorithms. (#214)

This is a slightly simpler re-expression of the existing Clojure
implementation.

* Post: Start generic data-driven transaction testing. (#188)

* Review comment: `use ::{SYMBOL}` instead of `use {SYMBOL}`.

* Review comment: Prefer bindings_per_statement to values_per_statement.
2017-02-08 14:04:32 -08:00
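A rough sketch of the `repeat_values` idea from the entry above: generating the placeholder block for a multi-row SQL `INSERT`. The helper name and shape are assumptions for illustration, not the real `mentat_db` code:

```rust
// Build "(?, ?, ?), (?, ?, ?)" for `rows` rows of `bindings_per_statement` bindings each.
fn repeat_values(bindings_per_statement: usize, rows: usize) -> String {
    let row = format!(
        "({})",
        std::iter::repeat("?")
            .take(bindings_per_statement)
            .collect::<Vec<_>>()
            .join(", ")
    );
    std::iter::repeat(row).take(rows).collect::<Vec<_>>().join(", ")
}

fn main() {
    let values = repeat_values(3, 2);
    println!("INSERT INTO datoms (e, a, v) VALUES {}", values);
}
```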
Victor Porof
c585715224 Don't depend on num and ordered-float in the db and query crates, r=ncalexan (#223)
Signed-off-by: Victor Porof <victor.porof@gmail.com>
2017-02-08 12:19:16 +01:00
Victor Porof
4d83aafa2a Ensure printing/parsing edn strings is isomorphic
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-04 08:47:48 +01:00
Victor Porof
a627f532f0 Relax whitespace rules for edn vectors, lists, sets and maps
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-04 08:45:31 +01:00
Victor Porof
419db388da Relax whitespace rules for Infinity and NaN
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-04 08:45:02 +01:00
Richard Newman
00c99196a2 Move db::type::{ValueType,Attribute} into a mentat_core crate. 2017-02-03 17:01:30 -08:00
Jordan Santell
0b20d7691b Parse and display EDN values for NaN, +Infinity and -Infinity. Fixes #232 (#238) r=victorporof 2017-02-03 10:14:23 -08:00
Victor Porof
c038c11017 Consolidate edn peg rules to better parse keywords and symbols, r=ncalexan. Fixes #219 2017-02-03 09:08:24 +01:00
Victor Porof
9ee0ac8e00 Unify and generalize keywords and symbols parsing
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-03 09:06:42 +01:00
Victor Porof
72da5722ae Update rustpeg to latest version and follow new syntax and formatting rules
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-03 09:06:42 +01:00
Victor Porof
611fbe2eef Properly print null edn values as "nil", to allow for isomorphic write/parse
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-03 09:06:42 +01:00
Richard Newman
5b770a54cd Parse basic :find and :where clauses. (#211) r=nalexander
* Make Variable::from_symbol public.
* Implement basic parsing of queries.
* Use pinned dependencies the hard way to fix Travis.
* Bump ordered-float dependency to 0.4.0.
* Error coercions to use ?, and finishing the find interface.
2017-02-02 18:32:00 -08:00
Richard Newman
cd5f0d642c Doc comment for ResultParser. 2017-02-02 15:23:24 -08:00
Victor Porof
707ce36236 Don't use single-character string constants in the is_backward function
See https://github.com/Manishearth/rust-clippy/wiki#single_char_pattern for further info

Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 19:55:29 +01:00
Richard Newman
5d74f1ee94 Add utilities for defining parsers. (#218) r=vporof
satisfy_unwrap and ResultParser go into mentat_parser_utils.
2017-02-02 10:25:05 -08:00
Victor Porof
816a85f0a3 Write more tests, handle more types for printing and a few other code cleanups for edn types, r=jwalker (#233) 2017-02-02 17:23:07 +01:00
Victor Porof
9a5ece8c89 Handle more edn::Value types for printing, precursor for #195
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 17:22:58 +01:00
Victor Porof
2ecda0a2bd Avoid needless reborrows and simplify Ord implementation for edn::Value 2017-02-02 17:22:58 +01:00
Victor Porof
a685d6c541 Move edn test functions into a submodule
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 17:22:58 +01:00
Victor Porof
cc56cec11a Add note about linked lists data type choice for edn::Value 2017-02-02 17:22:58 +01:00
Victor Porof
00048d1955 Remove edn::Pair struct since it's not used anywhere
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 17:22:58 +01:00
Richard Newman
fcdf759399 Rename parser_utils to mentat_parser_utils, clean up imports. (#234) r=vporof 2017-02-02 08:18:04 -08:00
Victor Porof
f3f353661f Several small idiomatic changes to the edn crate, r=rnewman,jsantell 2017-02-02 11:00:35 +01:00
Victor Porof
4e9e8ed837 Use idiomatic enumerate method on iterators instead of iterating over indices
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:59:03 +01:00
Victor Porof
4b2c7870c0 Wrap code indicated by the "_" in documentation as suggested by rustdoc best practices
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:59:03 +01:00
Victor Porof
17bc85fe27 Remove return statements from edn parser tests
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:59:03 +01:00
Victor Porof
25474980b1 Add rustdoc comments for to_symbol and to_keyword functions
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:59:03 +01:00
Victor Porof
8f68f68378 Use idiomatic map_or_else calls to Option<T> instead of double returns
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:59:03 +01:00
Victor Porof
85da91a0ab Add helper functions constructing OrderedFloat and BigInt to edn crate, r=ncalexan,rnewman. Fixes #198
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:58:08 +01:00
Victor Porof
93053a4297 Add the parser_utils crate to .travis.yml
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:58:07 +01:00
Victor Porof
ba1896b684 Extract assert_parses_to into a parser utility crate, r=rnewman. Fixes #200
Signed-off-by: Victor Porof <vporof@mozilla.com>
2017-02-02 10:17:08 +01:00
Richard Newman
a9929249eb Use Into<Option<>> trick for to_keyword and to_symbol. 2017-02-01 17:46:53 -08:00
Richard Newman
592dec7241 Implement a FromValue trait for SrcVar and Variable. (#227) r=nalexander 2017-02-01 15:05:14 -08:00
Richard Newman
0b3387f8b9 Minor EDN cleanup. (#217) r=jsantell
* to_reverse -> to_reversed.
* Add PlainSymbol::plain_name for examining $x and ?y.
* Fix comment.
2017-02-01 14:34:51 -08:00
Richard Newman
f1a55c9f12 Move query-parser test functions into a submodule. 2017-02-01 10:44:53 -08:00
Richard Newman
932a42866c Fix comments in EDN crate. 2017-01-30 17:45:02 -08:00
Jordan Santell
00bb3e6d4a Merge pull request #212 from mozilla/202
Add NamespacedKeyword::is_reverse and NamespacedKeyword::to_reverse f… r=nalexander
2017-01-30 17:22:44 -08:00
Jordan Santell
0a21b5dca4 Add NamespacedKeyword::{is_forward, is_backward, to_reverse} for
inspection in NamespacedKeyword for use by the query and transaction parsers. Fixes #202
2017-01-30 14:39:53 -08:00
Jordan Santell
359d356dd9 Merge pull request #213 from mozilla/199
Add Into<String> to symbol::* constructors. Fixes #199
2017-01-30 13:35:12 -08:00
Jordan Santell
0b87978830 Merge pull request #208 from mozilla/197
Add is_ methods to edn::Value types and add tests. Fixes #197. r=nalexander
2017-01-30 09:37:26 -08:00
Jordan Santell
d116fd7bff Add is_$type and as_$type methods to edn::Value types and add tests. Fixes #197 2017-01-30 09:35:33 -08:00
Jordan Santell
18279fdd3c Add Into<String> to symbol::* constructors. Fixes #199 2017-01-28 22:53:40 -08:00
Richard Newman
c6fa14c0c8 Rudimentary printing of EDN values. (#209) r=jsantell
* Add a little From helper for edn::parse::ParseError. Not used yet.

* Ignore more things.

* Partly implement Display for edn::Value.
2017-01-28 14:18:17 -08:00
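A cut-down sketch of what "partly implement Display for edn::Value" involves: printing a toy `Value` enum in EDN syntax. The enum here is illustrative, not the real `edn` crate type:

```rust
use std::fmt;

enum Value {
    Nil,
    Integer(i64),
    Text(String),
    Vector(Vec<Value>),
}

impl fmt::Display for Value {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Value::Nil => write!(f, "nil"),
            Value::Integer(i) => write!(f, "{}", i),
            Value::Text(s) => write!(f, "\"{}\"", s),
            Value::Vector(items) => {
                write!(f, "[")?;
                for (i, item) in items.iter().enumerate() {
                    if i > 0 {
                        write!(f, " ")?;
                    }
                    write!(f, "{}", item)?;
                }
                write!(f, "]")
            }
        }
    }
}

fn main() {
    let v = Value::Vector(vec![Value::Integer(1), Value::Text("two".into()), Value::Nil]);
    println!("{}", v); // prints: [1 "two" nil]
}
```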
Nick Alexander
506c83c160 Implement basic logging infrastructure. (#205) r=nalexander,victorporof
Signed-off-by: Paul Lange <palango@gmx.de>
2017-01-26 10:43:48 -08:00
Nick Alexander
81af295948 Start installing SQL schema. (#171) r=rnewman
* Start installing the SQLite store and bootstrapping the datom store.

* Review comment: Decomplect V2_IDENTS.

* Review comment: Decomplect V2_PARTS.

* Review comment: Pre: Expose Clojure's merge on Value instances.

* Review comment: Decomplect V2_SYMBOLIC_SCHEMA.

* Review comment: Decomplect V1_STATEMENTS.

* Review comment: Prefer ? to try!.

* Review comment: Fix typos; format; add TODOs.

* Review comment: Assert that Mentat `Schema` is valid upon creation.

* Review comment: Improve conversion to and from SQL values.

This patch factors the fundamental SQL conversion maps
between (rusqlite::Value, value_type_tag) and (edn::Value, ValueType)
through a new Mentat TypedValue.  (A future patch might rename this
fundamental type mentat::Value.)

To make certain conversion functions infallible, I removed
placeholders for :db.type/{instant,uuid,uri}.  (We could panic
instead, but there's no need to do that right now.)

* Review comment: Always uses bundled SQLite in rusqlite.

This avoids (runtime) failures in Travis CI due to old SQLite
versions.  See 432966ac77.

* Review comment: Move semantics in `from_sql_value_pair`.

* Review comment: DB_EXCISE_BEFORE_T instead of ...BEFORET (no underscore).

* Review comment: Move overview notes to the Wiki.
2017-01-25 16:13:56 -08:00
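The conversion described in the review comments above might look roughly like the following: a hypothetical `from_sql_value_pair` mapping a `(rusqlite value, value_type_tag)` pair onto a typed value. The tag numbers and the enum are illustrative stand-ins, not Mentat's real definitions:

```rust
use rusqlite::types::Value as SqlValue;

#[derive(Debug)]
enum MyTypedValue {
    Ref(i64),
    Boolean(bool),
    Long(i64),
    String(String),
}

// Pair a raw SQLite value with its stored type tag to recover a typed value.
fn from_sql_value_pair(value: SqlValue, tag: i64) -> Result<MyTypedValue, String> {
    match (tag, value) {
        (0, SqlValue::Integer(e)) => Ok(MyTypedValue::Ref(e)),
        (1, SqlValue::Integer(b)) => Ok(MyTypedValue::Boolean(b != 0)),
        (5, SqlValue::Integer(i)) => Ok(MyTypedValue::Long(i)),
        (10, SqlValue::Text(s)) => Ok(MyTypedValue::String(s)),
        (tag, value) => Err(format!("unexpected pair: tag {} value {:?}", tag, value)),
    }
}

fn main() {
    println!("{:?}", from_sql_value_pair(SqlValue::Text("hello".into()), 10));
}
```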
Richard Newman
2592506288 Implement parsing of simple :find expressions. (#196) r=nalexander
* Test the mentat_query directory on Travis.

* Export common types from edn.

This allows you to write

  use edn::{PlainSymbol,Keyword};

instead of

  use edn::symbols::{PlainSymbol,Keyword};

* Add an edn::Value::is_keyword predicate.

* Clean up query, preparing for query-parser.

* Make EDN keywords and symbols take Into<String> arguments.

* Implement parsing of simple :find lists.

* Rustfmt query-parser. Split find and query.

* Review comment: values_to_variables now returns a NotAVariableError on failure.

* Review comment: rename gimme to to_parsed_value.

* Review comment: add comments.
2017-01-25 14:06:19 -08:00
Richard Newman
b77d124152 Ignore SQLite wal and shm files. 2017-01-21 19:11:02 -08:00
Nick Alexander
ab041291fb edn: Bound values by optional whitespace; treat comma as whitespace. 2017-01-18 08:34:27 -08:00
Nick Alexander
247035cc9b edn: Allow comments.
EDN supports only one type of comment: initiated by ; and lasting
until the end of the current line or the end of the input stream.
2017-01-18 08:34:27 -08:00
Brian Grinstead
71a30fe69f Add beginning of web server for the serve subcommand (#159) 2017-01-13 11:46:00 -08:00
Nick Alexander
b11b9b909c Add tx{-parser} crates; start parsing transactions. (#164) r=rnewman
This depends on edn and uses the combine parser combinator library.
2017-01-12 16:08:29 -08:00
Richard Newman
a152e60040 Read EDN keywords and symbols as rich types. Fixes #154. r=nalexander 2017-01-12 09:09:48 -08:00
Joe Walker
c4735119c4 Implement a basic EDN parser. (#149) r=rnewman,bgrins,nalexander
The parser mostly works and has a decent test suite. It parses all the
queries issued by the Tofino UAS, with some caveats. Known flaws:

* No support for tagged elements, comments, discarded elements or "'".
* Incomplete support for escaped characters in strings and the range of
  characters that are allowed in keywords and symbols.
* Possible whitespace handling problems.
2017-01-11 13:03:04 -08:00
Richard Newman
370742890c Test more things on Travis. (#161) r=bgrins 2017-01-11 11:09:48 -08:00
Richard Newman
71960de636 Add test databases.
* v1empty.db: an empty v1 DB, which is the original on-disk format.
* v2empty.db: an empty v2 DB. This includes bootstrapped schema metadata attributes.
* v1tofino.db: a v1 DB that was created by Tofino.
2017-01-10 12:09:00 -08:00
Brian Grinstead
cd9517e5fd Run cargo fmt. r=me 2017-01-10 10:54:37 -08:00
Brian Grinstead
6d10774fc8 Move the bin to src and take on clap dependency for command line arg parsing. Fixes #150. r=rnewman 2017-01-10 10:53:34 -08:00
Richard Newman
daddfd3e0f Add query sub-crate, implementing more of the beginnings of the query language. 2017-01-09 12:31:57 -08:00
Richard Newman
476f04e27b Implement a rudimentary Keyword struct and the beginnings of ident/entid. 2017-01-09 12:31:56 -08:00
Richard Newman
22ebcd65f3 Rename everything to Project Mentat. r=bgrins 2017-01-09 09:34:10 -08:00
Richard Newman
a54cd9958c Fix Travis. 2017-01-06 17:31:26 -08:00
Richard Newman
b9c439bd00 Use underscores for crate names. 2017-01-06 17:31:26 -08:00
Richard Newman
a665926fe6 Rename to Project Mentat (query-parser). 2017-01-06 17:20:21 -08:00
Richard Newman
84f468ce41 Rename to Project Mentat (tests). 2017-01-06 17:20:20 -08:00
Richard Newman
3af0d479aa Rename to Project Mentat (cli). 2017-01-06 17:20:20 -08:00
Richard Newman
7a4c75ba44 Rename to Project Mentat (src). 2017-01-06 17:20:20 -08:00
Richard Newman
7f3347981c Rename to Project Mentat (docs). 2017-01-06 17:20:20 -08:00
Richard Newman
76b5a5e43b Rename to Project Mentat (build). 2017-01-06 17:20:20 -08:00
Richard Newman
8f9c532d8d Remove old JS code; we can bring it back if we want it. 2017-01-06 17:20:20 -08:00
Brian Grinstead
981dc6ade9 Ignore .DS_Store files. r=me 2017-01-06 16:07:33 -06:00
Brian Grinstead
8a52015422 Take on rusqlite dependency. Fixes #148. r=rnewman 2017-01-06 10:24:04 -06:00
Richard Newman
fa3c99f550 Add a back-pointer to master, because GitHub shows the rust branch by default. 2016-12-21 16:59:26 -08:00
Brian Grinstead
4700eace15 Update README with extra details about using cargo 2016-12-16 18:45:44 -08:00
Brian Grinstead
9b8257a725 Create a new crate for the query parser. Fixes #138. r=rnewman
Starting to work out the project layout for sub-crates.  The crate inside query-parser/ is "datomish-query-parser" and the core code in src/ depends on it.
2016-12-16 18:43:47 -08:00
Brian Grinstead
38e8c49223 Move existing code into js/ subfolder (#137) 2016-12-16 14:31:02 -08:00
Brian Grinstead
5ac47fd6ff Add a stub CLI tool and run tests on it. Fixes #136. r=rnewman 2016-12-16 14:26:10 -08:00
Brian Grinstead
f7c97e776c Merge pull request #135 from mozilla/bgrins-patch-1-1
Include instructions for building and testing with cargo
2016-12-16 12:53:54 -08:00
Brian Grinstead
4bebb3cbe4 Include instructions for building and testing with cargo 2016-12-16 11:57:18 -08:00
Brian Grinstead
973c32ff77 Update test boilerplate for running on travis (#134). r=rnewman
* Include a local and external test.
* Add license blocks.
2016-12-16 11:50:08 -08:00
Richard Newman
789eb59c9a Alter Travis config to build Rust. 2016-12-16 10:45:58 -08:00
Richard Newman
f8682a65fa Initial Rust commit.
If you want to go fast, go alone. If you want to go far, go together.
2016-12-16 10:39:08 -08:00
Richard Newman
cbd278dd7e Remove Clojure and JS application code. 2016-12-16 10:32:23 -08:00
Richard Newman
44d50c9005 Update README for oxidation. 2016-12-16 10:31:06 -08:00
Richard Newman
73f179c887 Strip out Clojure tests and release directories. 2016-12-16 10:30:57 -08:00
2410 changed files with 338603 additions and 11671 deletions


@ -1,27 +0,0 @@
{
"env": {
"production": {
"presets": ["react", "react-optimize"]
},
"development": {
"presets": ["react"]
},
"test": {
"presets": ["react"]
}
},
"only": [
"test/js/**"
],
"plugins": [
"transform-es2015-destructuring",
"transform-es2015-parameters",
"transform-es2015-modules-commonjs",
"transform-async-to-generator",
"transform-object-rest-spread",
"transform-class-properties",
"transform-runtime"
],
"sourceMaps": "inline",
"retainLines": true
}

3
.cargo/config Normal file

@ -0,0 +1,3 @@
[alias]
cli = ["run", "--release", "-p", "mentat_cli"]
debugcli = ["run", "-p", "mentat_cli"]

3
.github/FUNDING.yml vendored Normal file

@ -0,0 +1,3 @@
liberapay: svartalf
patreon: svartalf
custom: ["https://svartalf.info/donate/", "https://www.buymeacoffee.com/svartalf"]

11
.github/dependabot.yml vendored Normal file

@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
  - package-ecosystem: "cargo" # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "daily"

20
.github/workflows/audit.yml vendored Normal file

@ -0,0 +1,20 @@
name: Security audit
on:
  schedule:
    - cron: '0 0 1 * *'
  push:
    paths:
      - '**/Cargo.toml'
      - '**/Cargo.lock'
  pull_request:
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/audit-check@issue-104
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

13
.github/workflows/clippy-ng.yml vendored Normal file

@ -0,0 +1,13 @@
on: [push, pull_request]
name: Clippy (new version test, don't use it!)
jobs:
  clippy_check_ng:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          components: clippy
          override: true
      - uses: actions-rs/clippy@master

16
.github/workflows/clippy_check.yml vendored Normal file

@ -0,0 +1,16 @@
on: [push, pull_request]
name: Clippy check
jobs:
  clippy_check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          components: clippy
          override: true
      - uses: actions-rs/clippy-check@v1
        with:
          args: --all-targets --all-features -- -D warnings
          token: ${{ secrets.GITHUB_TOKEN }}

28
.github/workflows/cross_compile.yml vendored Normal file

@ -0,0 +1,28 @@
# We could use `@actions-rs/cargo` Action ability to automatically install `cross` tool
# in order to compile our application for some unusual targets.
on: [push, pull_request]
name: Cross-compile
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target:
          - armv7-unknown-linux-gnueabihf
          - powerpc64-unknown-linux-gnu
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          target: ${{ matrix.target }}
          override: true
      - uses: actions-rs/cargo@v1
        with:
          use-cross: true
          command: build
          args: --release --target=${{ matrix.target }}

66
.github/workflows/grcov.yml vendored Normal file

@ -0,0 +1,66 @@
on: [push, pull_request]
name: Code coverage with grcov
jobs:
grcov:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os:
- ubuntu-latest
- macOS-latest
# - windows-latest
steps:
- uses: actions/checkout@v2
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: nightly
override: true
profile: minimal
- name: Execute tests
uses: actions-rs/cargo@v1
with:
command: test
args: --all
env:
CARGO_INCREMENTAL: 0
RUSTFLAGS: "-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Cpanic=abort -Zpanic_abort_tests"
# Note that `actions-rs/grcov` Action can install `grcov` too,
# but can't use faster installation methods yet.
# As a temporary experiment `actions-rs/install` Action plugged in here.
# Consider **NOT** to copy that into your workflow,
# but use `actions-rs/grcov` only
- name: Pre-installing grcov
uses: actions-rs/install@v0.1
with:
crate: grcov
use-tool-cache: true
- name: Gather coverage data
id: coverage
uses: actions-rs/grcov@v0.1
with:
coveralls-token: ${{ secrets.COVERALLS_TOKEN }}
- name: Coveralls upload
uses: coverallsapp/github-action@master
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
parallel: true
path-to-lcov: ${{ steps.coverage.outputs.report }}
grcov_finalize:
runs-on: ubuntu-latest
needs: grcov
steps:
- name: Coveralls finalization
uses: coverallsapp/github-action@master
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
parallel-finished: true

110
.github/workflows/msrv.yml vendored Normal file

@ -0,0 +1,110 @@
# Based on https://github.com/actions-rs/meta/blob/master/recipes/msrv.md
on: [push, pull_request]
name: MSRV
jobs:
check:
name: Check
runs-on: ubuntu-latest
strategy:
matrix:
rust:
- stable
- 1.31.0
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.rust }}
override: true
- name: Run cargo check
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: check
test:
name: Test Suite
runs-on: ubuntu-latest
strategy:
matrix:
rust:
- stable
- 1.31.0
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.rust }}
override: true
- name: Run cargo test
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: test
fmt:
name: Rustfmt
runs-on: ubuntu-latest
strategy:
matrix:
rust:
- stable
- 1.31.0
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.rust }}
override: true
- name: Install rustfmt
run: rustup component add rustfmt
- name: Run cargo fmt
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: fmt
args: --all -- --check
clippy:
name: Clippy
runs-on: ubuntu-latest
strategy:
matrix:
rust:
- stable
- 1.31.0
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.rust }}
override: true
- name: Install clippy
run: rustup component add clippy
- name: Run cargo clippy
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: clippy
args: -- -D warnings

78
.github/workflows/nightly_lints.yml vendored Normal file

@ -0,0 +1,78 @@
on: [push, pull_request]
name: Nightly lints
jobs:
clippy:
name: Clippy
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install nightly toolchain with clippy available
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
override: true
components: clippy
- name: Run cargo clippy
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: clippy
args: -- -D warnings
rustfmt:
name: Format
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install nightly toolchain with rustfmt available
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
override: true
components: rustfmt
- name: Run cargo fmt
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: fmt
args: --all -- --check
combo:
name: Clippy + rustfmt
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install nightly toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
override: true
components: rustfmt, clippy
- name: Run cargo fmt
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: fmt
args: --all -- --check
- name: Run cargo clippy
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: clippy
args: -- -D warnings

79
.github/workflows/quickstart.yml vendored Normal file

@ -0,0 +1,79 @@
# Based on https://github.com/actions-rs/meta/blob/master/recipes/quickstart.md
#
# While our "example" application has the platform-specific code,
# for simplicity we are compiling and testing everything on the Ubuntu environment only.
# For multi-OS testing see the `cross.yml` workflow.
on: [push, pull_request]
name: Quickstart
jobs:
check:
name: Check
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Run cargo check
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: check
test:
name: Test Suite
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Run cargo test
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: test
lints:
name: Lints
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt, clippy
- name: Run cargo fmt
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: fmt
args: --all -- --check
- name: Run cargo clippy
uses: actions-rs/cargo@v1
continue-on-error: true # WARNING: only for this example, remove it!
with:
command: clippy
args: -- -D warnings

51
.gitignore vendored

@ -1,9 +1,13 @@
*.class
*.DS_Store
*.jar
*jar
*~
**/*.rs.bk
.s*
.*.sw*
*.rs.bak
*.bak
.hg/
.hgignore
.lein-deps-sum
@ -11,13 +15,16 @@
.lein-plugins/
.lein-repl-history
.nrepl-port
.bundle/
docs/vendor/
/.lein-*
/.nrepl-port
Cargo.lock
/checkouts/
/classes/
/node_modules/
/out/
/target/
/target
pom.xml
pom.xml.asc
/.cljs_node_repl/
@ -45,3 +52,45 @@ pom.xml.asc
/release-node/datomish/
/release-node/goog/
/release-node/honeysql/
/edn/target/
/fixtures/*.db-shm
/fixtures/*.db-wal
/query-parser/out/
## Build generated
/sdks/swift/Mentat/build/
/sdks/android/**/build
DerivedData
build.xcarchive
## Various settings
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
/sdks/swift/Mentat/*.xcodeproj/project.xcworkspace/xcuserdata
## Other
*.xccheckout
*.moved-aside
*.xcuserstate
*.xcscmblueprint
## Obj-C/Swift specific
*.hmap
*.ipa
/sdks/swift/Mentat/External-Dependencies
# Android & IntelliJ
**/*.iml
**/.idea
/sdks/android/**/local.properties
# Documentation
docs/_site
docs/.sass-cache
docs/.jekyll-metadata

1
.ignore Normal file

@ -0,0 +1 @@
docs/

80
.taskcluster.yml Normal file

@ -0,0 +1,80 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
version: 0
allowPullRequests: public
tasks:
####################################################################################################
# Task: Pull requests
####################################################################################################
- provisionerId: '{{ taskcluster.docker.provisionerId }}'
workerType: '{{ taskcluster.docker.workerType }}'
extra:
github:
env: true
events:
- pull_request.opened
- pull_request.edited
- pull_request.synchronize
- pull_request.reopened
- push
scopes:
- "queue:create-task:aws-provisioner-v1/github-worker"
- "queue:scheduler-id:taskcluster-github"
payload:
maxRunTime: 3600
deadline: "{{ '2 hours' | $fromNow }}"
image: 'mozillamobile/android-components:1.4'
command:
- /bin/bash
- '--login'
- '-cx'
- >-
export TERM=dumb
&& git fetch {{ event.head.repo.url }} {{ event.head.repo.branch }}
&& git config advice.detachedHead false
&& git checkout {{event.head.sha}}
&& python automation/taskcluster/decision_task_pull_request.py
features:
taskclusterProxy: true
metadata:
name: Mentat Android SDK - Pull Request
description: Building and testing the Mentat Android SDK - triggered by a pull request.
owner: '{{ event.head.user.email }}'
source: '{{ event.head.repo.url }}'
####################################################################################################
# Task: Release
####################################################################################################
- provisionerId: '{{ taskcluster.docker.provisionerId }}'
workerType: '{{ taskcluster.docker.workerType }}'
extra:
github:
events:
- release
scopes:
- "secrets:get:project/mentat/publish"
payload:
maxRunTime: 3600
deadline: "{{ '2 hours' | $fromNow }}"
image: 'mozillamobile/mentat:1.2'
command:
- /bin/bash
- '--login'
- '-cx'
- >-
export TERM=dumb
&& git fetch origin --tags
&& git config advice.detachedHead false
&& git checkout {{ event.version }}
&& python automation/taskcluster/release/fetch-bintray-api-key.py
&& cd sdks/android/Mentat
&& ./gradlew --no-daemon clean library:assembleRelease
&& VCS_TAG=`git show-ref {{ event.version }}` ./gradlew bintrayUpload --debug -PvcsTag="$VCS_TAG"
features:
taskclusterProxy: true
metadata:
name: Mentat Android SDK - Release ({{ event.version }})
description: Building and publishing release versions.
owner: '{{ event.head.user.email }}'
source: '{{ event.head.repo.url }}'


@ -1,14 +1,78 @@
language: clojure
lein: lein2
jdk:
- oraclejdk8
lein: 2.7.0
clojure: 1.8.0
language: rust
env:
- TRAVIS_NODE_VERSION="6.2.2"
install:
- nvm install $TRAVIS_NODE_VERSION
- nvm use $TRAVIS_NODE_VERSION
- lein deps
- npm install
script: lein cljsbuild once release-node && lein with-profile node install && lein test
- CARGO_INCREMENTAL=0
# https://bheisler.github.io/post/efficient-use-of-travis-ci-cache-for-rust/
before_cache:
# Delete loose files in the debug directory
- find ./target/debug -maxdepth 1 -type f -delete
# Delete the test and benchmark executables. Finding these all might take some
# experimentation.
- rm -rf ./target/debug/deps/criterion*
- rm -rf ./target/debug/deps/bench*
# Delete the associated metadata files for those executables
- rm -rf ./target/debug/.fingerprint/criterion*
- rm -rf ./target/debug/.fingerprint/bench*
# Note that all of the above need to be repeated for `release/` instead of
# `debug/` if your build script builds artifacts in release mode.
# This is just more metadata
- rm -f ./target/.rustc_info.json
# Also delete the saved benchmark data from the test benchmarks. If you
# have Criterion.rs benchmarks, you'll probably want to do this as well, or set
# the CRITERION_HOME environment variable to move that data out of the
# `target/` directory.
- rm -rf ./target/criterion
# Also delete cargo's registry index. This is updated on every build, but it's
# way cheaper to re-download than the whole cache is.
- rm -rf "$TRAVIS_HOME/.cargo/registry/index/"
- rm -rf "$TRAVIS_HOME/.cargo/registry/src"
cache:
directories:
- ./target
- $TRAVIS_HOME/.cache/sccache
- $TRAVIS_HOME/.cargo/
- $TRAVIS_HOME/.rustup/
before_script:
- cargo install --force cargo-audit
- cargo generate-lockfile
- rustup component add clippy-preview
script:
- cargo audit
# We use OSX so that we can get a reasonably up to date version of SQLCipher.
# (The version in Travis's default Ubuntu Trusty is much too old).
os: osx
before_install:
- brew install sqlcipher
rust:
- 1.43.0
- 1.44.0
- 1.45.0
- 1.46.0
- 1.47.0
- stable
- beta
- nightly
matrix:
allow_failures:
- rust: nightly
fast_finish: true
jobs:
include:
- stage: "Test iOS"
rust: 1.47.0
script: ./scripts/test-ios.sh
- stage: "Docs"
rust: 1.47.0
script: ./scripts/cargo-doc.sh
script:
- cargo build --verbose --all
- cargo clippy --all-targets --all-features -- -D warnings -A clippy::comparison-chain -A clippy::many-single-char-names # Check tests and non-default crate features.
- cargo test --verbose --all
- cargo test --features edn/serde_support --verbose --all
# We can't pick individual features out with `cargo test --all` (At the time of this writing, this
# works but does the wrong thing because of a bug in cargo, but its fix will be to disallow doing
# this all-together, see https://github.com/rust-lang/cargo/issues/5364 for more information). To
# work around this, we run tests individually for sub-crates that rely on `rusqlite`.
- |
for crate in "" "db" "db-traits" "ffi" "public-traits" "query-projector" "query-projector-traits" "query-pull" "sql" "tolstoy" "tolstoy-traits" "transaction" "tools/cli"; do
cargo test --manifest-path ./$crate/Cargo.toml --verbose --no-default-features --features sqlcipher
done

25
.vscode/launch.json vendored Normal file

@ -0,0 +1,25 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "attach",
"name": "Attach to CLI (debug)",
"program": "${workspaceFolder}/target/debug/mentat_cli"
},
{
"type": "lldb",
"request": "launch",
"name": "CLI",
"stdio": "*",
"program": "${workspaceFolder}/target/debug/mentat_cli",
"args": [],
"cwd": "${workspaceFolder}",
"internalConsoleOptions": "openOnSessionStart",
"terminal": "integrated"
},
]
}

30
.vscode/settings.json vendored Normal file

@ -0,0 +1,30 @@
// Place your settings in this file to overwrite default and user settings.
{
// Newline at EOF.
"files.insertFinalNewline": true,
// Configure glob patterns for excluding files and folders.
"files.exclude": {
"**/.git": true,
"**/.svn": true,
"**/.hg": true,
"**/CVS": true,
"**/.DS_Store": true,
"**/.*.sw*": true, // Vim swap files.
"**/*~": true, // Vim backup files.
".cljs_*_repl": true,
".gitignore": true,
".lein*": true,
".sw*": true,
"target": true, // Top-level build output.
"*/target": true // Sub-crate build output.
},
"rust.customTestConfigurations": [
{
"title": "All Crates",
"args": [
"--all"
]
}
]
}

58
.vscode/tasks.json vendored Normal file

@ -0,0 +1,58 @@
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"label": "Test all",
"command": "cargo",
"args": [
"test",
"--all",
],
"problemMatcher": [
"$rustc"
],
"group": "test"
},
{
"type": "shell",
"label": "Run CLI",
"command": "cargo",
"args": [
"debugcli",
],
"problemMatcher": [
"$rustc"
],
"group": "test"
},
{
"type": "shell",
"label": "Build CLI",
"command": "cargo",
"args": [
"build",
"-p",
"mentat_cli",
],
"problemMatcher": [
"$rustc"
],
"group": "build"
},
{
"type": "shell",
"label": "Build Mentat",
"command": "cargo",
"args": [
"build",
],
"problemMatcher": [
"$rustc"
],
"group": "build"
}
]
}

39
CHANGELOG.md Normal file

@ -0,0 +1,39 @@
# 0.11.1 (2018-08-09)
* sdks/android compiled against:
* Kotlin standard library 1.2.41
* **API changes**: Changed wording of MentatError::ConflictingAttributeDefinitions, MentatError::ExistingVocabularyTooNew, MentatError::UnexpectedCoreSchema.
* [Commits](https://github.com/mozilla/mentat/compare/v0.11.0...v0.11.1)
# 0.11 (2018-07-31)
* sdks/android compiled against:
* Kotlin standard library 1.2.41
* **sdks/android**: `Mentat()` constructor replaced with `open` factory method.
* [Commits](https://github.com/mozilla/mentat/compare/v0.10.0...v0.11.0)
# 0.10 (2018-07-26)
* sdks/android compiled against:
* Kotlin standard library 1.2.41
* **API changes**:
* `store_open{_encrypted}` now accepts an error parameter; corresponding constructors changed to be factory functions.
* [Commits](https://github.com/mozilla/mentat/compare/v0.9.0...v0.10.0)
# 0.9 (2018-07-25)
* sdks/android compiled against:
* Kotlin standard library 1.2.41
* **API changes**:
* Mentat partitions now enforce their integrity, denying entids that aren't already known.
* **sdks/android**: First version published to nalexander's personal bintray repository.
* Various bugfixes and refactorings (see commits below for details)
* [Commits](https://github.com/mozilla/mentat/compare/v0.8.1...v0.9.0)

118
Cargo.toml Normal file

@ -0,0 +1,118 @@
[package]
edition = "2021"
authors = [
"Richard Newman <rnewman@twinql.com>",
"Nicholas Alexander <nalexander@mozilla.com>",
"Victor Porof <vporof@mozilla.com>",
"Jordan Santell <jsantell@mozilla.com>",
"Joe Walker <jwalker@mozilla.com>",
"Emily Toop <etoop@mozilla.com>",
"Grisha Kruglov <grigory@kruglov.ca>",
"Kit Cambridge <kit@yakshaving.ninja>",
"Edouard Oger <eoger@fastmail.com>",
"Thom Chiovoloni <tchiovoloni@mozilla.com>",
"Gregory Burd <greg@burd.me>",
]
name = "mentat"
version = "0.14.0"
build = "build/version.rs"
[features]
default = ["bundled_sqlite3", "syncable"]
bundled_sqlite3 = ["rusqlite/bundled"]
sqlcipher = ["rusqlite/sqlcipher", "mentat_db/sqlcipher"]
syncable = ["mentat_tolstoy", "tolstoy_traits", "mentat_db/syncable"]
[workspace]
members = [
"tools/cli",
"ffi", "core", "core-traits","db", "db-traits", "edn", "public-traits", "query-algebrizer",
"query-algebrizer-traits", "query-projector", "query-projector-traits","query-pull",
"query-sql", "sql", "sql-traits", "tolstoy-traits", "tolstoy", "transaction"
]
[build-dependencies]
rustc_version = "~0.4"
[dev-dependencies]
assert_approx_eq = "~1.1"
#[dev-dependencies.cargo-husky]
#version = "1"
#default-features = false # Disable features which are enabled by default
#features = ["run-for-all", "precommit-hook", "run-cargo-fmt", "run-cargo-test", "run-cargo-check", "run-cargo-clippy"]
#cargo audit
#cargo outdated
[dependencies]
chrono = "~0.4"
failure = "~0.1"
lazy_static = "~1.4"
time = "0.3.1"
log = "~0.4"
uuid = { version = "~1", features = ["v4", "serde"] }
[dependencies.rusqlite]
version = "~0.29"
features = ["limits", "bundled"]
[dependencies.edn]
path = "edn"
[dependencies.core_traits]
path = "core-traits"
[dependencies.mentat_core]
path = "core"
[dependencies.mentat_sql]
path = "sql"
[dependencies.mentat_db]
path = "db"
[dependencies.db_traits]
path = "db-traits"
[dependencies.mentat_query_algebrizer]
path = "query-algebrizer"
[dependencies.query_algebrizer_traits]
path = "query-algebrizer-traits"
[dependencies.mentat_query_projector]
path = "query-projector"
[dependencies.query_projector_traits]
path = "query-projector-traits"
[dependencies.mentat_query_pull]
path = "query-pull"
[dependencies.query_pull_traits]
path = "query-pull-traits"
[dependencies.mentat_query_sql]
path = "query-sql"
[dependencies.sql_traits]
path = "sql-traits"
[dependencies.public_traits]
path = "public-traits"
[dependencies.mentat_transaction]
path = "transaction"
[dependencies.mentat_tolstoy]
path = "tolstoy"
optional = true
[dependencies.tolstoy_traits]
path = "tolstoy-traits"
optional = true
[profile.release]
opt-level = 3
debug = false
lto = true

11
Makefile Normal file

@ -0,0 +1,11 @@
.PHONY: outdated fix

outdated:
	for p in $(dirname $(ls Cargo.toml */Cargo.toml */*/Cargo.toml)); do echo $p; (cd $p; cargo outdated -R); done

fix:
	$(for p in $(dirname $(ls Cargo.toml */Cargo.toml */*/Cargo.toml)); do echo $p; (cd $p; cargo fix --allow-dirty --broken-code --edition-idioms); done)

upgrades:
	cargo upgrades

29
NOTES Normal file

@ -0,0 +1,29 @@
* sqlite -> monetdb-lite-c + fts5 + bayesdb
* fts5 + regex + tre/fuzzy + codesearch/trigram filters, streaming bloom filters https://arxiv.org/abs/2001.03147
* datalog to "goblin relational engine" (gtk)
* branching distributed wal (chain replication) and CRDTs
* alf:fn query language
* datatypes via bit syntax+some code?
* pure lang?
* https://github.com/dahjelle/pouch-datalog
* https://github.com/edn-query-language/eql
* https://github.com/borkdude/jet
* https://github.com/walmartlabs/dyn-edn
* https://github.com/go-edn/edn
* https://github.com/smothers/cause
* https://github.com/oscaro/eq
* https://github.com/clojure-emacs/parseedn
* https://github.com/exoscale/seql
* https://github.com/axboe/liburing
* (EAVtf) - entity attribute value type flags
* distributed, replicated WAL
* https://github.com/mirage/irmin
* What if facts had "confidence" [0-1)?
* entity attribute value type flags
* https://github.com/probcomp/BayesDB
* https://github.com/probcomp/bayeslite
* http://probcomp.csail.mit.edu/software/bayesdb/

287
README.md

@ -1,20 +1,21 @@
# Datomish
# Project Mentat
[![Build Status](https://travis-ci.org/qpdb/mentat.svg?branch=master)](https://travis-ci.org/qpdb/mentat)
Datomish is a persistent, embedded knowledge base. It's written in ClojureScript, and draws heavily on [DataScript](https://github.com/tonsky/datascript) and [Datomic](http://datomic.com).
Project Mentat is a persistent, embedded knowledge base. It draws heavily on [DataScript](https://github.com/tonsky/datascript) and [Datomic](http://datomic.com).
Datomish compiles into a single JavaScript file, and is usable both in Node (on top of `promise_sqlite`) and in Firefox (on top of `Sqlite.jsm`). It also works in pure Clojure on the JVM on top of `jdbc-sqlite`.
This project was started by Mozilla, but [is no longer being developed or actively maintained by them](https://mail.mozilla.org/pipermail/firefox-dev/2018-September/006780.html). [Their repository](https://github.com/mozilla/mentat) was marked read-only, [this fork](https://github.com/qpdb/mentat) is an attempt to revive and continue that interesting work. We owe the team at Mozilla more than words can express for inspiring us all and for this project in particular.
There's an example Firefox restartless add-on in the [`addon`](https://github.com/mozilla/datomish/tree/master/addon) directory; build instructions are below.
*Thank you*.
We are in the early stages of reimplementing Datomish in [Rust](https://www.rust-lang.org/). You can follow that work in [its long-lived branch](https://github.com/mozilla/datomish/tree/rust), and issue #133.
[Documentation](https://docs.rs/mentat)
---
## Motivation
Datomish is intended to be a flexible relational (not key-value, not document-oriented) store that doesn't leak its storage schema to users, and doesn't make it hard to grow its domain schema and run arbitrary queries.
Mentat is a flexible relational (not key-value, not document-oriented) store that makes it easy to describe, grow, and reuse your domain schema.
Our short-term goal for Project Mentat is to build a system that, as the basis for a User Agent Service, can support multiple [Tofino](https://github.com/mozilla/tofino) UX experiments without having a storage engineer do significant data migration, schema work, or revving of special-purpose endpoints.
By abstracting away the storage schema, and by exposing change listeners outside the database (not via triggers), we hope to allow both the data store itself and embedding applications to use better architectures, meeting performance goals in a way that allows future evolution.
By abstracting away the storage schema, and by exposing change listeners outside the database (not via triggers), we hope to make domain schemas stable, and allow both the data store itself and embedding applications to use better architectures, meeting performance goals in a way that allows future evolution.
## Data storage is hard
@ -24,22 +25,22 @@ We've observed that data storage is a particular area of difficulty for software
- Model their domain entities and relationships.
- Encode that model _efficiently_ and _correctly_ using the features available in the database.
- Plan for future extensions and performance tuning.
In a SQL database, the same schema definition defines everything from high-level domain relationships through to numeric field sizes in the same smear of keywords. It's difficult for someone unfamiliar with the domain to determine from such a schema what's a domain fact and what's an implementation concession — are all part numbers always 16 characters long, or are we trying to save space? — or, indeed, whether a missing constraint is deliberate or a bug.
The developer must think about foreign key constraints, compound uniqueness, and nullability. They must consider indexing, synchronizing, and stable identifiers. Most developers simply don't do enough work in SQL to get all of these things right. Storage thus becomes the specialty of a few individuals.
Which one of these is correct?
```edn
{:db/id :person/email
:db/valueType :db.type/string
:db/cardinality :db.cardinality/many ; People can have multiple email addresses.
:db/unique :db.unique/identity ; For our purposes, each email identifies one person.
:db/index true} ; We want fast lookups by email.
:db/valueType :db.type/string
:db/cardinality :db.cardinality/many ; People can have multiple email addresses.
:db/unique :db.unique/identity ; For our purposes, each email identifies one person.
:db/index true} ; We want fast lookups by email.
{:db/id :person/friend
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many} ; People can have many friends.
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many} ; People can have many friends.
```
```sql
CREATE TABLE people (
@ -52,7 +53,7 @@ We've observed that data storage is a particular area of difficulty for software
FOREIGN KEY friend REFERENCES people(id), -- Bug: no compound uniqueness constraint, so we can have dupe friendships.
);
```
They both have limitations — the Mentat schema allows only for an open world (it's possible to declare friendships with people whose email isn't known), and requires validation code to enforce email string correctness — but we think that even such a tiny SQL example is harder to understand and obscures important domain decisions.
- Queries are intimately tied to structural storage choices. That not only hides the declarative domain-level meaning of the query — it's hard to tell what a query is trying to do when it's a 100-line mess of subqueries and `LEFT OUTER JOIN`s — but it also means a simple structural schema change requires auditing _every query_ for correctness.
@ -70,21 +71,23 @@ We've observed that data storage is a particular area of difficulty for software
## Comparison to DataScript
DataScript asks the question: "What if creating a database would be as cheap as creating a Hashmap?"
DataScript asks the question: "What if creating a database were as cheap as creating a Hashmap?"
Datomish is not interested in that. Instead, it's strongly interested in persistence and performance, with very little interest in immutable databases/databases as values or throwaway use.
Mentat is not interested in that. Instead, it's focused on persistence and performance, with very little interest in immutable databases/databases as values or throwaway use.
One might say that Datomish's question is: "What if an SQLite database could store arbitrary relations, for arbitrary consumers, without them having to coordinate an up-front storage-level schema?"
One might say that Mentat's question is: "What if a database could store arbitrary relations, for arbitrary consumers, without them having to coordinate an up-front storage-level schema?"
Consider this a practical approach to facts, to knowledge and its storage and access, much like SQLite is a practical RDBMS.
(Note that [domain-level schemas are very valuable](http://martinfowler.com/articles/schemaless/).)
Another possible question would be: "What if we could bake some of the concepts of CQRS and event sourcing into a persistent relational store, such that the transaction log itself were of value to queries?"
Another possible question would be: "What if we could bake some of the concepts of [CQRS and event sourcing](http://www.baeldung.com/cqrs-event-sourced-architecture-resources) into a persistent relational store, such that the transaction log itself were of value to queries?"
Some thought has been given to how databases as values — long-term references to a snapshot of the store at an instant in time — could work in this model. It's not impossible; it simply has different performance characteristics.
Just like DataScript, Datomish speaks Datalog for querying and takes additions and retractions as input to a transaction. Unlike DataScript, Datomish's API is asynchronous.
Just like DataScript, Mentat speaks Datalog for querying and takes additions and retractions as input to a transaction.
Unlike DataScript, Datomish exposes free-text indexing, thanks to SQLite.
Unlike DataScript, Mentat exposes free-text indexing, thanks to SQLite/FTS.
## Comparison to Datomic
@ -93,159 +96,145 @@ Datomic is a server-side, enterprise-grade data storage system. Datomic has a be
Many of these design decisions are inapplicable to deployed desktop software; indeed, the use of multiple JVM processes makes Datomic's use in a small desktop app, or a mobile device, prohibitive.
Datomish is designed for embedding, initially in an Electron app ([Tofino](https://github.com/mozilla/tofino)). It is less concerned with exposing consistent database states outside transaction boundaries, because that's less important here, and dropping some of these requirements allows us to leverage SQLite itself.
## Comparison to SQLite
SQLite is a traditional SQL database in most respects: schemas conflate semantic, structural, and datatype concerns, as described above; the main interface with the database is human-first textual queries; sparse and graph-structured data are 'unnatural', if not always inefficient; experimenting with and evolving data models are error-prone and complicated activities; and so on.
Datomish aims to offer many of the advantages of SQLite — single-file use, embeddability, and good performance — while building a more relaxed and expressive data model on top.
Mentat aims to offer many of the advantages of SQLite — single-file use, embeddability, and good performance — while building a more relaxed, reusable, and expressive data model on top.
---
## Contributing
Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.
See [CONTRIBUTING.md](/CONTRIBUTING.md) for further notes.
See [CONTRIBUTING.md](CONTRIBUTING.md) for further notes.
This project is very new, so we'll probably revise these guidelines. Please
comment on an issue before putting significant effort in if you'd like to
contribute.
---
## License
## Building
Datomish is currently licensed under the Apache License v2.0. See the `LICENSE` file for details.
You first need to clone the project. To build and test the project, we use [Cargo](https://crates.io/install).
Datomish includes some code from DataScript, kindly relicensed by Nikita Prokopov.
To build all of the crates in the project use:
````
cargo build
````
To run tests use:
````
# Run tests for everything.
cargo test --all
# Run tests for just the query-algebrizer folder (specify the crate, not the folder),
# printing debug output.
cargo test -p mentat_query_algebrizer -- --nocapture
````
For most `cargo` commands you can pass the `-p` argument to run the command just on that package. So, `cargo build -p mentat_query_algebrizer` will build just the "query-algebrizer" folder.
## What are all of these crates?
We use multiple sub-crates for Mentat for four reasons:
1. To improve incremental build times.
2. To encourage encapsulation; writing `extern crate` feels worse than just `use mod`.
3. To simplify the creation of targets that don't use certain features: _e.g._, a build with no syncing, or with no query system.
4. To allow for reuse (_e.g._, the EDN parser is essentially a separate library).
So what are they?
### Building blocks
#### `edn`
Our EDN parser. It uses `rust-peg` to parse [EDN](https://github.com/edn-format/edn), which is Clojure/Datomic's richer alternative to JSON. `edn`'s dependencies are all either for representing rich values (`chrono`, `uuid`, `ordered-float`) or for parsing (`serde`, `peg`).
In addition, this crate turns a stream of EDN values into a representation suitable to be transacted.
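As a rough illustration (not taken from the crate's docs), parsing a single EDN value might look like the sketch below; the `edn::parse::value` entry point and `without_spans()` are the same calls the core tests later in this diff use, and the rest is illustrative.

```rust
// Minimal sketch: parse one EDN value with the `edn` crate. The entry point
// (`edn::parse::value`) and `without_spans()` mirror their use in the tests
// further down in this diff; everything else here is illustrative.
extern crate edn;

fn main() {
    let input = r#"{:page/url "https://mozilla.org" :page/title "Mozilla"}"#;
    let parsed = edn::parse::value(input)
        .expect("well-formed EDN")
        .without_spans(); // Drop source-position information.
    println!("{:?}", parsed);
}
```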
#### `mentat_core`
This is the lowest-level Mentat crate. It collects together the following things:
- Fundamental domain-specific data structures like `ValueType` and `TypedValue`.
- Fundamental SQL-related linkages like `SQLValueType`. These encode the mapping between Mentat's types and values and their representation in our SQLite format.
- Conversion to and from EDN types (_e.g._, `edn::Keyword` to `TypedValue::Keyword`).
- Common utilities (some in the `util` module, and others that should be moved there or broken out) like `Either`, `InternSet`, and `RcCounter`.
- Reusable lazy namespaced keywords (_e.g._, `DB_TYPE_DOUBLE`) that are used by `mentat_db` and EDN serialization of core structs.
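For a flavour of those fundamental types, here is a small sketch built from the conversions that the tests in `core/src/lib.rs` below exercise; everything not shown in those tests is an assumption.

```rust
// Sketch of the fundamental value types. `Keyword::namespaced` and the
// `From<DateTime<Utc>>` conversion are taken from the tests in core/src/lib.rs
// below; the rest is illustrative.
extern crate chrono;
extern crate core_traits;
extern crate edn;

use core_traits::TypedValue;
use edn::Keyword;

fn main() {
    // Idents are namespaced keywords, e.g. :page/url.
    let ident = Keyword::namespaced("page", "url");
    println!("{:?}", ident);

    // A chrono DateTime becomes a TypedValue::Instant; per the datetime
    // truncation test, sub-microsecond precision is dropped on conversion.
    let tv: TypedValue = chrono::Utc::now().into();
    if let TypedValue::Instant(t) = tv {
        println!("instant: {}", t);
    }
}
```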
### Types
#### `mentat_query`
This crate defines the structs and enums that are the output of the query parser and used by the translator and algebrizer. `SrcVar`, `NonIntegerConstant`, `FnArg`… these all live here.
#### `mentat_query_sql`
Similarly, this crate defines an abstract representation of a SQL query as understood by Mentat. This bridges between Mentat's types (_e.g._, `TypedValue`) and SQL concepts (`ColumnOrExpression`, `GroupBy`). It's produced by the algebrizer and consumed by the translator.
### Query processing
#### `mentat_query_algebrizer`
This is the biggest piece of the query engine. It takes a parsed query, which at this point is _independent of a database_, and combines it with the current state of the schema and data. This involves translating keywords into attributes, abstract values into concrete values with a known type, and producing an `AlgebraicQuery`, which is a representation of how a query's Datalog semantics can be satisfied as SQL table joins and constraints over Mentat's SQL schema. An algebrized query is tightly coupled with both the disk schema and the vocabulary present in the store when the work is done.
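To make that concrete, here is an illustrative (and deliberately approximate) pairing of a Datalog query with the shape of SQL that algebrizing and translation ultimately produce; the table layout, column names, and entids shown are assumptions, not the exact generated SQL.

```rust
// Illustrative only: a Datalog query and, roughly, the SQL shape it is
// algebrized and translated into. Table/column names and the entids (65537,
// 65538) are assumed for the example; the real generated SQL differs.
fn main() {
    let datalog = r#"[:find ?url ?title
                      :where [?page :page/url ?url]
                             [?page :page/title ?title]]"#;

    // Each :where pattern becomes a constraint over a datoms table; keywords
    // are resolved to attribute entids via the schema's ident map, and the
    // shared variable ?page becomes a join condition.
    let roughly = "
        SELECT datoms0.v AS url, datoms1.v AS title
        FROM datoms AS datoms0, datoms AS datoms1
        WHERE datoms0.a = 65537      -- :page/url
          AND datoms1.a = 65538      -- :page/title
          AND datoms1.e = datoms0.e  -- both patterns bind ?page
    ";

    println!("{}\n  ~>\n{}", datalog, roughly);
}
```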
#### `mentat_query_projector`
A Datalog query _projects_ some of the variables in the query into data structures in the output. This crate takes an algebrized query and a projection list and figures out how to get values out of the running SQL query and into the right format for the consumer.
#### `mentat_query_translator`
This crate works with all of the above to turn the output of the algebrizer and projector into the data structures defined in `mentat_query_sql`.
#### `mentat_sql`
This simple crate turns those data structures into SQL text and bindings that can later be executed by `rusqlite`.
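The end result — SQL text plus bindings executed by `rusqlite` — looks roughly like the following sketch. This is plain `rusqlite` usage with an invented table, not `mentat_sql`'s own builder API.

```rust
// Sketch of "SQL text and bindings executed by rusqlite". This is ordinary
// rusqlite usage with an invented table, not mentat_sql's builder API.
extern crate rusqlite;

use rusqlite::{params, Connection};

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open_in_memory()?;
    conn.execute_batch("CREATE TABLE datoms (e INTEGER, a INTEGER, v);")?;

    // Bindings keep the SQL text fixed while values vary per execution.
    conn.execute(
        "INSERT INTO datoms (e, a, v) VALUES (?1, ?2, ?3)",
        params![65536, 65537, "https://mozilla.org"],
    )?;

    let url: String = conn.query_row(
        "SELECT v FROM datoms WHERE e = ?1 AND a = ?2",
        params![65536, 65537],
        |row| row.get(0),
    )?;
    println!("{}", url);
    Ok(())
}
```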
### The data layer: `mentat_db`
This is a big one: it implements the core storage logic on top of SQLite. This crate is responsible for bootstrapping new databases, transacting new data, maintaining the attribute cache, and building and updating in-memory representations of the storage schema.
### The main crate
The top-level main crate of Mentat assembles these component crates into something useful. It wraps up a connection to a database file and the associated metadata into a `Store`, and encapsulates an in-progress transaction (`InProgress`). It provides modules for programmatically writing (`entity_builder.rs`) and managing vocabulary (`vocabulary.rs`).
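A hedged sketch of that surface follows; `Store::open` with an empty path and the string form of `transact` are assumptions about the public API, while the `TxReport` fields come from `core/src/tx_report.rs` later in this diff.

```rust
// Hedged sketch of the top-level API. `Store::open("")` (assumed to give a
// scratch store) and `transact(&str)` are assumptions about the public
// surface; the TxReport fields match core/src/tx_report.rs in this diff.
extern crate mentat;

use mentat::Store;

fn main() {
    let mut store = Store::open("").expect("open store");

    // Define a :page/url attribute; "p" is a string tempid that the
    // transactor resolves to a fresh entid.
    let report = store
        .transact(
            r#"[[:db/add "p" :db/ident       :page/url]
                [:db/add "p" :db/valueType   :db.type/string]
                [:db/add "p" :db/cardinality :db.cardinality/one]]"#,
        )
        .expect("transact");

    println!("tx {} at {}, tempids: {:?}", report.tx_id, report.tx_instant, report.tempids);
}
```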
### Syncing
Sync code lives, for [referential reasons](https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying), in a crate named `tolstoy`. This code is a work in progress; the current state is a proof-of-concept implementation that, in most cases, relies on the internal transactor to make progress, and comes with basic support for timelines. See [Tolstoy's documentation](https://github.com/mozilla/mentat/tree/master/tolstoy/README.md) for details.
### The command-line interface
This is under `tools/cli`. It's essentially an external consumer of the main `mentat` crate. This code is ugly, but it mostly works.
---
## SQLite dependencies
Mentat uses partial indices, which are available in SQLite 3.8.0 and higher. It relies on correlation between aggregate and non-aggregate columns in the output, which was added in SQLite 3.7.11.
It also uses FTS4, which is [a compile time option](http://www.sqlite.org/fts3.html#section_2).
By default, Mentat specifies the `"bundled"` feature for `rusqlite`, which uses a relatively recent
version of SQLite. If you want to link against the system version of SQLite, omit `"bundled_sqlite3"`
from Mentat's features.
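Whichever SQLite you end up linking, a quick probe like the sketch below — plain `rusqlite`, with invented table and index names — can confirm that the library is new enough for partial indices and has FTS compiled in.

```rust
// Illustrative probe, using plain rusqlite, that the linked SQLite supports
// what Mentat needs: a recent version, partial indices, and FTS4. Table and
// index names here are invented for the example.
extern crate rusqlite;

use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    // Needs to be at least 3.8.0 for partial indices.
    println!("linked SQLite: {}", rusqlite::version());

    let conn = Connection::open_in_memory()?;
    conn.execute_batch(
        "CREATE TABLE datoms (e INTEGER, a INTEGER, v TEXT, index_fulltext TINYINT);
         -- Partial index: only rows flagged for fulltext are indexed.
         CREATE INDEX idx_fulltext ON datoms (a, v) WHERE index_fulltext IS NOT 0;",
    )?;

    // Fails at runtime if FTS4 was not compiled in.
    conn.execute_batch("CREATE VIRTUAL TABLE fulltext_values USING fts4 (text);")?;
    Ok(())
}
```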
## Prep
You'll need [Leiningen](http://leiningen.org).
```
# If you use nvm.
nvm use 6
lein deps
npm install
# If you want a decent REPL.
brew install rlwrap
```

```toml
[dependencies.mentat]
version = "0.6"
# System sqlite is known to be new.
default-features = false
```
## Running a REPL
---
### Starting a ClojureScript REPL from the terminal
## License
```
rlwrap lein run -m clojure.main repl.clj
```
### Connecting to a ClojureScript environment from Vim
You'll need `vim-fireplace`. Install using Pathogen.
First, start a Clojure REPL with an nREPL server. Then load our ClojureScript REPL and dependencies. Finally, connect to it from Vim.
```
$ lein repl
nREPL server started on port 62385 on host 127.0.0.1 - nrepl://127.0.0.1:62385
REPL-y 0.3.7, nREPL 0.2.10
Clojure 1.8.0
Java HotSpot(TM) 64-Bit Server VM 1.8.0_60-b27
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
user=> (load-file "repl.clj")
Reading analysis cache for jar:file:/Users/rnewman/.m2/repository/org/clojure/clojurescript/1.9.89/clojurescript-1.9.89.jar!/cljs/core.cljs
Compiling out/cljs/nodejs.cljs
Compiling src/datomish/sqlite.cljs
Compiling src/datomish/core.cljs
ClojureScript Node.js REPL server listening on 57134
Watch compilation log available at: out/watch.log
To quit, type: :cljs/quit
cljs.user=>
```
In Vim, in the working directory:
```
:Piggieback (cljs.repl.node/repl-env)
```
Now you can use `:Eval`, `cqc`, and friends to evaluate code. Fireplace should connect automatically, but if it doesn't:
```
:Connect nrepl://localhost:62385
```
## To run tests in ClojureScript
Run `lein doo node test once`, or `lein doo node test auto` to re-run on file changes.
## To run tests in Clojure
Run `lein test`.
## To run smoketests with the built release library in a Node environment
```
# Build.
lein cljsbuild once release-node
npm run test
```
## To build for a Firefox add-on
```
lein cljsbuild once release-browser
```
### To build and run the example Firefox add-on:
First build as above, so that `datomish.js` exists.
Then:
```
cd addon
./build.sh
cd release
./run.sh
```
## Preparing an NPM release
The intention is that the `target/release-node/` directory is roughly the shape of an npm-ready JavaScript package.
To generate a require/import-ready `target/release-node/datomish.js`, run
```
lein cljsbuild once release-node
```
To verify that importing into Node.js succeeds, run
```
npm run test
```
## To locally install for ClojureScript use
```
lein with-profile node install
```
Many thanks to [David Nolen](https://github.com/swannodette) and [Nikita Prokopov](https://github.com/tonsky) for demonstrating how to package ClojureScript for distribution via npm.
Project Mentat is currently licensed under the Apache License v2.0. See the `LICENSE` file for details.


@ -1,6 +0,0 @@
{
"presets": ["es2015"],
"plugins": [
"transform-async-to-generator"
]
}


@ -1,3 +0,0 @@
Icon file is "Line Graph" by Cris Dobbins, from The Noun Project.
https://thenounproject.com/term/line-graph/145324/


@ -1,3 +0,0 @@
cp ../target/release-browser/datomish.js release/
node_modules/.bin/webpack -p
cat src/wrapper.prefix built/index.js > release/index.js


@ -1,23 +0,0 @@
{
"name": "datomish-example",
"version": "1.0.0",
"description": "A test add-on for Datomish and Firefox.",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "MPL-2.0",
"devDependencies": {
"babel": "^6.5.2",
"babel-cli": "^6.14.0",
"babel-core": "^6.14.0",
"babel-loader": "^6.2.5",
"babel-plugin-transform-async-to-generator": "^6.8.0",
"babel-preset-es2015": "^6.14.0",
"webpack": "^1.13.2"
},
"dependencies": {
"babel-polyfill": "^6.13.0"
}
}


@ -1,2 +0,0 @@
#Datomish Test
An example add-on that loads Datomish on top of Sqlite.jsm.

Binary file not shown.


@ -1,15 +0,0 @@
{
"title": "Datomish Test",
"name": "datomish-test",
"version": "0.0.1",
"description": "An example add-on that loads Datomish on top of Sqlite.jsm.",
"main": "index.js",
"author": "Richard Newman <rnewman@mozilla.com>",
"engines": {
"firefox": ">=48.0a1"
},
"license": "MPL-2.0",
"keywords": [
"jetpack"
]
}


@ -1 +0,0 @@
jpm run -b /Applications/FirefoxNightly.app/


@ -1,92 +0,0 @@
/* Any copyright is dedicated to the Public Domain.
http://creativecommons.org/publicdomain/zero/1.0/ */
var self = require("sdk/self");
var buttons = require('sdk/ui/button/action');
var tabs = require('sdk/tabs');
var datomish = require("datomish.js");
var schema = {
"name": "pages",
"attributes": [
{"name": "page/url",
"type": "string",
"cardinality": "one",
"unique": "identity",
"doc": "A page's URL."},
{"name": "page/title",
"type": "string",
"cardinality": "one",
"fulltext": true,
"doc": "A page's title."},
{"name": "page/content",
"type": "string",
"cardinality": "one", // Simple for now.
"fulltext": true,
"doc": "A snapshot of the page's content. Should be plain text."},
]
};
async function initDB(path) {
let db = await datomish.open(path);
await db.ensureSchema(schema);
return db;
}
async function findURLs(db) {
let query = `[:find ?page ?url ?title :in $ :where [?page :page/url ?url][(get-else $ ?page :page/title "") ?title]]`;
let options = new Object();
options["limit"] = 10;
return datomish.q(db.db(), query, options);
}
async function findPagesMatching(db, string) {
let query =
`[:find ?url ?title
:in $ ?str
:where
[(fulltext $ :any ?str) [[?page]]]
[?page :page/url ?url]
[(get-else $ ?page :page/title "") ?title]]`;
return datomish.q(db.db(), query, {"limit": 10, "inputs": {"str": string}});
}
async function savePage(db, url, title, content) {
let datom = {"db/id": 55, "page/url": url};
if (title) {
datom["page/title"] = title;
}
if (content) {
datom["page/content"] = content;
}
let txResult = await db.transact([datom]);
return txResult;
}
async function handleClick(state) {
let db = await datomish.open("/tmp/testing.db");
await db.ensureSchema(schema);
let txResult = await savePage(db, tabs.activeTab.url, tabs.activeTab.title, "Content goes here");
console.log("Transaction returned " + JSON.stringify(txResult));
console.log("Transaction instant: " + txResult.txInstant);
let results = await findURLs(db);
results = results.map(r => r[1]);
console.log("Query results: " + JSON.stringify(results));
let pages = await findPagesMatching(db, "goes");
console.log("Pages: " + JSON.stringify(pages));
await db.close();
}
var button = buttons.ActionButton({
id: "datomish-save",
label: "Save Page",
icon: "./datomish-48.png",
onClick: handleClick
});


@ -1,4 +0,0 @@
// Monkeypatch.
var { setTimeout } = require("sdk/timers");
this.setTimeout = setTimeout;


@ -1,21 +0,0 @@
module.exports = {
entry: ['babel-polyfill', './src/index.js'],
output: {
filename: 'built/index.js'
},
target: 'webworker',
externals: {
'datomish.js': 'commonjs datomish.js',
'sdk/self': 'commonjs sdk/self',
'sdk/ui/button/action': 'commonjs sdk/ui/button/action',
'sdk/tabs': 'commonjs sdk/tabs'
},
module: {
loaders: [{
test: /\.js?$/,
exclude: /(node_modules)|(wrapper.prefix)/,
loader: 'babel'
}]
}
}


@ -0,0 +1,36 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
FROM mozillamobile/rust-component:buildtools-27.0.3-ndk-r17b-ndk-version-26-rust-stable-rust-beta
MAINTAINER Nick Alexander "nalexander@mozilla.com"
#----------------------------------------------------------------------------------------------------------------------
#-- Project -----------------------------------------------------------------------------------------------------------
#----------------------------------------------------------------------------------------------------------------------
ENV PROJECT_REPOSITORY "https://github.com/mozilla/mentat.git"
RUN git clone $PROJECT_REPOSITORY
WORKDIR /build/mentat
# Temporary.
RUN git fetch origin master && git checkout origin/generic-automation-images && git show-ref HEAD
# Populate dependencies.
RUN ./sdks/android/Mentat/gradlew --no-daemon -p sdks/android/Mentat tasks
# Build Rust.
RUN ./sdks/android/Mentat/gradlew --no-daemon -p sdks/android/Mentat cargoBuild
# Actually build. In the future, we might also lint (to cache additional dependencies).
RUN ./sdks/android/Mentat/gradlew --no-daemon -p sdks/android/Mentat assemble test
#----------------------------------------------------------------------------------------------------------------------
# -- Cleanup ----------------------------------------------------------------------------------------------------------
#----------------------------------------------------------------------------------------------------------------------
# Drop built Rust artifacts.
RUN cargo clean


@ -0,0 +1,81 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
FROM mozillamobile/android-components:1.4
MAINTAINER Nick Alexander "nalexander@mozilla.com"
#----------------------------------------------------------------------------------------------------------------------
#-- Configuration -----------------------------------------------------------------------------------------------------
#----------------------------------------------------------------------------------------------------------------------
ENV ANDROID_NDK_VERSION "r17b"
#----------------------------------------------------------------------------------------------------------------------
#-- System ------------------------------------------------------------------------------------------------------------
#----------------------------------------------------------------------------------------------------------------------
RUN apt-get update -qq
#----------------------------------------------------------------------------------------------------------------------
#-- Android NDK (Android SDK comes from base `android-components` image) ----------------------------------------------
#----------------------------------------------------------------------------------------------------------------------
RUN mkdir -p /build
WORKDIR /build
ENV ANDROID_NDK_HOME /build/android-ndk
RUN curl -L https://dl.google.com/android/repository/android-ndk-${ANDROID_NDK_VERSION}-linux-x86_64.zip > ndk.zip \
&& unzip ndk.zip -d /build \
&& rm ndk.zip \
&& mv /build/android-ndk-${ANDROID_NDK_VERSION} ${ANDROID_NDK_HOME}
ENV ANDROID_NDK_TOOLCHAIN_DIR /build/android-ndk-toolchain
ENV ANDROID_NDK_API_VERSION 26
RUN set -eux; \
python "$ANDROID_NDK_HOME/build/tools/make_standalone_toolchain.py" --arch="arm" --api="$ANDROID_NDK_API_VERSION" --install-dir="$ANDROID_NDK_TOOLCHAIN_DIR/arm-$ANDROID_NDK_API_VERSION" --force; \
python "$ANDROID_NDK_HOME/build/tools/make_standalone_toolchain.py" --arch="arm64" --api="$ANDROID_NDK_API_VERSION" --install-dir="$ANDROID_NDK_TOOLCHAIN_DIR/arm64-$ANDROID_NDK_API_VERSION" --force; \
python "$ANDROID_NDK_HOME/build/tools/make_standalone_toolchain.py" --arch="x86" --api="$ANDROID_NDK_API_VERSION" --install-dir="$ANDROID_NDK_TOOLCHAIN_DIR/x86-$ANDROID_NDK_API_VERSION" --force
#----------------------------------------------------------------------------------------------------------------------
#-- Rust (cribbed from https://github.com/rust-lang-nursery/docker-rust/blob/ced83778ec6fea7f63091a484946f95eac0ee611/1.27.1/stretch/Dockerfile)
#-- Rust is after the Android NDK since Rust rolls forward more frequently. Both stable and beta for advanced consumers.
#----------------------------------------------------------------------------------------------------------------------
ENV RUSTUP_HOME=/usr/local/rustup \
CARGO_HOME=/usr/local/cargo \
PATH=/usr/local/cargo/bin:$PATH \
RUST_VERSION=1.27.1
RUN set -eux; \
rustArch='x86_64-unknown-linux-gnu'; rustupSha256='4d382e77fd6760282912d2d9beec5e260ec919efd3cb9bdb64fe1207e84b9d91'; \
url="https://static.rust-lang.org/rustup/archive/1.12.0/${rustArch}/rustup-init"; \
wget "$url"; \
echo "${rustupSha256} *rustup-init" | sha256sum -c -; \
chmod +x rustup-init; \
./rustup-init -y --no-modify-path --default-toolchain $RUST_VERSION; \
rm rustup-init; \
chmod -R a+w $RUSTUP_HOME $CARGO_HOME; \
rustup --version; \
cargo --version; \
rustc --version; \
rustup target add i686-linux-android; \
rustup target add armv7-linux-androideabi; \
rustup target add aarch64-linux-android
RUN set -eux; \
rustup install beta; \
rustup target add --toolchain beta i686-linux-android; \
rustup target add --toolchain beta armv7-linux-androideabi; \
rustup target add --toolchain beta aarch64-linux-android
#----------------------------------------------------------------------------------------------------------------------
# -- Cleanup ----------------------------------------------------------------------------------------------------------
#----------------------------------------------------------------------------------------------------------------------
WORKDIR /build
RUN apt-get clean


@ -0,0 +1,122 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import datetime
import json
import os
import taskcluster
import re
import subprocess
import sys
"""
Decision task for pull requests
"""
TASK_ID = os.environ.get('TASK_ID')
REPO_URL = os.environ.get('GITHUB_HEAD_REPO_URL')
BRANCH = os.environ.get('GITHUB_HEAD_BRANCH')
COMMIT = os.environ.get('GITHUB_HEAD_SHA')
def fetch_module_names():
process = subprocess.Popen(["./gradlew", "--no-daemon", "printModules"], stdout=subprocess.PIPE,
cwd=os.path.join(os.getcwd(), "sdks", "android", "Mentat"))
(output, err) = process.communicate()
exit_code = process.wait()
if exit_code != 0:
print "Gradle command returned error:", exit_code
return re.findall('module: (.*)', output, re.M)
def schedule_task(queue, taskId, task):
print "TASK", taskId
print json.dumps(task, indent=4, separators=(',', ': '))
result = queue.createTask(taskId, task)
print "RESULT", taskId
print json.dumps(result, indent=4, separators=(',', ': '))
def create_task(name, description, command):
created = datetime.datetime.now()
expires = taskcluster.fromNow('1 year')
deadline = taskcluster.fromNow('1 day')
return {
"workerType": 'github-worker',
"taskGroupId": TASK_ID,
"expires": taskcluster.stringDate(expires),
"retries": 5,
"created": taskcluster.stringDate(created),
"tags": {},
"priority": "lowest",
"schedulerId": "taskcluster-github",
"deadline": taskcluster.stringDate(deadline),
"dependencies": [ TASK_ID ],
"routes": [],
"scopes": [],
"requires": "all-completed",
"payload": {
"features": {},
"maxRunTime": 7200,
"image": "mozillamobile/mentat:1.2",
"command": [
"/bin/bash",
"--login",
"-cx",
"export TERM=dumb && git fetch %s %s && git config advice.detachedHead false && git checkout %s && cd sdks/android/Mentat && ./gradlew --no-daemon clean %s" % (REPO_URL, BRANCH, COMMIT, command)
],
"artifacts": {},
"deadline": taskcluster.stringDate(deadline)
},
"provisionerId": "aws-provisioner-v1",
"metadata": {
"name": name,
"description": description,
"owner": "nalexander@mozilla.com",
"source": "https://github.com/mozilla/mentat"
}
}
def create_module_task(module):
return create_task(
name='Mentat Android SDK - Module ' + module,
description='Building and testing module ' + module,
command=" ".join(map(lambda x: module + ":" + x, ['assemble', 'test', 'lint'])))
# def create_detekt_task():
# return create_task(
# name='Android Components - detekt',
# description='Running detekt over all modules',
# command='detektCheck')
# def create_ktlint_task():
# return create_task(
# name='Android Components - ktlint',
# description='Running ktlint over all modules',
# command='ktlint')
if __name__ == "__main__":
queue = taskcluster.Queue({ 'baseUrl': 'http://taskcluster/queue/v1' })
modules = fetch_module_names()
if len(modules) == 0:
print "Could not get module names from gradle"
sys.exit(2)
for module in modules:
task = create_module_task(module)
task_id = taskcluster.slugId()
schedule_task(queue, task_id, task)
# schedule_task(queue, taskcluster.slugId(), create_detekt_task())
# schedule_task(queue, taskcluster.slugId(), create_ktlint_task())


@ -0,0 +1,28 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import taskcluster
SECRET_NAME = 'project/mentat/publish'
TASKCLUSTER_BASE_URL = 'http://taskcluster/secrets/v1'
def fetch_publish_secrets(secret_name):
"""Fetch and return secrets from taskcluster's secret service"""
secrets = taskcluster.Secrets({'baseUrl': TASKCLUSTER_BASE_URL})
return secrets.get(secret_name)
def main():
"""Fetch the bintray user and api key from taskcluster's secret service
and save it to local.properties in the project root directory.
"""
data = fetch_publish_secrets(SECRET_NAME)
properties_file_path = os.path.join(os.path.dirname(__file__), '../../../sdks/android/Mentat/local.properties')
with open(properties_file_path, 'w') as properties_file:
properties_file.write("bintray.user=%s\n" % data['secret']['bintray_user'])
properties_file.write("bintray.apikey=%s\n" % data['secret']['bintray_apikey'])
if __name__ == "__main__":
main()

32
build/version.rs Normal file

@ -0,0 +1,32 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use rustc_version::{version, Version};
use std::io::{self, Write};
use std::process::exit;
/// MIN_VERSION should be changed when there's a new minimum version of rustc required
/// to build the project.
static MIN_VERSION: &str = "1.69.0";
fn main() {
let ver = version().unwrap();
let min = Version::parse(MIN_VERSION).unwrap();
if ver < min {
writeln!(
&mut io::stderr(),
"Mentat requires rustc {} or higher, you were using version {}.",
MIN_VERSION,
ver
)
.unwrap();
exit(1);
}
}

23
core-traits/Cargo.toml Normal file

@ -0,0 +1,23 @@
[package]
name = "core_traits"
version = "0.0.2"
workspace = ".."
[lib]
name = "core_traits"
path = "lib.rs"
[dependencies]
chrono = { version = "~0.4", features = ["serde"] }
enum-set = "~0.0.8"
lazy_static = "~1.4"
indexmap = "~1.9"
ordered-float = { version = "~2.8", features = ["serde"] }
uuid = { version = "~1", features = ["v4", "serde"] }
serde = { version = "~1.0", features = ["rc"] }
serde_derive = "~1.0"
bytes = { version = "1.0.1", features = ["serde"] }
[dependencies.edn]
path = "../edn"
features = ["serde_support"]

1109
core-traits/lib.rs Normal file

File diff suppressed because it is too large.


@ -0,0 +1,181 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use enum_set::EnumSet;
use crate::ValueType;
trait EnumSetExtensions<T: ::enum_set::CLike + Clone> {
/// Return a set containing both `x` and `y`.
fn of_both(x: T, y: T) -> EnumSet<T>;
/// Return a clone of `self` with `y` added.
fn with(&self, y: T) -> EnumSet<T>;
}
impl<T: ::enum_set::CLike + Clone> EnumSetExtensions<T> for EnumSet<T> {
/// Return a set containing both `x` and `y`.
fn of_both(x: T, y: T) -> Self {
let mut o = EnumSet::new();
o.insert(x);
o.insert(y);
o
}
/// Return a clone of `self` with `y` added.
fn with(&self, y: T) -> EnumSet<T> {
let mut o = self.clone();
o.insert(y);
o
}
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct ValueTypeSet(pub EnumSet<ValueType>);
impl Default for ValueTypeSet {
fn default() -> ValueTypeSet {
ValueTypeSet::any()
}
}
impl ValueTypeSet {
pub fn any() -> ValueTypeSet {
ValueTypeSet(ValueType::all_enums())
}
pub fn none() -> ValueTypeSet {
ValueTypeSet(EnumSet::new())
}
/// Return a set containing only `t`.
pub fn of_one(t: ValueType) -> ValueTypeSet {
let mut s = EnumSet::new();
s.insert(t);
ValueTypeSet(s)
}
/// Return a set containing `Double` and `Long`.
pub fn of_numeric_types() -> ValueTypeSet {
ValueTypeSet(EnumSet::of_both(ValueType::Double, ValueType::Long))
}
/// Return a set containing `Double`, `Long`, and `Instant`.
pub fn of_numeric_and_instant_types() -> ValueTypeSet {
let mut s = EnumSet::new();
s.insert(ValueType::Double);
s.insert(ValueType::Long);
s.insert(ValueType::Instant);
ValueTypeSet(s)
}
/// Return a set containing `Ref` and `Keyword`.
pub fn of_keywords() -> ValueTypeSet {
ValueTypeSet(EnumSet::of_both(ValueType::Ref, ValueType::Keyword))
}
/// Return a set containing `Ref` and `Long`.
pub fn of_longs() -> ValueTypeSet {
ValueTypeSet(EnumSet::of_both(ValueType::Ref, ValueType::Long))
}
}
impl ValueTypeSet {
pub fn insert(&mut self, vt: ValueType) -> bool {
self.0.insert(vt)
}
pub fn len(self) -> usize {
self.0.len()
}
/// Returns a set containing all the types in this set and `other`.
pub fn union(self, other: ValueTypeSet) -> ValueTypeSet {
ValueTypeSet(self.0.union(other.0))
}
pub fn intersection(self, other: ValueTypeSet) -> ValueTypeSet {
ValueTypeSet(self.0.intersection(other.0))
}
/// Returns the set difference between `self` and `other`, which is the
/// set of items in `self` that are not in `other`.
pub fn difference(self, other: ValueTypeSet) -> ValueTypeSet {
ValueTypeSet(self.0 - other.0)
}
/// Return an arbitrary type that's part of this set.
/// For a set containing a single type, this will be that type.
pub fn exemplar(self) -> Option<ValueType> {
self.0.iter().next()
}
pub fn is_subset(self, other: ValueTypeSet) -> bool {
self.0.is_subset(&other.0)
}
/// Returns true if `self` and `other` contain no items in common.
pub fn is_disjoint(self, other: ValueTypeSet) -> bool {
self.0.is_disjoint(&other.0)
}
pub fn contains(self, vt: ValueType) -> bool {
self.0.contains(&vt)
}
pub fn is_empty(self) -> bool {
self.0.is_empty()
}
pub fn is_unit(self) -> bool {
self.0.len() == 1
}
pub fn iter(self) -> ::enum_set::Iter<ValueType> {
self.0.iter()
}
}
impl From<ValueType> for ValueTypeSet {
fn from(t: ValueType) -> Self {
ValueTypeSet::of_one(t)
}
}
impl ValueTypeSet {
pub fn is_only_numeric(self) -> bool {
self.is_subset(ValueTypeSet::of_numeric_types())
}
}
impl IntoIterator for ValueTypeSet {
type Item = ValueType;
type IntoIter = ::enum_set::Iter<ValueType>;
fn into_iter(self) -> Self::IntoIter {
self.0.into_iter()
}
}
impl ::std::iter::FromIterator<ValueType> for ValueTypeSet {
fn from_iter<I: IntoIterator<Item = ValueType>>(iterator: I) -> Self {
let mut ret = Self::none();
ret.0.extend(iterator);
ret
}
}
impl ::std::iter::Extend<ValueType> for ValueTypeSet {
fn extend<I: IntoIterator<Item = ValueType>>(&mut self, iter: I) {
for element in iter {
self.0.insert(element);
}
}
}

65
core-traits/values.rs Normal file

@ -0,0 +1,65 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use edn::symbols;
/// Literal `Value` instances in the "db" namespace.
///
/// Used throughout the transactor to match core DB constructs.
use edn::types::Value;
/// Declare a lazy static `ident` of type `Value::Keyword` with the given `namespace` and
/// `name`.
///
/// It may look surprising to declare a new `lazy_static!` block rather than including
/// invocations inside an existing `lazy_static!` block. The latter cannot be done, since macros
/// will be expanded outside-in. Looking at the `lazy_static!` source suggests that there is no
/// harm in repeating that macro, since internally a multi-`static` block will be expanded into
/// many single-`static` blocks.
///
/// TODO: take just ":db.part/db" and define DB_PART_DB using "db.part" and "db".
macro_rules! lazy_static_namespaced_keyword_value (
($tag:ident, $namespace:expr, $name:expr) => (
lazy_static! {
pub static ref $tag: Value = {
Value::Keyword(symbols::Keyword::namespaced($namespace, $name))
};
}
)
);
lazy_static_namespaced_keyword_value!(DB_ADD, "db", "add");
lazy_static_namespaced_keyword_value!(DB_ALTER_ATTRIBUTE, "db.alter", "attribute");
lazy_static_namespaced_keyword_value!(DB_CARDINALITY, "db", "cardinality");
lazy_static_namespaced_keyword_value!(DB_CARDINALITY_MANY, "db.cardinality", "many");
lazy_static_namespaced_keyword_value!(DB_CARDINALITY_ONE, "db.cardinality", "one");
lazy_static_namespaced_keyword_value!(DB_FULLTEXT, "db", "fulltext");
lazy_static_namespaced_keyword_value!(DB_IDENT, "db", "ident");
lazy_static_namespaced_keyword_value!(DB_INDEX, "db", "index");
lazy_static_namespaced_keyword_value!(DB_INSTALL_ATTRIBUTE, "db.install", "attribute");
lazy_static_namespaced_keyword_value!(DB_IS_COMPONENT, "db", "isComponent");
lazy_static_namespaced_keyword_value!(DB_NO_HISTORY, "db", "noHistory");
lazy_static_namespaced_keyword_value!(DB_PART_DB, "db.part", "db");
lazy_static_namespaced_keyword_value!(DB_RETRACT, "db", "retract");
lazy_static_namespaced_keyword_value!(DB_TYPE_BOOLEAN, "db.type", "boolean");
lazy_static_namespaced_keyword_value!(DB_TYPE_DOUBLE, "db.type", "double");
lazy_static_namespaced_keyword_value!(DB_TYPE_INSTANT, "db.type", "instant");
lazy_static_namespaced_keyword_value!(DB_TYPE_KEYWORD, "db.type", "keyword");
lazy_static_namespaced_keyword_value!(DB_TYPE_LONG, "db.type", "long");
lazy_static_namespaced_keyword_value!(DB_TYPE_REF, "db.type", "ref");
lazy_static_namespaced_keyword_value!(DB_TYPE_STRING, "db.type", "string");
lazy_static_namespaced_keyword_value!(DB_TYPE_URI, "db.type", "uri");
lazy_static_namespaced_keyword_value!(DB_TYPE_UUID, "db.type", "uuid");
lazy_static_namespaced_keyword_value!(DB_TYPE_BYTES, "db.type", "bytes");
lazy_static_namespaced_keyword_value!(DB_UNIQUE, "db", "unique");
lazy_static_namespaced_keyword_value!(DB_UNIQUE_IDENTITY, "db.unique", "identity");
lazy_static_namespaced_keyword_value!(DB_UNIQUE_VALUE, "db.unique", "value");
lazy_static_namespaced_keyword_value!(DB_VALUE_TYPE, "db", "valueType");

19
core/Cargo.toml Normal file

@ -0,0 +1,19 @@
[package]
name = "mentat_core"
version = "0.0.2"
workspace = ".."
[dependencies]
chrono = { version = "~0.4", features = ["serde"] }
enum-set = "~0.0"
failure = "~0.1"
indexmap = "~1.9"
ordered-float = { version = "~2.8", features = ["serde"] }
uuid = { version = "~1", features = ["v4", "serde"] }
[dependencies.core_traits]
path = "../core-traits"
[dependencies.edn]
path = "../edn"
features = ["serde_support"]

49
core/src/cache.rs Normal file

@ -0,0 +1,49 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
/// Cache traits.
use std::collections::BTreeSet;
use core_traits::{Entid, TypedValue};
use crate::Schema;
pub trait CachedAttributes {
fn is_attribute_cached_reverse(&self, entid: Entid) -> bool;
fn is_attribute_cached_forward(&self, entid: Entid) -> bool;
fn has_cached_attributes(&self) -> bool;
fn get_values_for_entid(
&self,
schema: &Schema,
attribute: Entid,
entid: Entid,
) -> Option<&Vec<TypedValue>>;
fn get_value_for_entid(
&self,
schema: &Schema,
attribute: Entid,
entid: Entid,
) -> Option<&TypedValue>;
/// Reverse lookup.
fn get_entid_for_value(&self, attribute: Entid, value: &TypedValue) -> Option<Entid>;
fn get_entids_for_value(
&self,
attribute: Entid,
value: &TypedValue,
) -> Option<&BTreeSet<Entid>>;
}
pub trait UpdateableCache<E> {
fn update<I>(&mut self, schema: &Schema, retractions: I, assertions: I) -> Result<(), E>
where
I: Iterator<Item = (Entid, Entid, TypedValue)>;
}

65
core/src/counter.rs Normal file

@ -0,0 +1,65 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use std::cell::Cell;
use std::rc::Rc;
#[derive(Clone, Default)]
pub struct RcCounter {
c: Rc<Cell<usize>>,
}
/// A simple shared counter.
impl RcCounter {
pub fn with_initial(value: usize) -> Self {
RcCounter {
c: Rc::new(Cell::new(value)),
}
}
pub fn new() -> Self {
RcCounter {
c: Rc::new(Cell::new(0)),
}
}
/// Return the next value in the sequence.
///
/// ```
/// use mentat_core::counter::RcCounter;
///
/// let c = RcCounter::with_initial(3);
/// assert_eq!(c.next(), 3);
/// assert_eq!(c.next(), 4);
/// let d = c.clone();
/// assert_eq!(d.next(), 5);
/// assert_eq!(c.next(), 6);
/// ```
pub fn next(&self) -> usize {
let current = self.c.get();
self.c.replace(current + 1)
}
}
#[cfg(test)]
mod tests {
use super::RcCounter;
#[test]
fn test_rc_counter() {
let c = RcCounter::new();
assert_eq!(c.next(), 0);
assert_eq!(c.next(), 1);
let d = c.clone();
assert_eq!(d.next(), 2);
assert_eq!(c.next(), 3);
}
}

345
core/src/lib.rs Normal file

@ -0,0 +1,345 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
extern crate chrono;
extern crate enum_set;
extern crate failure;
extern crate indexmap;
extern crate ordered_float;
extern crate uuid;
extern crate core_traits;
extern crate edn;
use core_traits::{Attribute, Entid, KnownEntid, ValueType};
mod cache;
use std::collections::BTreeMap;
pub use uuid::Uuid;
pub use chrono::{
DateTime,
Timelike, // For truncation.
};
pub use edn::parse::parse_query;
pub use edn::{Cloned, FromMicros, FromRc, Keyword, ToMicros, Utc, ValueRc};
pub use crate::cache::{CachedAttributes, UpdateableCache};
mod sql_types;
mod tx_report;
/// Core types defining a Mentat knowledge base.
mod types;
pub use crate::tx_report::TxReport;
pub use crate::types::ValueTypeTag;
pub use crate::sql_types::{SQLTypeAffinity, SQLValueType, SQLValueTypeSet};
/// Map `Keyword` idents (`:db/ident`) to positive integer entids (`1`).
pub type IdentMap = BTreeMap<Keyword, Entid>;
/// Map positive integer entids (`1`) to `Keyword` idents (`:db/ident`).
pub type EntidMap = BTreeMap<Entid, Keyword>;
/// Map attribute entids to `Attribute` instances.
pub type AttributeMap = BTreeMap<Entid, Attribute>;
/// Represents a Mentat schema.
///
/// Maintains the mapping between string idents and positive integer entids; and exposes the schema
/// flags associated to a given entid (equivalently, ident).
///
/// TODO: consider a single bi-directional map instead of separate ident->entid and entid->ident
/// maps.
#[derive(Clone, Debug, Default, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct Schema {
/// Map entid->ident.
///
/// Invariant: is the inverse map of `ident_map`.
pub entid_map: EntidMap,
/// Map ident->entid.
///
/// Invariant: is the inverse map of `entid_map`.
pub ident_map: IdentMap,
/// Map entid->attribute flags.
///
/// Invariant: key-set is the same as the key-set of `entid_map` (equivalently, the value-set of
/// `ident_map`).
pub attribute_map: AttributeMap,
/// Maintain a vec of unique attribute IDs for which the corresponding attribute in `attribute_map`
/// has `.component == true`.
pub component_attributes: Vec<Entid>,
}
pub trait HasSchema {
fn entid_for_type(&self, t: ValueType) -> Option<KnownEntid>;
fn get_ident<T>(&self, x: T) -> Option<&Keyword>
where
T: Into<Entid>;
fn get_entid(&self, x: &Keyword) -> Option<KnownEntid>;
fn attribute_for_entid<T>(&self, x: T) -> Option<&Attribute>
where
T: Into<Entid>;
// Returns the attribute and the entid named by the provided ident.
fn attribute_for_ident(&self, ident: &Keyword) -> Option<(&Attribute, KnownEntid)>;
/// Return true if the provided entid identifies an attribute in this schema.
fn is_attribute<T>(&self, x: T) -> bool
where
T: Into<Entid>;
/// Return true if the provided ident identifies an attribute in this schema.
fn identifies_attribute(&self, x: &Keyword) -> bool;
fn component_attributes(&self) -> &[Entid];
}
impl Schema {
pub fn new(ident_map: IdentMap, entid_map: EntidMap, attribute_map: AttributeMap) -> Schema {
let mut s = Schema {
ident_map,
entid_map,
attribute_map,
component_attributes: Vec::new(),
};
s.update_component_attributes();
s
}
/// Returns a symbolic representation of the schema suitable for applying across Mentat stores.
pub fn to_edn_value(&self) -> edn::Value {
edn::Value::Vector(
(&self.attribute_map)
.iter()
.map(|(entid, attribute)| attribute.to_edn_value(self.get_ident(*entid).cloned()))
.collect(),
)
}
fn get_raw_entid(&self, x: &Keyword) -> Option<Entid> {
self.ident_map.get(x).copied()
}
pub fn update_component_attributes(&mut self) {
let mut components: Vec<Entid>;
components = self
.attribute_map
.iter()
.filter_map(|(k, v)| if v.component { Some(*k) } else { None })
.collect();
components.sort_unstable();
self.component_attributes = components;
}
}
impl HasSchema for Schema {
fn entid_for_type(&self, t: ValueType) -> Option<KnownEntid> {
// TODO: this can be made more efficient.
self.get_entid(&t.into_keyword())
}
fn get_ident<T>(&self, x: T) -> Option<&Keyword>
where
T: Into<Entid>,
{
self.entid_map.get(&x.into())
}
fn get_entid(&self, x: &Keyword) -> Option<KnownEntid> {
self.get_raw_entid(x).map(KnownEntid)
}
fn attribute_for_entid<T>(&self, x: T) -> Option<&Attribute>
where
T: Into<Entid>,
{
self.attribute_map.get(&x.into())
}
fn attribute_for_ident(&self, ident: &Keyword) -> Option<(&Attribute, KnownEntid)> {
self.get_raw_entid(&ident).and_then(|entid| {
self.attribute_for_entid(entid)
.map(|a| (a, KnownEntid(entid)))
})
}
/// Return true if the provided entid identifies an attribute in this schema.
fn is_attribute<T>(&self, x: T) -> bool
where
T: Into<Entid>,
{
self.attribute_map.contains_key(&x.into())
}
/// Return true if the provided ident identifies an attribute in this schema.
fn identifies_attribute(&self, x: &Keyword) -> bool {
self.get_raw_entid(x)
.map(|e| self.is_attribute(e))
.unwrap_or(false)
}
fn component_attributes(&self) -> &[Entid] {
&self.component_attributes
}
}
pub mod counter;
pub mod util;
/// A helper macro to sequentially process an iterable sequence,
/// evaluating a block between each pair of items.
///
/// This is used to simply and efficiently produce output like
///
/// ```sql
/// 1, 2, 3
/// ```
///
/// or
///
/// ```sql
/// x = 1 AND y = 2
/// ```
///
/// without producing an intermediate string sequence.
#[macro_export]
macro_rules! interpose {
( $name: pat, $across: expr, $body: block, $inter: block ) => {
interpose_iter!($name, $across.iter(), $body, $inter)
};
}
/// A helper to bind `name` to values in `across`, running `body` for each value,
/// and running `inter` between each value. See `interpose` for examples.
#[macro_export]
macro_rules! interpose_iter {
( $name: pat, $across: expr, $body: block, $inter: block ) => {
let mut seq = $across;
if let Some($name) = seq.next() {
$body;
for $name in seq {
$inter;
$body;
}
}
};
}
#[cfg(test)]
mod test {
use super::*;
use std::str::FromStr;
use core_traits::{attribute, TypedValue};
fn associate_ident(schema: &mut Schema, i: Keyword, e: Entid) {
schema.entid_map.insert(e, i.clone());
schema.ident_map.insert(i, e);
}
fn add_attribute(schema: &mut Schema, e: Entid, a: Attribute) {
schema.attribute_map.insert(e, a);
}
#[test]
fn test_datetime_truncation() {
let dt: DateTime<Utc> =
DateTime::from_str("2018-01-11T00:34:09.273457004Z").expect("parsed");
let expected: DateTime<Utc> =
DateTime::from_str("2018-01-11T00:34:09.273457Z").expect("parsed");
let tv: TypedValue = dt.into();
if let TypedValue::Instant(roundtripped) = tv {
assert_eq!(roundtripped, expected);
} else {
panic!();
}
}
#[test]
fn test_as_edn_value() {
let mut schema = Schema::default();
let attr1 = Attribute {
index: true,
value_type: ValueType::Ref,
fulltext: false,
unique: None,
multival: false,
component: false,
no_history: true,
};
associate_ident(&mut schema, Keyword::namespaced("foo", "bar"), 97);
add_attribute(&mut schema, 97, attr1);
let attr2 = Attribute {
index: false,
value_type: ValueType::String,
fulltext: true,
unique: Some(attribute::Unique::Value),
multival: true,
component: false,
no_history: false,
};
associate_ident(&mut schema, Keyword::namespaced("foo", "bas"), 98);
add_attribute(&mut schema, 98, attr2);
let attr3 = Attribute {
index: false,
value_type: ValueType::Boolean,
fulltext: false,
unique: Some(attribute::Unique::Identity),
multival: false,
component: true,
no_history: false,
};
associate_ident(&mut schema, Keyword::namespaced("foo", "bat"), 99);
add_attribute(&mut schema, 99, attr3);
let value = schema.to_edn_value();
let expected_output = r#"[ { :db/ident :foo/bar
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one
:db/index true
:db/noHistory true },
{ :db/ident :foo/bas
:db/valueType :db.type/string
:db/cardinality :db.cardinality/many
:db/unique :db.unique/value
:db/fulltext true },
{ :db/ident :foo/bat
:db/valueType :db.type/boolean
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db/isComponent true }, ]"#;
let expected_value = edn::parse::value(&expected_output)
.expect("to be able to parse")
.without_spans();
assert_eq!(expected_value, value);
// let's compare the whole thing again, just to make sure we are not changing anything when we convert to edn.
let value2 = schema.to_edn_value();
assert_eq!(expected_value, value2);
}
}

140
core/src/sql_types.rs Normal file

@ -0,0 +1,140 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use std::collections::BTreeSet;
use core_traits::{ValueType, ValueTypeSet};
use crate::types::ValueTypeTag;
/// Type safe representation of the possible return values from SQLite's `typeof`
#[derive(Clone, Copy, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum SQLTypeAffinity {
Null, // "null"
Integer, // "integer"
Real, // "real"
Text, // "text"
Blob, // "blob"
}
// Put this here rather than in `db` simply because it's widely needed.
pub trait SQLValueType {
fn value_type_tag(&self) -> ValueTypeTag;
fn accommodates_integer(&self, int: i64) -> bool;
/// Return a pair of the ValueTypeTag for this value type, and the SQLTypeAffinity required
/// to distinguish it from any other types that share the same tag.
///
/// Background: The tag alone is not enough to determine the type of a value, since multiple
/// ValueTypes may share the same tag (for example, ValueType::Long and ValueType::Double).
/// However, each ValueType can be determined by checking both the tag and the type's affinity.
fn sql_representation(&self) -> (ValueTypeTag, Option<SQLTypeAffinity>);
}
impl SQLValueType for ValueType {
fn sql_representation(&self) -> (ValueTypeTag, Option<SQLTypeAffinity>) {
match *self {
ValueType::Ref => (0, None),
ValueType::Boolean => (1, None),
ValueType::Instant => (4, None),
// SQLite distinguishes integral from decimal types, allowing long and double to share a tag.
ValueType::Long => (5, Some(SQLTypeAffinity::Integer)),
ValueType::Double => (5, Some(SQLTypeAffinity::Real)),
ValueType::String => (10, None),
ValueType::Uuid => (11, None),
ValueType::Keyword => (13, None),
ValueType::Bytes => (15, Some(SQLTypeAffinity::Blob)),
}
}
#[inline]
fn value_type_tag(&self) -> ValueTypeTag {
self.sql_representation().0
}
/// Returns true if the provided integer is in the SQLite value space of this type. For
/// example, `1` is how we encode `true`.
fn accommodates_integer(&self, int: i64) -> bool {
use crate::ValueType::*;
match *self {
Instant => false, // Always use #inst.
Long | Double => true,
Ref => int >= 0,
Boolean => (int == 0) || (int == 1),
ValueType::String => false,
Keyword => false,
Uuid => false,
Bytes => false,
}
}
}
/// We have an enum of types, `ValueType`. It can be collected into a set, `ValueTypeSet`. Each type
/// is associated with a type tag, which is how a type is represented in, e.g., SQL storage. Types
/// can share type tags, because backing SQL storage is able to differentiate between some types
/// (e.g., longs and doubles), and so distinct tags aren't necessary. That association is defined by
/// `SQLValueType`. That trait similarly extends to `ValueTypeSet`, which maps a collection of types
/// into a collection of tags.
pub trait SQLValueTypeSet {
fn value_type_tags(&self) -> BTreeSet<ValueTypeTag>;
fn has_unique_type_tag(&self) -> bool;
fn unique_type_tag(&self) -> Option<ValueTypeTag>;
}
impl SQLValueTypeSet for ValueTypeSet {
// This is inefficient, but it'll do for now.
fn value_type_tags(&self) -> BTreeSet<ValueTypeTag> {
let mut out = BTreeSet::new();
for t in self.0.iter() {
out.insert(t.value_type_tag());
}
out
}
fn unique_type_tag(&self) -> Option<ValueTypeTag> {
if self.is_unit() || self.has_unique_type_tag() {
self.exemplar().map(|t| t.value_type_tag())
} else {
None
}
}
fn has_unique_type_tag(&self) -> bool {
if self.is_unit() {
return true;
}
let mut acc = BTreeSet::new();
for t in self.0.iter() {
if acc.insert(t.value_type_tag()) && acc.len() > 1 {
// We inserted a second or subsequent value.
return false;
}
}
!acc.is_empty()
}
}
#[cfg(test)]
mod tests {
use crate::sql_types::SQLValueType;
use core_traits::ValueType;
#[test]
fn test_accommodates_integer() {
assert!(!ValueType::Instant.accommodates_integer(1493399581314));
assert!(!ValueType::Instant.accommodates_integer(1493399581314000));
assert!(ValueType::Boolean.accommodates_integer(1));
assert!(!ValueType::Boolean.accommodates_integer(-1));
assert!(!ValueType::Boolean.accommodates_integer(10));
assert!(!ValueType::String.accommodates_integer(10));
}
}

34
core/src/tx_report.rs Normal file

@ -0,0 +1,34 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use std::collections::BTreeMap;
use core_traits::Entid;
use crate::{DateTime, Utc};
/// A transaction report summarizes an applied transaction.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct TxReport {
/// The transaction ID of the transaction.
pub tx_id: Entid,
/// The timestamp when the transaction began to be committed.
pub tx_instant: DateTime<Utc>,
/// A map from string literal tempid to resolved or allocated entid.
///
/// Every string literal tempid presented to the transactor either resolves via upsert to an
/// existing entid, or is allocated a new entid. (It is possible for multiple distinct string
/// literal tempids to all unify to a single freshly allocated entid.)
pub tempids: BTreeMap<String, Entid>,
}

11
core/src/types.rs Normal file

@ -0,0 +1,11 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
pub type ValueTypeTag = i32;

90
core/src/util.rs Normal file

@ -0,0 +1,90 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
/// Side-effect chaining on `Result`.
pub trait ResultEffect<T> {
/// Invoke `f` if `self` is `Ok`, returning `self`.
fn when_ok<F: FnOnce()>(self, f: F) -> Self;
/// Invoke `f` if `self` is `Err`, returning `self`.
fn when_err<F: FnOnce()>(self, f: F) -> Self;
}
impl<T, E> ResultEffect<T> for Result<T, E> {
fn when_ok<F: FnOnce()>(self, f: F) -> Self {
if self.is_ok() {
f();
}
self
}
fn when_err<F: FnOnce()>(self, f: F) -> Self {
if self.is_err() {
f();
}
self
}
}
/// Side-effect chaining on `Option`.
pub trait OptionEffect<T> {
/// Invoke `f` if `self` is `None`, returning `self`.
fn when_none<F: FnOnce()>(self, f: F) -> Self;
/// Invoke `f` if `self` is `Some`, returning `self`.
fn when_some<F: FnOnce()>(self, f: F) -> Self;
}
impl<T> OptionEffect<T> for Option<T> {
fn when_none<F: FnOnce()>(self, f: F) -> Self {
if self.is_none() {
f();
}
self
}
fn when_some<F: FnOnce()>(self, f: F) -> Self {
if self.is_some() {
f();
}
self
}
}
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum Either<L, R> {
Left(L),
Right(R),
}
// Cribbed from https://github.com/bluss/either/blob/f793721f3fdeb694f009e731b23a2858286bc0d6/src/lib.rs#L219-L259.
impl<L, R> Either<L, R> {
pub fn map_left<F, M>(self, f: F) -> Either<M, R>
where
F: FnOnce(L) -> M,
{
use self::Either::*;
match self {
Left(l) => Left(f(l)),
Right(r) => Right(r),
}
}
pub fn map_right<F, S>(self, f: F) -> Either<L, S>
where
F: FnOnce(R) -> S,
{
use self::Either::*;
match self {
Left(l) => Left(l),
Right(r) => Right(f(r)),
}
}
}

25
db-traits/Cargo.toml Normal file

@ -0,0 +1,25 @@
[package]
name = "db_traits"
version = "0.0.2"
workspace = ".."
[lib]
name = "db_traits"
path = "lib.rs"
[features]
sqlcipher = ["rusqlite/sqlcipher"]
[dependencies]
failure = "~0.1"
failure_derive = "~0.1"
[dependencies.edn]
path = "../edn"
[dependencies.core_traits]
path = "../core-traits"
[dependencies.rusqlite]
version = "~0.29"
features = ["limits", "bundled"]

300
db-traits/errors.rs Normal file

@ -0,0 +1,300 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use failure::{Backtrace, Context, Fail};
use std::collections::{BTreeMap, BTreeSet};
use rusqlite;
use edn::entities::TempId;
use core_traits::{Entid, KnownEntid, TypedValue, ValueType};
pub type Result<T> = ::std::result::Result<T, DbError>;
// TODO Error/ErrorKind pair
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum CardinalityConflict {
/// A cardinality one attribute has multiple assertions `[e a v1], [e a v2], ...`.
CardinalityOneAddConflict {
e: Entid,
a: Entid,
vs: BTreeSet<TypedValue>,
},
/// A datom has been both asserted and retracted, like `[:db/add e a v]` and `[:db/retract e a v]`.
AddRetractConflict {
e: Entid,
a: Entid,
vs: BTreeSet<TypedValue>,
},
}
// TODO Error/ErrorKind pair
#[derive(Clone, Debug, Eq, PartialEq, Fail)]
pub enum SchemaConstraintViolation {
/// A transaction tried to assert datoms where one tempid upserts to two (or more) distinct
/// entids.
ConflictingUpserts {
/// A map from tempid to the entids it would upsert to.
///
/// In the future, we might even be able to attribute the upserts to particular (reduced)
/// datoms, i.e., to particular `[e a v]` triples that caused the constraint violation.
/// Attributing constraint violations to input data is more difficult due to the multiple
/// rewriting passes the input undergoes.
conflicting_upserts: BTreeMap<TempId, BTreeSet<KnownEntid>>,
},
/// A transaction tried to assert a datom or datoms with the wrong value `v` type(s).
TypeDisagreements {
/// The key (`[e a v]`) has an invalid value `v`: it is not of the expected value type.
conflicting_datoms: BTreeMap<(Entid, Entid, TypedValue), ValueType>,
},
/// A transaction tried to assert datoms that don't observe the schema's cardinality constraints.
CardinalityConflicts { conflicts: Vec<CardinalityConflict> },
}
impl ::std::fmt::Display for SchemaConstraintViolation {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
use self::SchemaConstraintViolation::*;
match self {
ConflictingUpserts {
ref conflicting_upserts,
} => {
writeln!(f, "conflicting upserts:")?;
for (tempid, entids) in conflicting_upserts {
writeln!(f, " tempid {:?} upserts to {:?}", tempid, entids)?;
}
Ok(())
}
TypeDisagreements {
ref conflicting_datoms,
} => {
writeln!(f, "type disagreements:")?;
for (ref datom, expected_type) in conflicting_datoms {
writeln!(
f,
" expected value of type {} but got datom [{} {} {:?}]",
expected_type, datom.0, datom.1, datom.2
)?;
}
Ok(())
}
CardinalityConflicts { ref conflicts } => {
writeln!(f, "cardinality conflicts:")?;
for conflict in conflicts {
writeln!(f, " {:?}", conflict)?;
}
Ok(())
}
}
}
}
#[derive(Copy, Clone, Eq, PartialEq, Debug, Fail)]
pub enum InputError {
/// Map notation included a bad `:db/id` value.
BadDbId,
/// A value place cannot be interpreted as an entity place (for example, in nested map
/// notation).
BadEntityPlace,
}
impl ::std::fmt::Display for InputError {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
use self::InputError::*;
match self {
BadDbId => {
writeln!(f, ":db/id in map notation must either not be present or be an entid, an ident, or a tempid")
}
BadEntityPlace => {
writeln!(f, "cannot convert value place into entity place")
}
}
}
}
#[derive(Debug)]
pub struct DbError {
inner: Context<DbErrorKind>,
}
impl ::std::fmt::Display for DbError {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
::std::fmt::Display::fmt(&self.inner, f)
}
}
impl Fail for DbError {
fn cause(&self) -> Option<&dyn Fail> {
self.inner.cause()
}
fn backtrace(&self) -> Option<&Backtrace> {
self.inner.backtrace()
}
}
impl DbError {
pub fn kind(&self) -> DbErrorKind {
self.inner.get_context().clone()
}
}
impl From<DbErrorKind> for DbError {
fn from(kind: DbErrorKind) -> Self {
DbError {
inner: Context::new(kind),
}
}
}
impl From<Context<DbErrorKind>> for DbError {
fn from(inner: Context<DbErrorKind>) -> Self {
DbError { inner }
}
}
impl From<rusqlite::Error> for DbError {
fn from(error: rusqlite::Error) -> Self {
DbError {
inner: Context::new(DbErrorKind::RusqliteError(error.to_string())),
}
}
}
#[derive(Clone, PartialEq, Debug, Fail)]
pub enum DbErrorKind {
/// We're just not done yet. Recognized a feature that is not yet implemented.
#[fail(display = "not yet implemented: {}", _0)]
NotYetImplemented(String),
/// We've been given a value that isn't the correct Mentat type.
#[fail(
display = "value '{}' is not the expected Mentat value type {:?}",
_0, _1
)]
BadValuePair(String, ValueType),
/// We've got corrupt data in the SQL store: a value and value_type_tag don't line up.
/// TODO _1.data_type()
#[fail(display = "bad SQL (value_type_tag, value) pair: ({:?}, {:?})", _0, _1)]
BadSQLValuePair(rusqlite::types::Value, i32),
/// The SQLite store user_version isn't recognized. This could be an old version of Mentat
/// trying to open a newer version SQLite store; or it could be a corrupt file; or ...
/// #[fail(display = "bad SQL store user_version: {}", _0)]
/// BadSQLiteStoreVersion(i32),
/// A bootstrap definition couldn't be parsed or installed. This is a programmer error, not
/// a runtime error.
#[fail(display = "bad bootstrap definition: {}", _0)]
BadBootstrapDefinition(String),
/// A schema assertion couldn't be parsed.
#[fail(display = "bad schema assertion: {}", _0)]
BadSchemaAssertion(String),
/// An ident->entid mapping failed.
#[fail(display = "no entid found for ident: {}", _0)]
UnrecognizedIdent(String),
/// An entid->ident mapping failed.
#[fail(display = "no ident found for entid: {}", _0)]
UnrecognizedEntid(Entid),
/// Tried to transact an entid that isn't allocated.
#[fail(display = "entid not allocated: {}", _0)]
UnallocatedEntid(Entid),
#[fail(display = "unknown attribute for entid: {}", _0)]
UnknownAttribute(Entid),
#[fail(display = "cannot reverse-cache non-unique attribute: {}", _0)]
CannotCacheNonUniqueAttributeInReverse(Entid),
#[fail(display = "schema alteration failed: {}", _0)]
SchemaAlterationFailed(String),
/// A transaction tried to violate a constraint of the schema of the Mentat store.
#[fail(display = "schema constraint violation: {}", _0)]
SchemaConstraintViolation(SchemaConstraintViolation),
/// The transaction was malformed in some way (that was not recognized at parse time; for
/// example, in a way that is schema-dependent).
#[fail(display = "transaction input error: {}", _0)]
InputError(InputError),
#[fail(
display = "Cannot transact a fulltext assertion with a typed value that is not :db/valueType :db.type/string"
)]
WrongTypeValueForFtsAssertion,
// SQL errors.
#[fail(display = "could not update a cache")]
CacheUpdateFailed,
#[fail(display = "Could not set_user_version")]
CouldNotSetVersionPragma,
#[fail(display = "Could not get_user_version")]
CouldNotGetVersionPragma,
#[fail(display = "Could not search!")]
CouldNotSearch,
#[fail(display = "Could not insert transaction: failed to add datoms not already present")]
TxInsertFailedToAddMissingDatoms,
#[fail(display = "Could not insert transaction: failed to retract datoms already present")]
TxInsertFailedToRetractDatoms,
#[fail(display = "Could not update datoms: failed to retract datoms already present")]
DatomsUpdateFailedToRetract,
#[fail(display = "Could not update datoms: failed to add datoms not already present")]
DatomsUpdateFailedToAdd,
#[fail(display = "Failed to create temporary tables")]
FailedToCreateTempTables,
#[fail(display = "Could not insert non-fts one statements into temporary search table!")]
NonFtsInsertionIntoTempSearchTableFailed,
#[fail(display = "Could not insert fts values into fts table!")]
FtsInsertionFailed,
#[fail(display = "Could not insert FTS statements into temporary search table!")]
FtsInsertionIntoTempSearchTableFailed,
#[fail(display = "Could not drop FTS search ids!")]
FtsFailedToDropSearchIds,
#[fail(display = "Could not update partition map")]
FailedToUpdatePartitionMap,
#[fail(display = "Can't operate over mixed timelines")]
TimelinesMixed,
#[fail(display = "Can't move transactions to a non-empty timeline")]
TimelinesMoveToNonEmpty,
#[fail(display = "Supplied an invalid transaction range")]
TimelinesInvalidRange,
// It would be better to capture the underlying `rusqlite::Error`, but that type doesn't
// implement many useful traits, including `Clone`, `Eq`, and `PartialEq`.
#[fail(display = "SQL error: {}", _0)]
RusqliteError(String),
}
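Editorial sketch, not part of the diff: the `From` impls above let callers build a `DbError` from a `DbErrorKind` and propagate `rusqlite::Error` with `?`. The helper functions here are hypothetical illustrations only.

// Build a DbError from a kind when a lookup fails.
fn require_known_ident(found: Option<i64>, ident: &str) -> Result<i64> {
    found.ok_or_else(|| DbError::from(DbErrorKind::UnrecognizedIdent(ident.to_string())))
}

// `?` converts rusqlite::Error into DbError via the From impl above.
fn user_version(conn: &rusqlite::Connection) -> Result<i64> {
    Ok(conn.query_row("PRAGMA user_version", [], |row| row.get(0))?)
}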

18
db-traits/lib.rs Normal file
View file

@ -0,0 +1,18 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
extern crate failure;
extern crate failure_derive;
extern crate rusqlite;
extern crate core_traits;
extern crate edn;
pub mod errors;

49
db/Cargo.toml Normal file
View file

@ -0,0 +1,49 @@
[package]
name = "mentat_db"
version = "0.0.2"
workspace = ".."
[features]
default = []
sqlcipher = ["rusqlite/sqlcipher"]
syncable = ["serde", "serde_json", "serde_derive"]
[dependencies]
failure = "~0.1"
indexmap = "~1.9"
itertools = "~0.10"
lazy_static = "~1.4"
log = "~0.4"
ordered-float = "~2.8"
time = "~0.3"
petgraph = "~0.6"
serde = { version = "~1.0", optional = true }
serde_json = { version = "~1.0", optional = true }
serde_derive = { version = "~1.0", optional = true }
[dependencies.rusqlite]
version = "~0.29"
features = ["limits", "bundled"]
[dependencies.edn]
path = "../edn"
[dependencies.mentat_core]
path = "../core"
[dependencies.core_traits]
path = "../core-traits"
[dependencies.db_traits]
path = "../db-traits"
[dependencies.mentat_sql]
path = "../sql"
# TODO: This should be in dev-dependencies.
[dependencies.tabwriter]
version = "~1.2"
[dev-dependencies]
env_logger = "0.9"
#tabwriter = { version = "1.2.1" }

3
db/README.md Normal file
View file

@ -0,0 +1,3 @@
This sub-crate implements the SQLite database layer: installing,
managing, and migrating forward the SQL schema underlying the datom
store.

89
db/src/add_retract_alter_set.rs Normal file
View file

@ -0,0 +1,89 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use std::collections::BTreeMap;
/// Witness assertions and retractions, folding (assertion, retraction) pairs into alterations.
/// Assumes that no assertion or retraction will be witnessed more than once.
///
/// This keeps track of when we see a :db/add, a :db/retract, or both :db/add and :db/retract in
/// some order.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct AddRetractAlterSet<K, V> {
pub asserted: BTreeMap<K, V>,
pub retracted: BTreeMap<K, V>,
pub altered: BTreeMap<K, (V, V)>,
}
impl<K, V> Default for AddRetractAlterSet<K, V>
where
K: Ord,
{
fn default() -> AddRetractAlterSet<K, V> {
AddRetractAlterSet {
asserted: BTreeMap::default(),
retracted: BTreeMap::default(),
altered: BTreeMap::default(),
}
}
}
impl<K, V> AddRetractAlterSet<K, V>
where
K: Ord,
{
pub fn witness(&mut self, key: K, value: V, added: bool) {
if added {
if let Some(retracted_value) = self.retracted.remove(&key) {
self.altered.insert(key, (retracted_value, value));
} else {
self.asserted.insert(key, value);
}
} else if let Some(asserted_value) = self.asserted.remove(&key) {
self.altered.insert(key, (value, asserted_value));
} else {
self.retracted.insert(key, value);
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test() {
let mut set: AddRetractAlterSet<i64, char> = AddRetractAlterSet::default();
// Assertion.
set.witness(1, 'a', true);
// Retraction.
set.witness(2, 'b', false);
// Alteration.
set.witness(3, 'c', true);
set.witness(3, 'd', false);
// Alteration, witnessed with the retraction before the assertion.
set.witness(4, 'e', false);
set.witness(4, 'f', true);
let mut asserted = BTreeMap::default();
asserted.insert(1, 'a');
let mut retracted = BTreeMap::default();
retracted.insert(2, 'b');
let mut altered = BTreeMap::default();
altered.insert(3, ('d', 'c'));
altered.insert(4, ('e', 'f'));
assert_eq!(set.asserted, asserted);
assert_eq!(set.retracted, retracted);
assert_eq!(set.altered, altered);
}
}

382
db/src/bootstrap.rs Normal file
View file

@ -0,0 +1,382 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use crate::db::TypedSQLValue;
use crate::entids;
use db_traits::errors::{DbErrorKind, Result};
use edn;
use edn::entities::Entity;
use edn::symbols;
use edn::types::Value;
use core_traits::{values, TypedValue};
use crate::schema::SchemaBuilding;
use crate::types::{Partition, PartitionMap};
use mentat_core::{IdentMap, Schema};
/// The first transaction ID applied to the knowledge base.
///
/// This is the start of the :db.part/tx partition.
pub const TX0: i64 = 0x1000_0000;
/// This is the start of the :db.part/user partition.
pub const USER0: i64 = 0x10000;
// Corresponds to the version of the :db.schema/core vocabulary.
pub const CORE_SCHEMA_VERSION: u32 = 1;
lazy_static! {
static ref V1_IDENTS: [(symbols::Keyword, i64); 40] = {
[
(ns_keyword!("db", "ident"), entids::DB_IDENT),
(ns_keyword!("db.part", "db"), entids::DB_PART_DB),
(ns_keyword!("db", "txInstant"), entids::DB_TX_INSTANT),
(
ns_keyword!("db.install", "partition"),
entids::DB_INSTALL_PARTITION,
),
(
ns_keyword!("db.install", "valueType"),
entids::DB_INSTALL_VALUE_TYPE,
),
(
ns_keyword!("db.install", "attribute"),
entids::DB_INSTALL_ATTRIBUTE,
),
(ns_keyword!("db", "valueType"), entids::DB_VALUE_TYPE),
(ns_keyword!("db", "cardinality"), entids::DB_CARDINALITY),
(ns_keyword!("db", "unique"), entids::DB_UNIQUE),
(ns_keyword!("db", "isComponent"), entids::DB_IS_COMPONENT),
(ns_keyword!("db", "index"), entids::DB_INDEX),
(ns_keyword!("db", "fulltext"), entids::DB_FULLTEXT),
(ns_keyword!("db", "noHistory"), entids::DB_NO_HISTORY),
(ns_keyword!("db", "add"), entids::DB_ADD),
(ns_keyword!("db", "retract"), entids::DB_RETRACT),
(ns_keyword!("db.part", "user"), entids::DB_PART_USER),
(ns_keyword!("db.part", "tx"), entids::DB_PART_TX),
(ns_keyword!("db", "excise"), entids::DB_EXCISE),
(ns_keyword!("db.excise", "attrs"), entids::DB_EXCISE_ATTRS),
(
ns_keyword!("db.excise", "beforeT"),
entids::DB_EXCISE_BEFORE_T,
),
(ns_keyword!("db.excise", "before"), entids::DB_EXCISE_BEFORE),
(
ns_keyword!("db.alter", "attribute"),
entids::DB_ALTER_ATTRIBUTE,
),
(ns_keyword!("db.type", "ref"), entids::DB_TYPE_REF),
(ns_keyword!("db.type", "keyword"), entids::DB_TYPE_KEYWORD),
(ns_keyword!("db.type", "long"), entids::DB_TYPE_LONG),
(ns_keyword!("db.type", "double"), entids::DB_TYPE_DOUBLE),
(ns_keyword!("db.type", "string"), entids::DB_TYPE_STRING),
(ns_keyword!("db.type", "uuid"), entids::DB_TYPE_UUID),
(ns_keyword!("db.type", "uri"), entids::DB_TYPE_URI),
(ns_keyword!("db.type", "boolean"), entids::DB_TYPE_BOOLEAN),
(ns_keyword!("db.type", "instant"), entids::DB_TYPE_INSTANT),
(ns_keyword!("db.type", "bytes"), entids::DB_TYPE_BYTES),
(
ns_keyword!("db.cardinality", "one"),
entids::DB_CARDINALITY_ONE,
),
(
ns_keyword!("db.cardinality", "many"),
entids::DB_CARDINALITY_MANY,
),
(ns_keyword!("db.unique", "value"), entids::DB_UNIQUE_VALUE),
(
ns_keyword!("db.unique", "identity"),
entids::DB_UNIQUE_IDENTITY,
),
(ns_keyword!("db", "doc"), entids::DB_DOC),
(
ns_keyword!("db.schema", "version"),
entids::DB_SCHEMA_VERSION,
),
(
ns_keyword!("db.schema", "attribute"),
entids::DB_SCHEMA_ATTRIBUTE,
),
(ns_keyword!("db.schema", "core"), entids::DB_SCHEMA_CORE),
]
};
pub static ref V1_PARTS: [(symbols::Keyword, i64, i64, i64, bool); 3] = {
[
(
ns_keyword!("db.part", "db"),
0,
USER0 - 1,
(1 + V1_IDENTS.len()) as i64,
false,
),
(ns_keyword!("db.part", "user"), USER0, TX0 - 1, USER0, true),
(
ns_keyword!("db.part", "tx"),
TX0,
i64::max_value(),
TX0,
false,
),
]
};
static ref V1_CORE_SCHEMA: [symbols::Keyword; 16] = {
[
(ns_keyword!("db", "ident")),
(ns_keyword!("db.install", "partition")),
(ns_keyword!("db.install", "valueType")),
(ns_keyword!("db.install", "attribute")),
(ns_keyword!("db", "txInstant")),
(ns_keyword!("db", "valueType")),
(ns_keyword!("db", "cardinality")),
(ns_keyword!("db", "doc")),
(ns_keyword!("db", "unique")),
(ns_keyword!("db", "isComponent")),
(ns_keyword!("db", "index")),
(ns_keyword!("db", "fulltext")),
(ns_keyword!("db", "noHistory")),
(ns_keyword!("db.alter", "attribute")),
(ns_keyword!("db.schema", "version")),
(ns_keyword!("db.schema", "attribute")),
]
};
static ref V1_SYMBOLIC_SCHEMA: Value = {
let s = r#"
{:db/ident {:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one
:db/index true
:db/unique :db.unique/identity}
:db.install/partition {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
:db.install/valueType {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
:db.install/attribute {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
;; TODO: support user-specified functions in the future.
;; :db.install/function {:db/valueType :db.type/ref
;; :db/cardinality :db.cardinality/many}
:db/txInstant {:db/valueType :db.type/instant
:db/cardinality :db.cardinality/one
:db/index true}
:db/valueType {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
:db/cardinality {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
:db/doc {:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
:db/unique {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
:db/isComponent {:db/valueType :db.type/boolean
:db/cardinality :db.cardinality/one}
:db/index {:db/valueType :db.type/boolean
:db/cardinality :db.cardinality/one}
:db/fulltext {:db/valueType :db.type/boolean
:db/cardinality :db.cardinality/one}
:db/noHistory {:db/valueType :db.type/boolean
:db/cardinality :db.cardinality/one}
:db.alter/attribute {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
:db.schema/version {:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
;; unique-value because an attribute can only belong to a single
;; schema fragment.
:db.schema/attribute {:db/valueType :db.type/ref
:db/index true
:db/unique :db.unique/value
:db/cardinality :db.cardinality/many}}"#;
edn::parse::value(s)
.map(|v| v.without_spans())
.map_err(|_| {
DbErrorKind::BadBootstrapDefinition("Unable to parse V1_SYMBOLIC_SCHEMA".into())
})
.unwrap()
};
}
/// Convert (ident, entid) pairs into [:db/add IDENT :db/ident IDENT] `Value` instances.
fn idents_to_assertions(idents: &[(symbols::Keyword, i64)]) -> Vec<Value> {
idents
.iter()
.map(|&(ref ident, _)| {
let value = Value::Keyword(ident.clone());
Value::Vector(vec![
values::DB_ADD.clone(),
value.clone(),
values::DB_IDENT.clone(),
value,
])
})
.collect()
}
/// Convert an ident list into [:db/add :db.schema/core :db.schema/attribute IDENT] `Value` instances.
fn schema_attrs_to_assertions(version: u32, idents: &[symbols::Keyword]) -> Vec<Value> {
let schema_core = Value::Keyword(ns_keyword!("db.schema", "core"));
let schema_attr = Value::Keyword(ns_keyword!("db.schema", "attribute"));
let schema_version = Value::Keyword(ns_keyword!("db.schema", "version"));
idents
.iter()
.map(|ident| {
let value = Value::Keyword(ident.clone());
Value::Vector(vec![
values::DB_ADD.clone(),
schema_core.clone(),
schema_attr.clone(),
value,
])
})
.chain(::std::iter::once(Value::Vector(vec![
values::DB_ADD.clone(),
schema_core.clone(),
schema_version,
Value::Integer(version as i64),
])))
.collect()
}
/// Convert {:ident {:key :value ...} ...} to
/// vec![(symbols::Keyword(:ident), symbols::Keyword(:key), TypedValue(:value)), ...].
///
/// Such triples are closer to what the transactor will produce when processing attribute
/// assertions.
fn symbolic_schema_to_triples(
ident_map: &IdentMap,
symbolic_schema: &Value,
) -> Result<Vec<(symbols::Keyword, symbols::Keyword, TypedValue)>> {
// Failure here is a coding error, not a runtime error.
let mut triples: Vec<(symbols::Keyword, symbols::Keyword, TypedValue)> = vec![];
// TODO: Consider `flat_map` and `map` rather than loop.
match *symbolic_schema {
Value::Map(ref m) => {
for (ident, mp) in m {
let ident = match ident {
Value::Keyword(ref ident) => ident,
_ => bail!(DbErrorKind::BadBootstrapDefinition(format!(
"Expected namespaced keyword for ident but got '{:?}'",
ident
))),
};
match *mp {
Value::Map(ref mpp) => {
for (attr, value) in mpp {
let attr = match attr {
Value::Keyword(ref attr) => attr,
_ => bail!(DbErrorKind::BadBootstrapDefinition(format!(
"Expected namespaced keyword for attr but got '{:?}'",
attr
))),
};
// We have symbolic idents but the transactor handles entids. Ad-hoc
// convert right here. This is a fundamental limitation on the
// bootstrap symbolic schema format; we can't represent "real" keywords
// at this time.
//
// TODO: remove this limitation, perhaps by including a type tag in the
// bootstrap symbolic schema, or by representing the initial bootstrap
// schema directly as Rust data.
let typed_value = match TypedValue::from_edn_value(value) {
Some(TypedValue::Keyword(ref k)) => ident_map
.get(k)
.map(|entid| TypedValue::Ref(*entid))
.ok_or_else(|| DbErrorKind::UnrecognizedIdent(k.to_string()))?,
Some(v) => v,
_ => bail!(DbErrorKind::BadBootstrapDefinition(format!(
"Expected Mentat typed value for value but got '{:?}'",
value
))),
};
triples.push((ident.clone(), attr.clone(), typed_value));
}
}
_ => bail!(DbErrorKind::BadBootstrapDefinition(
"Expected {:db/ident {:db/attr value ...} ...}".into()
)),
}
}
}
_ => bail!(DbErrorKind::BadBootstrapDefinition("Expected {...}".into())),
}
Ok(triples)
}
/// Convert {IDENT {:key :value ...} ...} to [[:db/add IDENT :key :value] ...].
fn symbolic_schema_to_assertions(symbolic_schema: &Value) -> Result<Vec<Value>> {
// Failure here is a coding error, not a runtime error.
let mut assertions: Vec<Value> = vec![];
match *symbolic_schema {
Value::Map(ref m) => {
for (ident, mp) in m {
match *mp {
Value::Map(ref mpp) => {
for (attr, value) in mpp {
assertions.push(Value::Vector(vec![
values::DB_ADD.clone(),
ident.clone(),
attr.clone(),
value.clone(),
]));
}
}
_ => bail!(DbErrorKind::BadBootstrapDefinition(
"Expected {:db/ident {:db/attr value ...} ...}".into()
)),
}
}
}
_ => bail!(DbErrorKind::BadBootstrapDefinition("Expected {...}".into())),
}
Ok(assertions)
}
pub(crate) fn bootstrap_partition_map() -> PartitionMap {
V1_PARTS
.iter()
.map(|&(ref part, start, end, index, allow_excision)| {
(
part.to_string(),
Partition::new(start, end, index, allow_excision),
)
})
.collect()
}
pub(crate) fn bootstrap_ident_map() -> IdentMap {
V1_IDENTS
.iter()
.map(|&(ref ident, entid)| (ident.clone(), entid))
.collect()
}
pub(crate) fn bootstrap_schema() -> Schema {
let ident_map = bootstrap_ident_map();
let bootstrap_triples =
symbolic_schema_to_triples(&ident_map, &V1_SYMBOLIC_SCHEMA).expect("symbolic schema");
Schema::from_ident_map_and_triples(ident_map, bootstrap_triples).unwrap()
}
pub(crate) fn bootstrap_entities() -> Vec<Entity<edn::ValueAndSpan>> {
let bootstrap_assertions: Value = Value::Vector(
[
symbolic_schema_to_assertions(&V1_SYMBOLIC_SCHEMA).expect("symbolic schema"),
idents_to_assertions(&V1_IDENTS[..]),
schema_attrs_to_assertions(CORE_SCHEMA_VERSION, V1_CORE_SCHEMA.as_ref()),
]
.concat(),
);
// Failure here is a coding error (since the inputs are fixed), not a runtime error.
// TODO: represent these bootstrap entity data errors rather than just panicking.
edn::parse::entities(&bootstrap_assertions.to_string()).expect("bootstrap assertions")
}

2017
db/src/cache.rs Normal file

File diff suppressed because it is too large Load diff

3398
db/src/db.rs Normal file

File diff suppressed because it is too large Load diff

544
db/src/debug.rs Normal file
View file

@ -0,0 +1,544 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
#![allow(unused_macros)]
/// Low-level functions for testing.
// Macro to parse a `Borrow<str>` to an `edn::Value` and assert the given `edn::Value` `matches`
// against it.
//
// This is a macro only to give nice line numbers when tests fail.
#[macro_export]
macro_rules! assert_matches {
( $input: expr, $expected: expr ) => {{
// Failure to parse the expected pattern is a coding error, so we unwrap.
let pattern_value = edn::parse::value($expected.borrow())
.expect(format!("to be able to parse expected {}", $expected).as_str())
.without_spans();
let input_value = $input.to_edn();
assert!(
input_value.matches(&pattern_value),
"Expected value:\n{}\nto match pattern:\n{}\n",
input_value.to_pretty(120).unwrap(),
pattern_value.to_pretty(120).unwrap()
);
}};
}
// Transact $input against the given $conn, expecting success or a `Result<TxReport, String>`.
//
// This unwraps safely and makes asserting errors pleasant.
#[macro_export]
macro_rules! assert_transact {
( $conn: expr, $input: expr, $expected: expr ) => {{
trace!("assert_transact: {}", $input);
let result = $conn.transact($input).map_err(|e| e.to_string());
assert_eq!(result, $expected.map_err(|e| e.to_string()));
}};
( $conn: expr, $input: expr ) => {{
trace!("assert_transact: {}", $input);
let result = $conn.transact($input);
assert!(
result.is_ok(),
"Expected Ok(_), got `{}`",
result.unwrap_err()
);
result.unwrap()
}};
}
use std::borrow::Borrow;
use std::collections::BTreeMap;
use std::io::Write;
use itertools::Itertools;
use rusqlite;
use rusqlite::types::ToSql;
use rusqlite::TransactionBehavior;
use tabwriter::TabWriter;
use crate::bootstrap;
use crate::db::*;
use crate::db::{read_attribute_map, read_ident_map};
use crate::entids;
use db_traits::errors::Result;
use edn;
use core_traits::{Entid, TypedValue, ValueType};
use crate::internal_types::TermWithTempIds;
use crate::schema::SchemaBuilding;
use crate::tx::{transact, transact_terms};
use crate::types::*;
use crate::watcher::NullWatcher;
use edn::entities::{EntidOrIdent, TempId};
use edn::InternSet;
use mentat_core::{HasSchema, SQLValueType, TxReport};
/// Represents a *datom* (assertion) in the store.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct Datom {
// TODO: generalize this.
pub e: EntidOrIdent,
pub a: EntidOrIdent,
pub v: edn::Value,
pub tx: i64,
pub added: Option<bool>,
}
/// Represents a set of datoms (assertions) in the store.
///
/// To make comparison easier, we deterministically order. The ordering is the ascending tuple
/// ordering determined by `(e, a, (value_type_tag, v), tx)`, where `value_type_tag` is an internal
/// value that is not exposed but is deterministic.
pub struct Datoms(pub Vec<Datom>);
/// Represents an ordered sequence of transactions in the store.
///
/// To make comparison easier, we deterministically order. The ordering is the ascending tuple
/// ordering determined by `(e, a, (value_type_tag, v), tx, added)`, where `value_type_tag` is an
/// internal value that is not exposed but is deterministic, and `added` is ordered such that
/// retracted assertions appear before added assertions.
pub struct Transactions(pub Vec<Datoms>);
/// Represents the fulltext values in the store.
pub struct FulltextValues(pub Vec<(i64, String)>);
impl Datom {
pub fn to_edn(&self) -> edn::Value {
let f = |entid: &EntidOrIdent| -> edn::Value {
match *entid {
EntidOrIdent::Entid(ref y) => edn::Value::Integer(*y),
EntidOrIdent::Ident(ref y) => edn::Value::Keyword(y.clone()),
}
};
let mut v = vec![f(&self.e), f(&self.a), self.v.clone()];
if let Some(added) = self.added {
v.push(edn::Value::Integer(self.tx));
v.push(edn::Value::Boolean(added));
}
edn::Value::Vector(v)
}
}
impl Datoms {
pub fn to_edn(&self) -> edn::Value {
edn::Value::Vector((&self.0).iter().map(|x| x.to_edn()).collect())
}
}
impl Transactions {
pub fn to_edn(&self) -> edn::Value {
edn::Value::Vector((&self.0).iter().map(|x| x.to_edn()).collect())
}
}
impl FulltextValues {
pub fn to_edn(&self) -> edn::Value {
edn::Value::Vector(
(&self.0)
.iter()
.map(|&(x, ref y)| {
edn::Value::Vector(vec![edn::Value::Integer(x), edn::Value::Text(y.clone())])
})
.collect(),
)
}
}
/// Turn TypedValue::Ref into TypedValue::Keyword when possible.
trait ToIdent {
fn map_ident(self, schema: &Schema) -> Self;
}
impl ToIdent for TypedValue {
fn map_ident(self, schema: &Schema) -> Self {
if let TypedValue::Ref(e) = self {
schema
.get_ident(e)
.cloned()
.map(|i| i.into())
.unwrap_or(TypedValue::Ref(e))
} else {
self
}
}
}
/// Convert a numeric entid to an `EntidOrIdent::Ident` if possible, otherwise an `EntidOrIdent::Entid`.
pub fn to_entid(schema: &Schema, entid: i64) -> EntidOrIdent {
schema
.get_ident(entid)
.map_or(EntidOrIdent::Entid(entid), |ident| {
EntidOrIdent::Ident(ident.clone())
})
}
// /// Convert a symbolic ident to an ident `Entid` if possible, otherwise a numeric `Entid`.
// pub fn to_ident(schema: &Schema, entid: i64) -> Entid {
// schema.get_ident(entid).map_or(Entid::Entid(entid), |ident| Entid::Ident(ident.clone()))
// }
/// Return the set of datoms in the store, ordered by (e, a, v, tx), but not including any datoms of
/// the form [... :db/txInstant ...].
pub fn datoms<S: Borrow<Schema>>(conn: &rusqlite::Connection, schema: &S) -> Result<Datoms> {
datoms_after(conn, schema, bootstrap::TX0 - 1)
}
/// Return the set of datoms in the store with transaction ID strictly greater than the given `tx`,
/// ordered by (e, a, v, tx).
///
/// The datom set returned does not include any datoms of the form [... :db/txInstant ...].
pub fn datoms_after<S: Borrow<Schema>>(
conn: &rusqlite::Connection,
schema: &S,
tx: i64,
) -> Result<Datoms> {
let borrowed_schema = schema.borrow();
let mut stmt: rusqlite::Statement = conn.prepare("SELECT e, a, v, value_type_tag, tx FROM datoms WHERE tx > ? ORDER BY e ASC, a ASC, value_type_tag ASC, v ASC, tx ASC")?;
let r: Result<Vec<_>> = stmt
.query_and_then(&[&tx], |row| {
let e: i64 = row.get(0)?;
let a: i64 = row.get(1)?;
if a == entids::DB_TX_INSTANT {
return Ok(None);
}
let v: rusqlite::types::Value = row.get(2)?;
let value_type_tag: i32 = row.get(3)?;
let attribute = borrowed_schema.require_attribute_for_entid(a)?;
let value_type_tag = if !attribute.fulltext {
value_type_tag
} else {
ValueType::Long.value_type_tag()
};
let typed_value =
TypedValue::from_sql_value_pair(v, value_type_tag)?.map_ident(borrowed_schema);
let (value, _) = typed_value.to_edn_value_pair();
let tx: i64 = row.get(4)?;
Ok(Some(Datom {
e: EntidOrIdent::Entid(e),
a: to_entid(borrowed_schema, a),
v: value,
tx,
added: None,
}))
})?
.collect();
Ok(Datoms(r?.into_iter().filter_map(|x| x).collect()))
}
/// Return the sequence of transactions in the store with transaction ID strictly greater than the
/// given `tx`, ordered by (tx, e, a, v).
///
/// Each transaction returned includes the [(transaction-tx) :db/txInstant ...] datom.
pub fn transactions_after<S: Borrow<Schema>>(
conn: &rusqlite::Connection,
schema: &S,
tx: i64,
) -> Result<Transactions> {
let borrowed_schema = schema.borrow();
let mut stmt: rusqlite::Statement = conn.prepare("SELECT e, a, v, value_type_tag, tx, added FROM transactions WHERE tx > ? ORDER BY tx ASC, e ASC, a ASC, value_type_tag ASC, v ASC, added ASC")?;
let r: Result<Vec<_>> = stmt
.query_and_then(&[&tx], |row| {
let e: i64 = row.get(0)?;
let a: i64 = row.get(1)?;
let v: rusqlite::types::Value = row.get(2)?;
let value_type_tag: i32 = row.get(3)?;
let attribute = borrowed_schema.require_attribute_for_entid(a)?;
let value_type_tag = if !attribute.fulltext {
value_type_tag
} else {
ValueType::Long.value_type_tag()
};
let typed_value =
TypedValue::from_sql_value_pair(v, value_type_tag)?.map_ident(borrowed_schema);
let (value, _) = typed_value.to_edn_value_pair();
let tx: i64 = row.get(4)?;
let added: bool = row.get(5)?;
Ok(Datom {
e: EntidOrIdent::Entid(e),
a: to_entid(borrowed_schema, a),
v: value,
tx,
added: Some(added),
})
})?
.collect();
// Group by tx.
let r: Vec<Datoms> = r?
.into_iter()
.group_by(|x| x.tx)
.into_iter()
.map(|(_key, group)| Datoms(group.collect()))
.collect();
Ok(Transactions(r))
}
/// Return the set of fulltext values in the store, ordered by rowid.
pub fn fulltext_values(conn: &rusqlite::Connection) -> Result<FulltextValues> {
let mut stmt: rusqlite::Statement =
conn.prepare("SELECT rowid, text FROM fulltext_values ORDER BY rowid")?;
let r: Result<Vec<_>> = stmt
.query_and_then([], |row| {
let rowid: i64 = row.get(0)?;
let text: String = row.get(1)?;
Ok((rowid, text))
})?
.collect();
r.map(FulltextValues)
}
/// Execute the given `sql` query with the given `params` and format the results as a
/// tab-and-newline formatted string suitable for debug printing.
///
/// The query is printed followed by a newline, then the returned columns followed by a newline, and
/// then the data rows and columns. All columns are aligned.
pub fn dump_sql_query(
conn: &rusqlite::Connection,
sql: &str,
params: &[&dyn ToSql],
) -> Result<String> {
let mut stmt: rusqlite::Statement = conn.prepare(sql)?;
let mut tw = TabWriter::new(Vec::new()).padding(2);
writeln!(&mut tw, "{}", sql).unwrap();
for column_name in stmt.column_names() {
write!(&mut tw, "{}\t", column_name).unwrap();
}
writeln!(&mut tw).unwrap();
let r: Result<Vec<_>> = stmt
.query_and_then(params, |row| {
for i in 0..row.as_ref().column_count() {
let value: rusqlite::types::Value = row.get(i)?;
write!(&mut tw, "{:?}\t", value).unwrap();
}
writeln!(&mut tw).unwrap();
Ok(())
})?
.collect();
r?;
let dump = String::from_utf8(tw.into_inner().unwrap()).unwrap();
Ok(dump)
}
// A connection that doesn't try to be clever about possibly sharing its `Schema`. Compare to
// `mentat::Conn`.
pub struct TestConn {
pub sqlite: rusqlite::Connection,
pub partition_map: PartitionMap,
pub schema: Schema,
}
impl TestConn {
fn assert_materialized_views(&self) {
let materialized_ident_map = read_ident_map(&self.sqlite).expect("ident map");
let materialized_attribute_map = read_attribute_map(&self.sqlite).expect("schema map");
let materialized_schema = Schema::from_ident_map_and_attribute_map(
materialized_ident_map,
materialized_attribute_map,
)
.expect("schema");
assert_eq!(materialized_schema, self.schema);
}
pub fn transact<I>(&mut self, transaction: I) -> Result<TxReport>
where
I: Borrow<str>,
{
// Failure to parse the transaction is a coding error, so we unwrap.
let entities = edn::parse::entities(transaction.borrow()).unwrap_or_else(|_| {
panic!("to be able to parse {} into entities", transaction.borrow())
});
let details = {
// The block scopes the borrow of self.sqlite.
// We're about to write, so go straight ahead and get an IMMEDIATE transaction.
let tx = self
.sqlite
.transaction_with_behavior(TransactionBehavior::Immediate)?;
// Applying the transaction can fail, so we don't unwrap.
let details = transact(
&tx,
self.partition_map.clone(),
&self.schema,
&self.schema,
NullWatcher(),
entities,
)?;
tx.commit()?;
details
};
let (report, next_partition_map, next_schema, _watcher) = details;
self.partition_map = next_partition_map;
if let Some(next_schema) = next_schema {
self.schema = next_schema;
}
// Verify that we've updated the materialized views during transacting.
self.assert_materialized_views();
Ok(report)
}
pub fn transact_simple_terms<I>(
&mut self,
terms: I,
tempid_set: InternSet<TempId>,
) -> Result<TxReport>
where
I: IntoIterator<Item = TermWithTempIds>,
{
let details = {
// The block scopes the borrow of self.sqlite.
// We're about to write, so go straight ahead and get an IMMEDIATE transaction.
let tx = self
.sqlite
.transaction_with_behavior(TransactionBehavior::Immediate)?;
// Applying the transaction can fail, so we don't unwrap.
let details = transact_terms(
&tx,
self.partition_map.clone(),
&self.schema,
&self.schema,
NullWatcher(),
terms,
tempid_set,
)?;
tx.commit()?;
details
};
let (report, next_partition_map, next_schema, _watcher) = details;
self.partition_map = next_partition_map;
if let Some(next_schema) = next_schema {
self.schema = next_schema;
}
// Verify that we've updated the materialized views during transacting.
self.assert_materialized_views();
Ok(report)
}
pub fn last_tx_id(&self) -> Entid {
self.partition_map
.get(&":db.part/tx".to_string())
.unwrap()
.next_entid()
- 1
}
pub fn last_transaction(&self) -> Datoms {
transactions_after(&self.sqlite, &self.schema, self.last_tx_id() - 1)
.expect("last_transaction")
.0
.pop()
.unwrap()
}
pub fn transactions(&self) -> Transactions {
transactions_after(&self.sqlite, &self.schema, bootstrap::TX0).expect("transactions")
}
pub fn datoms(&self) -> Datoms {
datoms_after(&self.sqlite, &self.schema, bootstrap::TX0).expect("datoms")
}
pub fn fulltext_values(&self) -> FulltextValues {
fulltext_values(&self.sqlite).expect("fulltext_values")
}
pub fn with_sqlite(mut conn: rusqlite::Connection) -> TestConn {
let db = ensure_current_version(&mut conn).unwrap();
// Does not include :db/txInstant.
let datoms = datoms_after(&conn, &db.schema, 0).unwrap();
assert_eq!(datoms.0.len(), 94);
// Includes :db/txInstant.
let transactions = transactions_after(&conn, &db.schema, 0).unwrap();
assert_eq!(transactions.0.len(), 1);
assert_eq!(transactions.0[0].0.len(), 95);
let mut parts = db.partition_map;
// Add a fake partition to allow tests to do things like
// [:db/add 111 :foo/bar 222]
{
let fake_partition = Partition::new(100, 2000, 1000, true);
parts.insert(":db.part/fake".into(), fake_partition);
}
let test_conn = TestConn {
sqlite: conn,
partition_map: parts,
schema: db.schema,
};
// Verify that we've created the materialized views during bootstrapping.
test_conn.assert_materialized_views();
test_conn
}
pub fn sanitized_partition_map(&mut self) {
self.partition_map.remove(":db.part/fake");
}
}
impl Default for TestConn {
fn default() -> TestConn {
TestConn::with_sqlite(new_connection("").expect("Couldn't open in-memory db"))
}
}
pub struct TempIds(edn::Value);
impl TempIds {
pub fn to_edn(&self) -> edn::Value {
self.0.clone()
}
}
pub fn tempids(report: &TxReport) -> TempIds {
let mut map: BTreeMap<edn::Value, edn::Value> = BTreeMap::default();
for (tempid, &entid) in report.tempids.iter() {
map.insert(edn::Value::Text(tempid.clone()), edn::Value::Integer(entid));
}
TempIds(edn::Value::Map(map))
}
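Editorial sketch, not part of the diff: how `TestConn`, `assert_transact!`, and `assert_matches!` are typically combined in this crate's tests. Entid 100 relies on the fake `:db.part/fake` partition installed by `with_sqlite`.

#[cfg(test)]
mod usage_sketch {
    use super::*;

    #[test]
    fn transact_and_inspect() {
        let mut conn = TestConn::default();
        // Transact one assertion; the macro unwraps the TxReport for us.
        assert_transact!(conn, "[[:db/add 100 :db/ident :name/Ivan]]");
        // The new datom (excluding :db/txInstant) is now visible.
        assert_matches!(conn.datoms(), "[[100 :db/ident :name/Ivan]]");
    }
}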

123
db/src/entids.rs Normal file
View file

@ -0,0 +1,123 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
/// Literal `Entid` values in the "db" namespace.
///
/// Used throughout the transactor to match core DB constructs.
use core_traits::Entid;
// Added in SQL schema v1.
pub const DB_IDENT: Entid = 1;
pub const DB_PART_DB: Entid = 2;
pub const DB_TX_INSTANT: Entid = 3;
pub const DB_INSTALL_PARTITION: Entid = 4;
pub const DB_INSTALL_VALUE_TYPE: Entid = 5;
pub const DB_INSTALL_ATTRIBUTE: Entid = 6;
pub const DB_VALUE_TYPE: Entid = 7;
pub const DB_CARDINALITY: Entid = 8;
pub const DB_UNIQUE: Entid = 9;
pub const DB_IS_COMPONENT: Entid = 10;
pub const DB_INDEX: Entid = 11;
pub const DB_FULLTEXT: Entid = 12;
pub const DB_NO_HISTORY: Entid = 13;
pub const DB_ADD: Entid = 14;
pub const DB_RETRACT: Entid = 15;
pub const DB_PART_USER: Entid = 16;
pub const DB_PART_TX: Entid = 17;
pub const DB_EXCISE: Entid = 18;
pub const DB_EXCISE_ATTRS: Entid = 19;
pub const DB_EXCISE_BEFORE_T: Entid = 20;
pub const DB_EXCISE_BEFORE: Entid = 21;
pub const DB_ALTER_ATTRIBUTE: Entid = 22;
pub const DB_TYPE_REF: Entid = 23;
pub const DB_TYPE_KEYWORD: Entid = 24;
pub const DB_TYPE_LONG: Entid = 25;
pub const DB_TYPE_DOUBLE: Entid = 26;
pub const DB_TYPE_STRING: Entid = 27;
pub const DB_TYPE_UUID: Entid = 28;
pub const DB_TYPE_URI: Entid = 29;
pub const DB_TYPE_BOOLEAN: Entid = 30;
pub const DB_TYPE_INSTANT: Entid = 31;
pub const DB_TYPE_BYTES: Entid = 32;
pub const DB_CARDINALITY_ONE: Entid = 33;
pub const DB_CARDINALITY_MANY: Entid = 34;
pub const DB_UNIQUE_VALUE: Entid = 35;
pub const DB_UNIQUE_IDENTITY: Entid = 36;
pub const DB_DOC: Entid = 37;
pub const DB_SCHEMA_VERSION: Entid = 38;
pub const DB_SCHEMA_ATTRIBUTE: Entid = 39;
pub const DB_SCHEMA_CORE: Entid = 40;
/// Return `true` if the given attribute might change the metadata: recognized idents, schema, or
/// partitions in the partition map.
pub fn might_update_metadata(attribute: Entid) -> bool {
if attribute >= DB_DOC {
return false;
}
matches!(
attribute,
// Idents.
DB_IDENT |
// Schema.
DB_CARDINALITY |
DB_FULLTEXT |
DB_INDEX |
DB_IS_COMPONENT |
DB_UNIQUE |
DB_VALUE_TYPE
)
}
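Editorial sketch, not part of the diff: concrete behaviour of `might_update_metadata` given the constants above.

#[cfg(test)]
mod might_update_metadata_examples {
    use super::*;

    #[test]
    fn examples() {
        assert!(might_update_metadata(DB_IDENT)); // affects the "idents" materialized view
        assert!(might_update_metadata(DB_VALUE_TYPE)); // affects the "schema" materialized view
        assert!(!might_update_metadata(DB_ADD)); // below DB_DOC, but not a metadata attribute
        assert!(!might_update_metadata(DB_DOC)); // >= DB_DOC: early return
    }
}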
/// Return `true` if the given attribute might be used to describe a schema attribute.
pub fn is_a_schema_attribute(attribute: Entid) -> bool {
matches!(
attribute,
DB_IDENT
| DB_CARDINALITY
| DB_FULLTEXT
| DB_INDEX
| DB_IS_COMPONENT
| DB_UNIQUE
| DB_VALUE_TYPE
)
}
lazy_static! {
/// Attributes that are "ident related". These might change the "idents" materialized view.
pub static ref IDENTS_SQL_LIST: String = {
format!("({})",
DB_IDENT)
};
/// Attributes that are "schema related". These might change the "schema" materialized view.
pub static ref SCHEMA_SQL_LIST: String = {
format!("({}, {}, {}, {}, {}, {})",
DB_CARDINALITY,
DB_FULLTEXT,
DB_INDEX,
DB_IS_COMPONENT,
DB_UNIQUE,
DB_VALUE_TYPE)
};
/// Attributes that are "metadata" related. These might change one of the materialized views.
pub static ref METADATA_SQL_LIST: String = {
format!("({}, {}, {}, {}, {}, {}, {})",
DB_CARDINALITY,
DB_FULLTEXT,
DB_IDENT,
DB_INDEX,
DB_IS_COMPONENT,
DB_UNIQUE,
DB_VALUE_TYPE)
};
}

221
db/src/internal_types.rs Normal file
View file

@ -0,0 +1,221 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
//! Types used only within the transactor. These should not be exposed outside of this crate.
use std::collections::{BTreeMap, BTreeSet, HashMap};
use core_traits::{Attribute, Entid, KnownEntid, TypedValue, ValueType};
use mentat_core::util::Either;
use edn;
use edn::entities;
use edn::entities::{EntityPlace, OpType, TempId, TxFunction};
use edn::{SpannedValue, ValueAndSpan, ValueRc};
use crate::schema::SchemaTypeChecking;
use crate::types::{AVMap, AVPair, Schema, TransactableValue};
use db_traits::errors;
use db_traits::errors::{DbErrorKind, Result};
impl TransactableValue for ValueAndSpan {
fn into_typed_value(self, schema: &Schema, value_type: ValueType) -> Result<TypedValue> {
schema.to_typed_value(&self, value_type)
}
fn into_entity_place(self) -> Result<EntityPlace<Self>> {
use self::SpannedValue::*;
match self.inner {
Integer(v) => Ok(EntityPlace::Entid(entities::EntidOrIdent::Entid(v))),
Keyword(v) => {
if v.is_namespaced() {
Ok(EntityPlace::Entid(entities::EntidOrIdent::Ident(v)))
} else {
// We only allow namespaced idents.
bail!(DbErrorKind::InputError(errors::InputError::BadEntityPlace))
}
}
Text(v) => Ok(EntityPlace::TempId(TempId::External(v).into())),
List(ls) => {
let mut it = ls.iter();
match (it.next().map(|x| &x.inner), it.next(), it.next(), it.next()) {
// Like "(transaction-id)".
(Some(&PlainSymbol(ref op)), None, None, None) => {
Ok(EntityPlace::TxFunction(TxFunction { op: op.clone() }))
}
// Like "(lookup-ref)".
(Some(&PlainSymbol(edn::PlainSymbol(ref s))), Some(a), Some(v), None)
if s == "lookup-ref" =>
{
match a.clone().into_entity_place()? {
EntityPlace::Entid(a) => {
Ok(EntityPlace::LookupRef(entities::LookupRef {
a: entities::AttributePlace::Entid(a),
v: v.clone(),
}))
}
EntityPlace::TempId(_)
| EntityPlace::TxFunction(_)
| EntityPlace::LookupRef(_) => {
bail!(DbErrorKind::InputError(errors::InputError::BadEntityPlace))
}
}
}
_ => bail!(DbErrorKind::InputError(errors::InputError::BadEntityPlace)),
}
}
Nil | Boolean(_) | Instant(_) | BigInteger(_) | Float(_) | Uuid(_) | PlainSymbol(_)
| NamespacedSymbol(_) | Vector(_) | Set(_) | Map(_) | Bytes(_) => {
bail!(DbErrorKind::InputError(errors::InputError::BadEntityPlace))
}
}
}
fn as_tempid(&self) -> Option<TempId> {
self.inner.as_text().cloned().map(TempId::External)
}
}
impl TransactableValue for TypedValue {
fn into_typed_value(self, _schema: &Schema, value_type: ValueType) -> Result<TypedValue> {
if self.value_type() != value_type {
bail!(DbErrorKind::BadValuePair(format!("{:?}", self), value_type));
}
Ok(self)
}
fn into_entity_place(self) -> Result<EntityPlace<Self>> {
match self {
TypedValue::Ref(x) => Ok(EntityPlace::Entid(entities::EntidOrIdent::Entid(x))),
TypedValue::Keyword(x) => Ok(EntityPlace::Entid(entities::EntidOrIdent::Ident(
(*x).clone(),
))),
TypedValue::String(x) => Ok(EntityPlace::TempId(TempId::External((*x).clone()).into())),
TypedValue::Boolean(_)
| TypedValue::Long(_)
| TypedValue::Double(_)
| TypedValue::Instant(_)
| TypedValue::Uuid(_)
| TypedValue::Bytes(_) => {
bail!(DbErrorKind::InputError(errors::InputError::BadEntityPlace))
}
}
}
fn as_tempid(&self) -> Option<TempId> {
match self {
TypedValue::String(ref s) => Some(TempId::External((**s).clone())),
_ => None,
}
}
}
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum Term<E, V> {
AddOrRetract(OpType, E, Entid, V),
}
use self::Either::*;
pub type KnownEntidOr<T> = Either<KnownEntid, T>;
pub type TypedValueOr<T> = Either<TypedValue, T>;
pub type TempIdHandle = ValueRc<TempId>;
pub type TempIdMap = HashMap<TempIdHandle, KnownEntid>;
pub type LookupRef = ValueRc<AVPair>;
/// Internal representation of an entid on its way to resolution. We either have the simple case (a
/// numeric entid), a lookup-ref that still needs to be resolved (an atomized [a v] pair), or a temp
/// ID that needs to be upserted or allocated (an atomized tempid).
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum LookupRefOrTempId {
LookupRef(LookupRef),
TempId(TempIdHandle),
}
pub type TermWithTempIdsAndLookupRefs =
Term<KnownEntidOr<LookupRefOrTempId>, TypedValueOr<LookupRefOrTempId>>;
pub type TermWithTempIds = Term<KnownEntidOr<TempIdHandle>, TypedValueOr<TempIdHandle>>;
pub type TermWithoutTempIds = Term<KnownEntid, TypedValue>;
pub type Population = Vec<TermWithTempIds>;
impl TermWithTempIds {
// These have no tempids by definition, and just need to be unwrapped. This operation might
// also be called "lowering" or "level lowering", but the concept of "unwrapping" is common in
// Rust and seems appropriate here.
pub(crate) fn unwrap(self) -> TermWithoutTempIds {
match self {
Term::AddOrRetract(op, Left(n), a, Left(v)) => Term::AddOrRetract(op, n, a, v),
_ => unreachable!(),
}
}
}
impl TermWithoutTempIds {
pub(crate) fn rewrap<A, B>(self) -> Term<KnownEntidOr<A>, TypedValueOr<B>> {
match self {
Term::AddOrRetract(op, n, a, v) => Term::AddOrRetract(op, Left(n), a, Left(v)),
}
}
}
/// Given a `KnownEntidOr` or a `TypedValueOr`, replace any internal `LookupRef` with the entid from
/// the given map. Fail if any `LookupRef` cannot be replaced.
///
/// `lift` specifies how the entid found is mapped into the output type. (This could
/// also be an `Into` or `From` requirement.)
///
/// The reason for this awkward expression is that we're parameterizing over the _type constructor_
/// (`EntidOr` or `TypedValueOr`), which is not trivial to express in Rust. This only works because
/// they're both the same `Either<...>` type with different parameterizations.
pub fn replace_lookup_ref<T, U>(
lookup_map: &AVMap,
desired_or: Either<T, LookupRefOrTempId>,
lift: U,
) -> errors::Result<Either<T, TempIdHandle>>
where
U: FnOnce(Entid) -> T,
{
match desired_or {
Left(desired) => Ok(Left(desired)), // N.b., must unwrap here -- the ::Left types are different!
Right(other) => {
match other {
LookupRefOrTempId::TempId(t) => Ok(Right(t)),
LookupRefOrTempId::LookupRef(av) => lookup_map
.get(&*av)
.map(|x| lift(*x))
.map(Left)
// XXX TODO: fix this error kind!
.ok_or_else(|| {
DbErrorKind::UnrecognizedIdent(format!(
"couldn't lookup [a v]: {:?}",
(*av).clone()
))
.into()
}),
}
}
}
}
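Editorial sketch, not part of the diff: the two instantiations the comment above alludes to. `resolve_term` is a hypothetical helper illustrating the pattern.

// Resolve both positions of a term with the same function, supplying a
// different `lift` for each output type.
pub(crate) fn resolve_term(
    lookup_map: &AVMap,
    term: TermWithTempIdsAndLookupRefs,
) -> errors::Result<TermWithTempIds> {
    match term {
        Term::AddOrRetract(op, e, a, v) => {
            let e = replace_lookup_ref(lookup_map, e, KnownEntid)?;
            let v = replace_lookup_ref(lookup_map, v, TypedValue::Ref)?;
            Ok(Term::AddOrRetract(op, e, a, v))
        }
    }
}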
#[derive(Clone, Debug, Default)]
pub(crate) struct AddAndRetract {
pub(crate) add: BTreeSet<TypedValue>,
pub(crate) retract: BTreeSet<TypedValue>,
}
// A trie-like structure mapping a -> e -> v that prefix compresses and makes uniqueness constraint
// checking more efficient. BTree* for deterministic errors.
pub(crate) type AEVTrie<'schema> =
BTreeMap<(Entid, &'schema Attribute), BTreeMap<Entid, AddAndRetract>>;

121
db/src/lib.rs Normal file
View file

@ -0,0 +1,121 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
extern crate failure;
extern crate indexmap;
extern crate itertools;
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate log;
#[cfg(feature = "syncable")]
#[macro_use]
extern crate serde_derive;
extern crate petgraph;
extern crate rusqlite;
extern crate tabwriter;
extern crate time;
#[macro_use]
extern crate edn;
#[macro_use]
extern crate mentat_core;
extern crate db_traits;
#[macro_use]
extern crate core_traits;
extern crate mentat_sql;
use std::iter::repeat;
use itertools::Itertools;
use db_traits::errors::{DbErrorKind, Result};
#[macro_use]
pub mod debug;
mod add_retract_alter_set;
mod bootstrap;
pub mod cache;
pub mod db;
pub mod entids;
pub mod internal_types; // pub because we need them for building entities programmatically.
mod metadata;
mod schema;
pub mod timelines;
mod tx;
mod tx_checking;
pub mod tx_observer;
pub mod types;
mod upsert_resolution;
mod watcher;
// Export these for reference from sync code and tests.
pub use crate::bootstrap::{TX0, USER0, V1_PARTS};
pub static TIMELINE_MAIN: i64 = 0;
pub use crate::schema::{AttributeBuilder, AttributeValidation};
pub use crate::bootstrap::CORE_SCHEMA_VERSION;
use edn::symbols;
pub use crate::entids::DB_SCHEMA_CORE;
pub use crate::db::{new_connection, TypedSQLValue};
#[cfg(feature = "sqlcipher")]
pub use db::{change_encryption_key, new_connection_with_key};
pub use crate::watcher::TransactWatcher;
pub use crate::tx::{transact, transact_terms};
pub use crate::tx_observer::{InProgressObserverTransactWatcher, TxObservationService, TxObserver};
pub use crate::types::{AttributeSet, Partition, PartitionMap, TransactableValue, DB};
pub fn to_namespaced_keyword(s: &str) -> Result<symbols::Keyword> {
let splits = [':', '/'];
let mut i = s.split(&splits[..]);
let nsk = match (i.next(), i.next(), i.next(), i.next()) {
(Some(""), Some(namespace), Some(name), None) => {
Some(symbols::Keyword::namespaced(namespace, name))
}
_ => None,
};
nsk.ok_or_else(|| DbErrorKind::NotYetImplemented(format!("InvalidKeyword: {}", s)).into())
}
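Editorial sketch, not part of the diff: `to_namespaced_keyword` accepts only strings of the form ":namespace/name".

#[cfg(test)]
mod to_namespaced_keyword_examples {
    use super::*;

    #[test]
    fn examples() {
        assert_eq!(
            to_namespaced_keyword(":db/ident").unwrap(),
            symbols::Keyword::namespaced("db", "ident")
        );
        // Missing leading colon or missing name: rejected.
        assert!(to_namespaced_keyword("db/ident").is_err());
        assert!(to_namespaced_keyword(":db").is_err());
    }
}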
/// Prepare an SQL `VALUES` block, like (?, ?, ?), (?, ?, ?).
///
/// The number of values per tuple determines `(?, ?, ?)`. The number of tuples determines `(...), (...)`.
///
/// # Examples
///
/// ```rust
/// # use mentat_db::{repeat_values};
/// assert_eq!(repeat_values(1, 3), "(?), (?), (?)".to_string());
/// assert_eq!(repeat_values(3, 1), "(?, ?, ?)".to_string());
/// assert_eq!(repeat_values(2, 2), "(?, ?), (?, ?)".to_string());
/// ```
pub fn repeat_values(values_per_tuple: usize, tuples: usize) -> String {
assert!(values_per_tuple >= 1);
assert!(tuples >= 1);
// Like "(?, ?, ?)".
let inner = format!("({})", repeat("?").take(values_per_tuple).join(", "));
// Like "(?, ?, ?), (?, ?, ?)".
let values: String = repeat(inner).take(tuples).join(", ");
values
}

451
db/src/metadata.rs Normal file
View file

@ -0,0 +1,451 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
//! Most transactions can mutate the Mentat metadata by transacting assertions:
//!
//! - they can add (and, eventually, retract and alter) recognized idents using the `:db/ident`
//! attribute;
//!
//! - they can add (and, eventually, retract and alter) schema attributes using various `:db/*`
//! attributes;
//!
//! - eventually, they will be able to add (and possibly retract) entid partitions using a Mentat
//! equivalent (perhaps :db/partition or :db.partition/start) to Datomic's `:db.install/partition`
//! attribute.
//!
//! This module recognizes, validates, applies, and reports on these mutations.
use failure::ResultExt;
use std::collections::btree_map::Entry;
use std::collections::{BTreeMap, BTreeSet};
use crate::add_retract_alter_set::AddRetractAlterSet;
use crate::entids;
use db_traits::errors::{DbErrorKind, Result};
use edn::symbols;
use core_traits::{attribute, Entid, TypedValue, ValueType};
use mentat_core::{AttributeMap, Schema};
use crate::schema::{AttributeBuilder, AttributeValidation};
use crate::types::EAV;
/// An alteration to an attribute.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum AttributeAlteration {
/// From http://blog.datomic.com/2014/01/schema-alteration.html:
/// - rename attributes
/// - rename your own programmatic identities (uses of :db/ident)
/// - add or remove indexes
Index,
/// - add or remove uniqueness constraints
Unique,
/// - change attribute cardinality
Cardinality,
/// - change whether history is retained for an attribute
NoHistory,
/// - change whether an attribute is treated as a component
IsComponent,
}
/// An alteration to an ident.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub enum IdentAlteration {
Ident(symbols::Keyword),
}
/// Summarizes changes to metadata such as a `Schema` and (in the future) a `PartitionMap`.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct MetadataReport {
// Entids that were not present in the original `AttributeMap` that was mutated.
pub attributes_installed: BTreeSet<Entid>,
// Entids that were present in the original `AttributeMap` that was mutated, together with a
// representation of the mutations that were applied.
pub attributes_altered: BTreeMap<Entid, Vec<AttributeAlteration>>,
// Idents that were installed into the `AttributeMap`.
pub idents_altered: BTreeMap<Entid, IdentAlteration>,
}
impl MetadataReport {
pub fn attributes_did_change(&self) -> bool {
!(self.attributes_installed.is_empty() && self.attributes_altered.is_empty())
}
}
/// Update an `AttributeMap` in place given two sets of ident and attribute retractions, which
/// together contain enough information to reason about a "schema retraction".
///
/// Schema may only be retracted if all of its necessary attributes are being retracted:
/// - :db/ident, :db/valueType, :db/cardinality.
///
/// Note that this is currently incomplete/flawed:
/// - we're allowing optional attributes to not be retracted and dangle afterwards
///
/// Returns a set of attribute retractions which do not involve schema-defining attributes.
fn update_attribute_map_from_schema_retractions(
attribute_map: &mut AttributeMap,
retractions: Vec<EAV>,
ident_retractions: &BTreeMap<Entid, symbols::Keyword>,
) -> Result<Vec<EAV>> {
// Process retractions of schema attributes first. It's allowed to retract a schema attribute
// if all of the schema-defining schema attributes are being retracted.
// A defining set of attributes is :db/ident, :db/valueType, :db/cardinality.
let mut filtered_retractions = vec![];
let mut suspect_retractions = vec![];
// Filter out sets of schema altering retractions.
let mut eas = BTreeMap::new();
for (e, a, v) in retractions.into_iter() {
if entids::is_a_schema_attribute(a) {
eas.entry(e).or_insert_with(Vec::new).push(a);
suspect_retractions.push((e, a, v));
} else {
filtered_retractions.push((e, a, v));
}
}
// TODO (see https://github.com/mozilla/mentat/issues/796).
// Retraction of idents is allowed, but if an ident names a schema attribute, then we should enforce
// retraction of all of the associated schema attributes.
// Unfortunately, our current in-memory schema representation (namely, how we define an Attribute) is not currently
// rich enough: it lacks distinction between presence and absence, and instead assumes default values.
// Currently, in order to do this enforcement correctly, we'd need to inspect 'datoms'.
// Here is an incorrect way to enforce this. It's incorrect because it prevents us from retracting non-"schema naming" idents.
// for retracted_e in ident_retractions.keys() {
// if !eas.contains_key(retracted_e) {
// bail!(DbErrorKind::BadSchemaAssertion(format!("Retracting :db/ident of a schema without retracting its defining attributes is not permitted.")));
// }
// }
for (e, a, v) in suspect_retractions.into_iter() {
let attributes = eas.get(&e).unwrap();
// Found a set of retractions which negate a schema.
if attributes.contains(&entids::DB_CARDINALITY)
&& attributes.contains(&entids::DB_VALUE_TYPE)
{
// Ensure that corresponding :db/ident is also being retracted at the same time.
if ident_retractions.contains_key(&e) {
// Remove attributes corresponding to retracted attribute.
attribute_map.remove(&e);
} else {
bail!(DbErrorKind::BadSchemaAssertion("Retracting defining attributes of a schema without retracting its :db/ident is not permitted.".to_string()));
}
} else {
filtered_retractions.push((e, a, v));
}
}
Ok(filtered_retractions)
}
/// Update an `AttributeMap` in place from the given `[e a typed_value]` triples.
///
/// This is suitable for producing an `AttributeMap` from the `schema` materialized view, which does not
/// contain install and alter markers.
///
/// Returns a report summarizing the mutations that were applied.
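///
/// A minimal usage sketch (hedged, not from the original sources; the entid `65536` and the
/// chosen constants are illustrative):
///
/// ```rust,ignore
/// let mut attribute_map = AttributeMap::default();
/// let assertions = vec![
///     (65536, entids::DB_VALUE_TYPE, TypedValue::Ref(entids::DB_TYPE_LONG)),
///     (65536, entids::DB_CARDINALITY, TypedValue::Ref(entids::DB_CARDINALITY_ONE)),
/// ];
/// let report = update_attribute_map_from_entid_triples(&mut attribute_map, assertions, vec![])?;
/// assert!(report.attributes_installed.contains(&65536));
/// ```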
pub fn update_attribute_map_from_entid_triples(
attribute_map: &mut AttributeMap,
assertions: Vec<EAV>,
retractions: Vec<EAV>,
) -> Result<MetadataReport> {
fn attribute_builder_to_modify(
attribute_id: Entid,
existing: &AttributeMap,
) -> AttributeBuilder {
existing
.get(&attribute_id)
.map(AttributeBuilder::modify_attribute)
.unwrap_or_else(AttributeBuilder::default)
}
// Group mutations by impacted entid.
let mut builders: BTreeMap<Entid, AttributeBuilder> = BTreeMap::new();
// For retractions, we start with an attribute builder that's pre-populated with the existing
// attribute values. That allows us to check existing values and unset them.
for (entid, attr, ref value) in retractions {
let builder = builders
.entry(entid)
.or_insert_with(|| attribute_builder_to_modify(entid, attribute_map));
match attr {
// You can only retract :db/unique, :db/isComponent; all others must be altered instead
// of retracted, or are not allowed to change.
entids::DB_IS_COMPONENT => {
match value {
&TypedValue::Boolean(v) if builder.component == Some(v) => {
builder.component(false);
},
v => {
bail!(DbErrorKind::BadSchemaAssertion(format!("Attempted to retract :db/isComponent with the wrong value {:?}.", v)));
},
}
},
entids::DB_UNIQUE => {
match *value {
TypedValue::Ref(u) => {
match u {
entids::DB_UNIQUE_VALUE if builder.unique == Some(Some(attribute::Unique::Value)) => {
builder.non_unique();
},
entids::DB_UNIQUE_IDENTITY if builder.unique == Some(Some(attribute::Unique::Identity)) => {
builder.non_unique();
},
v => {
bail!(DbErrorKind::BadSchemaAssertion(format!("Attempted to retract :db/unique with the wrong value {}.", v)));
},
}
},
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [:db/retract _ :db/unique :db.unique/_] but got [:db/retract {} :db/unique {:?}]", entid, value)))
}
},
entids::DB_VALUE_TYPE |
entids::DB_CARDINALITY |
entids::DB_INDEX |
entids::DB_FULLTEXT |
entids::DB_NO_HISTORY => {
bail!(DbErrorKind::BadSchemaAssertion(format!("Retracting attribute {} for entity {} not permitted.", attr, entid)));
},
_ => {
bail!(DbErrorKind::BadSchemaAssertion(format!("Do not recognize attribute {} for entid {}", attr, entid)))
}
}
}
for (entid, attr, ref value) in assertions.into_iter() {
// For assertions, we can start with an empty attribute builder.
let builder = builders.entry(entid).or_insert_with(Default::default);
// TODO: improve error messages throughout.
match attr {
entids::DB_VALUE_TYPE => {
match *value {
TypedValue::Ref(entids::DB_TYPE_BOOLEAN) => { builder.value_type(ValueType::Boolean); },
TypedValue::Ref(entids::DB_TYPE_DOUBLE) => { builder.value_type(ValueType::Double); },
TypedValue::Ref(entids::DB_TYPE_INSTANT) => { builder.value_type(ValueType::Instant); },
TypedValue::Ref(entids::DB_TYPE_KEYWORD) => { builder.value_type(ValueType::Keyword); },
TypedValue::Ref(entids::DB_TYPE_LONG) => { builder.value_type(ValueType::Long); },
TypedValue::Ref(entids::DB_TYPE_REF) => { builder.value_type(ValueType::Ref); },
TypedValue::Ref(entids::DB_TYPE_STRING) => { builder.value_type(ValueType::String); },
TypedValue::Ref(entids::DB_TYPE_UUID) => { builder.value_type(ValueType::Uuid); },
TypedValue::Ref(entids::DB_TYPE_BYTES) => { builder.value_type(ValueType::Bytes); },
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [... :db/valueType :db.type/*] but got [... :db/valueType {:?}] for entid {} and attribute {}", value, entid, attr)))
}
},
entids::DB_CARDINALITY => {
match *value {
TypedValue::Ref(entids::DB_CARDINALITY_MANY) => { builder.multival(true); },
TypedValue::Ref(entids::DB_CARDINALITY_ONE) => { builder.multival(false); },
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [... :db/cardinality :db.cardinality/many|:db.cardinality/one] but got [... :db/cardinality {:?}]", value)))
}
},
entids::DB_UNIQUE => {
match *value {
TypedValue::Ref(entids::DB_UNIQUE_VALUE) => { builder.unique(attribute::Unique::Value); },
TypedValue::Ref(entids::DB_UNIQUE_IDENTITY) => { builder.unique(attribute::Unique::Identity); },
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [... :db/unique :db.unique/value|:db.unique/identity] but got [... :db/unique {:?}]", value)))
}
},
entids::DB_INDEX => {
match *value {
TypedValue::Boolean(x) => { builder.index(x); },
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [... :db/index true|false] but got [... :db/index {:?}]", value)))
}
},
entids::DB_FULLTEXT => {
match *value {
TypedValue::Boolean(x) => { builder.fulltext(x); },
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [... :db/fulltext true|false] but got [... :db/fulltext {:?}]", value)))
}
},
entids::DB_IS_COMPONENT => {
match *value {
TypedValue::Boolean(x) => { builder.component(x); },
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [... :db/isComponent true|false] but got [... :db/isComponent {:?}]", value)))
}
},
entids::DB_NO_HISTORY => {
match *value {
TypedValue::Boolean(x) => { builder.no_history(x); },
_ => bail!(DbErrorKind::BadSchemaAssertion(format!("Expected [... :db/noHistory true|false] but got [... :db/noHistory {:?}]", value)))
}
},
_ => {
bail!(DbErrorKind::BadSchemaAssertion(format!("Do not recognize attribute {} for entid {}", attr, entid)))
}
}
}
let mut attributes_installed: BTreeSet<Entid> = BTreeSet::default();
let mut attributes_altered: BTreeMap<Entid, Vec<AttributeAlteration>> = BTreeMap::default();
for (entid, builder) in builders.into_iter() {
match attribute_map.entry(entid) {
Entry::Vacant(entry) => {
// Validate once…
builder
.validate_install_attribute()
.context(DbErrorKind::BadSchemaAssertion(format!(
"Schema alteration for new attribute with entid {} is not valid",
entid
)))?;
// … and twice, now that we have the Attribute.
let a = builder.build();
a.validate(|| entid.to_string())?;
entry.insert(a);
attributes_installed.insert(entid);
}
Entry::Occupied(mut entry) => {
builder
.validate_alter_attribute()
.context(DbErrorKind::BadSchemaAssertion(format!(
"Schema alteration for existing attribute with entid {} is not valid",
entid
)))?;
let mutations = builder.mutate(entry.get_mut());
attributes_altered.insert(entid, mutations);
}
}
}
Ok(MetadataReport {
attributes_installed,
attributes_altered,
idents_altered: BTreeMap::default(),
})
}
/// Update a `Schema` in place from the given `[e a typed_value added]` quadruples.
///
/// This layer enforces that ident assertions of the form [entid :db/ident ...] (as distinct from
/// attribute assertions) are present and correct.
///
/// This is suitable for mutating a `Schema` from an applied transaction.
///
/// Returns a report summarizing the mutations that were applied.
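///
/// A hedged usage sketch (illustrative only; it assumes `Keyword::namespaced(..).into()` converts
/// to a `TypedValue::Keyword`, and uses the illustrative entid `65536`):
///
/// ```rust,ignore
/// let quads = vec![
///     (65536, entids::DB_IDENT, Keyword::namespaced("test", "one").into(), true),
///     (65536, entids::DB_VALUE_TYPE, TypedValue::Ref(entids::DB_TYPE_LONG), true),
///     (65536, entids::DB_CARDINALITY, TypedValue::Ref(entids::DB_CARDINALITY_ONE), true),
/// ];
/// let report = update_schema_from_entid_quadruples(&mut schema, quads)?;
/// assert!(report.attributes_installed.contains(&65536));
/// ```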
pub fn update_schema_from_entid_quadruples<U>(
schema: &mut Schema,
assertions: U,
) -> Result<MetadataReport>
where
U: IntoIterator<Item = (Entid, Entid, TypedValue, bool)>,
{
// Group attribute assertions into asserted, retracted, and updated. We assume all our
// attribute assertions are :db/cardinality :db.cardinality/one (so they'll only be added or
// retracted at most once), which means all attribute alterations are simple changes from an old
// value to a new value.
let mut attribute_set: AddRetractAlterSet<(Entid, Entid), TypedValue> =
AddRetractAlterSet::default();
let mut ident_set: AddRetractAlterSet<Entid, symbols::Keyword> = AddRetractAlterSet::default();
for (e, a, typed_value, added) in assertions.into_iter() {
// Here we handle :db/ident assertions.
if a == entids::DB_IDENT {
if let TypedValue::Keyword(ref keyword) = typed_value {
ident_set.witness(e, keyword.as_ref().clone(), added);
continue;
} else {
// Something is terribly wrong: the schema ensures we have a keyword.
unreachable!();
}
}
attribute_set.witness((e, a), typed_value, added);
}
// Collect triples.
let retracted_triples = attribute_set
.retracted
.into_iter()
.map(|((e, a), typed_value)| (e, a, typed_value));
let asserted_triples = attribute_set
.asserted
.into_iter()
.map(|((e, a), typed_value)| (e, a, typed_value));
let altered_triples = attribute_set
.altered
.into_iter()
.map(|((e, a), (_old_value, new_value))| (e, a, new_value));
// First we process retractions which remove schema.
// This operation consumes our current list of attribute retractions, producing a filtered one.
let non_schema_retractions = update_attribute_map_from_schema_retractions(
&mut schema.attribute_map,
retracted_triples.collect(),
&ident_set.retracted,
)?;
// Now we process all other retractions.
let report = update_attribute_map_from_entid_triples(
&mut schema.attribute_map,
asserted_triples.chain(altered_triples).collect(),
non_schema_retractions,
)?;
let mut idents_altered: BTreeMap<Entid, IdentAlteration> = BTreeMap::new();
// Asserted, altered, or retracted :db/idents update the relevant entids.
for (entid, ident) in ident_set.asserted {
schema.entid_map.insert(entid, ident.clone());
schema.ident_map.insert(ident.clone(), entid);
idents_altered.insert(entid, IdentAlteration::Ident(ident.clone()));
}
for (entid, (old_ident, new_ident)) in ident_set.altered {
schema.entid_map.insert(entid, new_ident.clone()); // Overwrite existing.
schema.ident_map.remove(&old_ident); // Remove old.
schema.ident_map.insert(new_ident.clone(), entid); // Insert new.
idents_altered.insert(entid, IdentAlteration::Ident(new_ident.clone()));
}
for (entid, ident) in &ident_set.retracted {
schema.entid_map.remove(entid);
schema.ident_map.remove(ident);
idents_altered.insert(*entid, IdentAlteration::Ident(ident.clone()));
}
// Component attributes need to change if either:
// - a component attribute changed
// - a schema attribute that was a component was retracted
// These two checks are a rather heavy-handed way of keeping schema's
// component_attributes up-to-date: most of the time we'll rebuild it
// even though it's not necessary (e.g. a schema attribute that's _not_
// a component was removed, or a non-component related attribute changed).
if report.attributes_did_change() || !ident_set.retracted.is_empty() {
schema.update_component_attributes();
}
Ok(MetadataReport {
idents_altered,
..report
})
}

641
db/src/schema.rs Normal file

@ -0,0 +1,641 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use crate::db::TypedSQLValue;
use db_traits::errors::{DbErrorKind, Result};
use edn;
use edn::symbols;
use core_traits::{attribute, Attribute, Entid, KnownEntid, TypedValue, ValueType};
use crate::metadata;
use crate::metadata::AttributeAlteration;
use mentat_core::{AttributeMap, EntidMap, HasSchema, IdentMap, Schema};
pub trait AttributeValidation {
fn validate<F>(&self, ident: F) -> Result<()>
where
F: Fn() -> String;
}
impl AttributeValidation for Attribute {
fn validate<F>(&self, ident: F) -> Result<()>
where
F: Fn() -> String,
{
if self.unique == Some(attribute::Unique::Value) && !self.index {
bail!(DbErrorKind::BadSchemaAssertion(format!(
":db/unique :db/unique_value without :db/index true for entid: {}",
ident()
)))
}
if self.unique == Some(attribute::Unique::Identity) && !self.index {
bail!(DbErrorKind::BadSchemaAssertion(format!(
":db/unique :db/unique_identity without :db/index true for entid: {}",
ident()
)))
}
if self.fulltext && self.value_type != ValueType::String {
bail!(DbErrorKind::BadSchemaAssertion(format!(
":db/fulltext true without :db/valueType :db.type/string for entid: {}",
ident()
)))
}
if self.fulltext && !self.index {
bail!(DbErrorKind::BadSchemaAssertion(format!(
":db/fulltext true without :db/index true for entid: {}",
ident()
)))
}
if self.component && self.value_type != ValueType::Ref {
bail!(DbErrorKind::BadSchemaAssertion(format!(
":db/isComponent true without :db/valueType :db.type/ref for entid: {}",
ident()
)))
}
// TODO: consider warning if we have :db/index true for :db/valueType :db.type/string,
// since this may be inefficient. More generally, we should try to drive complex
// :db/valueType (string, uri, json in the future) users to opt-in to some hash-indexing
// scheme, as discussed in https://github.com/mozilla/mentat/issues/69.
Ok(())
}
}
/// Return `Ok(())` if `attribute_map` defines a valid Mentat schema.
fn validate_attribute_map(entid_map: &EntidMap, attribute_map: &AttributeMap) -> Result<()> {
for (entid, attribute) in attribute_map {
let ident = || {
entid_map
.get(entid)
.map(|ident| ident.to_string())
.unwrap_or_else(|| entid.to_string())
};
attribute.validate(ident)?;
}
Ok(())
}
#[derive(Clone, Debug, Default, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct AttributeBuilder {
helpful: bool,
pub value_type: Option<ValueType>,
pub multival: Option<bool>,
pub unique: Option<Option<attribute::Unique>>,
pub index: Option<bool>,
pub fulltext: Option<bool>,
pub component: Option<bool>,
pub no_history: Option<bool>,
}
impl AttributeBuilder {
/// Make a new AttributeBuilder for human consumption: it will help you
/// by flipping relevant flags.
pub fn helpful() -> Self {
AttributeBuilder {
helpful: true,
..Default::default()
}
}
/// Make a new AttributeBuilder from an existing Attribute. This is important to allow
/// retraction. Only attributes that we allow to change are duplicated here.
pub fn modify_attribute(attribute: &Attribute) -> Self {
let mut ab = AttributeBuilder::default();
ab.multival = Some(attribute.multival);
ab.unique = Some(attribute.unique);
ab.component = Some(attribute.component);
ab
}
pub fn value_type(&mut self, value_type: ValueType) -> &mut Self {
self.value_type = Some(value_type);
self
}
pub fn multival(&mut self, multival: bool) -> &mut Self {
self.multival = Some(multival);
self
}
pub fn non_unique(&mut self) -> &mut Self {
self.unique = Some(None);
self
}
pub fn unique(&mut self, unique: attribute::Unique) -> &mut Self {
if self.helpful && unique == attribute::Unique::Identity {
self.index = Some(true);
}
self.unique = Some(Some(unique));
self
}
pub fn index(&mut self, index: bool) -> &mut Self {
self.index = Some(index);
self
}
pub fn fulltext(&mut self, fulltext: bool) -> &mut Self {
self.fulltext = Some(fulltext);
if self.helpful && fulltext {
self.index = Some(true);
}
self
}
pub fn component(&mut self, component: bool) -> &mut Self {
self.component = Some(component);
self
}
pub fn no_history(&mut self, no_history: bool) -> &mut Self {
self.no_history = Some(no_history);
self
}
pub fn validate_install_attribute(&self) -> Result<()> {
if self.value_type.is_none() {
bail!(DbErrorKind::BadSchemaAssertion(
"Schema attribute for new attribute does not set :db/valueType".into()
));
}
Ok(())
}
pub fn validate_alter_attribute(&self) -> Result<()> {
if self.value_type.is_some() {
bail!(DbErrorKind::BadSchemaAssertion(
"Schema alteration must not set :db/valueType".into()
));
}
if self.fulltext.is_some() {
bail!(DbErrorKind::BadSchemaAssertion(
"Schema alteration must not set :db/fulltext".into()
));
}
Ok(())
}
pub fn build(&self) -> Attribute {
let mut attribute = Attribute::default();
if let Some(value_type) = self.value_type {
attribute.value_type = value_type;
}
if let Some(fulltext) = self.fulltext {
attribute.fulltext = fulltext;
}
if let Some(multival) = self.multival {
attribute.multival = multival;
}
if let Some(ref unique) = self.unique {
attribute.unique = *unique;
}
if let Some(index) = self.index {
attribute.index = index;
}
if let Some(component) = self.component {
attribute.component = component;
}
if let Some(no_history) = self.no_history {
attribute.no_history = no_history;
}
attribute
}
pub fn mutate(&self, attribute: &mut Attribute) -> Vec<AttributeAlteration> {
let mut mutations = Vec::new();
if let Some(multival) = self.multival {
if multival != attribute.multival {
attribute.multival = multival;
mutations.push(AttributeAlteration::Cardinality);
}
}
if let Some(ref unique) = self.unique {
if *unique != attribute.unique {
attribute.unique = *unique;
mutations.push(AttributeAlteration::Unique);
}
} else if attribute.unique != None {
attribute.unique = None;
mutations.push(AttributeAlteration::Unique);
}
if let Some(index) = self.index {
if index != attribute.index {
attribute.index = index;
mutations.push(AttributeAlteration::Index);
}
}
if let Some(component) = self.component {
if component != attribute.component {
attribute.component = component;
mutations.push(AttributeAlteration::IsComponent);
}
}
if let Some(no_history) = self.no_history {
if no_history != attribute.no_history {
attribute.no_history = no_history;
mutations.push(AttributeAlteration::NoHistory);
}
}
mutations
}
}
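// A hedged usage sketch for `AttributeBuilder` (illustrative, not part of the original sources):
//
//     let mut builder = AttributeBuilder::helpful();
//     builder
//         .value_type(ValueType::String)
//         .multival(false)
//         .unique(attribute::Unique::Identity); // `helpful` also flips `index` to true here.
//     builder.validate_install_attribute()?;
//     let attribute = builder.build();
//     attribute.validate(|| "65536".to_string())?;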
pub trait SchemaBuilding {
fn require_ident(&self, entid: Entid) -> Result<&symbols::Keyword>;
fn require_entid(&self, ident: &symbols::Keyword) -> Result<KnownEntid>;
fn require_attribute_for_entid(&self, entid: Entid) -> Result<&Attribute>;
fn from_ident_map_and_attribute_map(
ident_map: IdentMap,
attribute_map: AttributeMap,
) -> Result<Schema>;
fn from_ident_map_and_triples<U>(ident_map: IdentMap, assertions: U) -> Result<Schema>
where
U: IntoIterator<Item = (symbols::Keyword, symbols::Keyword, TypedValue)>;
}
impl SchemaBuilding for Schema {
fn require_ident(&self, entid: Entid) -> Result<&symbols::Keyword> {
self.get_ident(entid)
.ok_or_else(|| DbErrorKind::UnrecognizedEntid(entid).into())
}
fn require_entid(&self, ident: &symbols::Keyword) -> Result<KnownEntid> {
self.get_entid(&ident)
.ok_or_else(|| DbErrorKind::UnrecognizedIdent(ident.to_string()).into())
}
fn require_attribute_for_entid(&self, entid: Entid) -> Result<&Attribute> {
self.attribute_for_entid(entid)
.ok_or_else(|| DbErrorKind::UnrecognizedEntid(entid).into())
}
/// Create a valid `Schema` from the constituent maps.
fn from_ident_map_and_attribute_map(
ident_map: IdentMap,
attribute_map: AttributeMap,
) -> Result<Schema> {
let entid_map: EntidMap = ident_map.iter().map(|(k, v)| (*v, k.clone())).collect();
validate_attribute_map(&entid_map, &attribute_map)?;
Ok(Schema::new(ident_map, entid_map, attribute_map))
}
/// Turn vec![(Keyword(:ident), Keyword(:key), TypedValue(:value)), ...] into a Mentat `Schema`.
fn from_ident_map_and_triples<U>(ident_map: IdentMap, assertions: U) -> Result<Schema>
where
U: IntoIterator<Item = (symbols::Keyword, symbols::Keyword, TypedValue)>,
{
let entid_assertions: Result<Vec<(Entid, Entid, TypedValue)>> = assertions
.into_iter()
.map(|(symbolic_ident, symbolic_attr, value)| {
let ident: i64 = *ident_map
.get(&symbolic_ident)
.ok_or_else(|| DbErrorKind::UnrecognizedIdent(symbolic_ident.to_string()))?;
let attr: i64 = *ident_map
.get(&symbolic_attr)
.ok_or_else(|| DbErrorKind::UnrecognizedIdent(symbolic_attr.to_string()))?;
Ok((ident, attr, value))
})
.collect();
let mut schema =
Schema::from_ident_map_and_attribute_map(ident_map, AttributeMap::default())?;
let metadata_report = metadata::update_attribute_map_from_entid_triples(
&mut schema.attribute_map,
entid_assertions?,
// No retractions.
vec![],
)?;
// Rebuild the component attributes list if necessary.
if metadata_report.attributes_did_change() {
schema.update_component_attributes();
}
Ok(schema)
}
}
pub trait SchemaTypeChecking {
/// Do schema-aware typechecking and coercion.
///
/// Either assert that the given value is in the value type's value set, or (in limited cases)
/// coerce the given value into the value type's value set.
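///
/// For example (a hedged summary of the implementation below): checking `TypedValue::Long(1)`
/// against `ValueType::Ref` coerces it to `TypedValue::Ref(1)`, checking a keyword against
/// `ValueType::Ref` resolves it through the schema's ident map, and any other mismatch yields
/// `DbErrorKind::BadValuePair`.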
fn to_typed_value(
&self,
value: &edn::ValueAndSpan,
value_type: ValueType,
) -> Result<TypedValue>;
}
impl SchemaTypeChecking for Schema {
fn to_typed_value(
&self,
value: &edn::ValueAndSpan,
value_type: ValueType,
) -> Result<TypedValue> {
// TODO: encapsulate entid-ident-attribute for better error messages, perhaps by including
// the attribute (rather than just the attribute's value type) into this function or a
// wrapper function.
match TypedValue::from_edn_value(&value.clone().without_spans()) {
// We don't recognize this EDN at all. Get out!
None => bail!(DbErrorKind::BadValuePair(format!("{}", value), value_type)),
Some(typed_value) => match (value_type, typed_value) {
// Most types don't coerce at all.
(ValueType::Boolean, tv @ TypedValue::Boolean(_)) => Ok(tv),
(ValueType::Long, tv @ TypedValue::Long(_)) => Ok(tv),
(ValueType::Double, tv @ TypedValue::Double(_)) => Ok(tv),
(ValueType::String, tv @ TypedValue::String(_)) => Ok(tv),
(ValueType::Uuid, tv @ TypedValue::Uuid(_)) => Ok(tv),
(ValueType::Instant, tv @ TypedValue::Instant(_)) => Ok(tv),
(ValueType::Keyword, tv @ TypedValue::Keyword(_)) => Ok(tv),
(ValueType::Bytes, tv @ TypedValue::Bytes(_)) => Ok(tv),
// Ref coerces a little: we interpret some things depending on the schema as a Ref.
(ValueType::Ref, TypedValue::Long(x)) => Ok(TypedValue::Ref(x)),
(ValueType::Ref, TypedValue::Keyword(ref x)) => {
self.require_entid(&x).map(|entid| entid.into())
}
// Otherwise, we have a type mismatch.
// Enumerate all of the types here to allow the compiler to help us.
// We don't enumerate all `TypedValue` cases, though: that would multiply this
// collection by 8!
(vt @ ValueType::Boolean, _)
| (vt @ ValueType::Long, _)
| (vt @ ValueType::Double, _)
| (vt @ ValueType::String, _)
| (vt @ ValueType::Uuid, _)
| (vt @ ValueType::Instant, _)
| (vt @ ValueType::Keyword, _)
| (vt @ ValueType::Bytes, _)
| (vt @ ValueType::Ref, _) => {
bail!(DbErrorKind::BadValuePair(format!("{}", value), vt))
}
},
}
}
}
#[cfg(test)]
mod test {
use self::edn::Keyword;
use super::*;
fn add_attribute(schema: &mut Schema, ident: Keyword, entid: Entid, attribute: Attribute) {
schema.entid_map.insert(entid, ident.clone());
schema.ident_map.insert(ident, entid);
if attribute.component {
schema.component_attributes.push(entid);
}
schema.attribute_map.insert(entid, attribute);
}
#[test]
fn validate_attribute_map_success() {
let mut schema = Schema::default();
// attribute that is not an index has no uniqueness
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bar"),
97,
Attribute {
index: false,
value_type: ValueType::Boolean,
fulltext: false,
unique: None,
multival: false,
component: false,
no_history: false,
},
);
// attribute is unique by value and an index
add_attribute(
&mut schema,
Keyword::namespaced("foo", "baz"),
98,
Attribute {
index: true,
value_type: ValueType::Long,
fulltext: false,
unique: Some(attribute::Unique::Value),
multival: false,
component: false,
no_history: false,
},
);
// attribute is unique by identity and an index
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bat"),
99,
Attribute {
index: true,
value_type: ValueType::Ref,
fulltext: false,
unique: Some(attribute::Unique::Identity),
multival: false,
component: false,
no_history: false,
},
);
// attribute is a component and a `Ref`
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bak"),
100,
Attribute {
index: false,
value_type: ValueType::Ref,
fulltext: false,
unique: None,
multival: false,
component: true,
no_history: false,
},
);
// fulltext attribute is a string and an index
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bap"),
101,
Attribute {
index: true,
value_type: ValueType::String,
fulltext: true,
unique: None,
multival: false,
component: false,
no_history: false,
},
);
assert!(validate_attribute_map(&schema.entid_map, &schema.attribute_map).is_ok());
}
#[test]
fn invalid_schema_unique_value_not_index() {
let mut schema = Schema::default();
// attribute unique by value but not index
let ident = Keyword::namespaced("foo", "bar");
add_attribute(
&mut schema,
ident,
99,
Attribute {
index: false,
value_type: ValueType::Boolean,
fulltext: false,
unique: Some(attribute::Unique::Value),
multival: false,
component: false,
no_history: false,
},
);
let err = validate_attribute_map(&schema.entid_map, &schema.attribute_map)
.err()
.map(|e| e.kind());
assert_eq!(
err,
Some(DbErrorKind::BadSchemaAssertion(
":db/unique :db/unique_value without :db/index true for entid: :foo/bar".into()
))
);
}
#[test]
fn invalid_schema_unique_identity_not_index() {
let mut schema = Schema::default();
// attribute is unique by identity but not index
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bar"),
99,
Attribute {
index: false,
value_type: ValueType::Long,
fulltext: false,
unique: Some(attribute::Unique::Identity),
multival: false,
component: false,
no_history: false,
},
);
let err = validate_attribute_map(&schema.entid_map, &schema.attribute_map)
.err()
.map(|e| e.kind());
assert_eq!(
err,
Some(DbErrorKind::BadSchemaAssertion(
":db/unique :db/unique_identity without :db/index true for entid: :foo/bar".into()
))
);
}
#[test]
fn invalid_schema_component_not_ref() {
let mut schema = Schema::default();
// attribute that is a component but is not a `Ref`
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bar"),
99,
Attribute {
index: false,
value_type: ValueType::Boolean,
fulltext: false,
unique: None,
multival: false,
component: true,
no_history: false,
},
);
let err = validate_attribute_map(&schema.entid_map, &schema.attribute_map)
.err()
.map(|e| e.kind());
assert_eq!(
err,
Some(DbErrorKind::BadSchemaAssertion(
":db/isComponent true without :db/valueType :db.type/ref for entid: :foo/bar"
.into()
))
);
}
#[test]
fn invalid_schema_fulltext_not_index() {
let mut schema = Schema::default();
// attribute that is fulltext but is not an index
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bar"),
99,
Attribute {
index: false,
value_type: ValueType::String,
fulltext: true,
unique: None,
multival: false,
component: false,
no_history: false,
},
);
let err = validate_attribute_map(&schema.entid_map, &schema.attribute_map)
.err()
.map(|e| e.kind());
assert_eq!(
err,
Some(DbErrorKind::BadSchemaAssertion(
":db/fulltext true without :db/index true for entid: :foo/bar".into()
))
);
}
#[test]
fn invalid_schema_fulltext_index_not_string() {
let mut schema = Schema::default();
// attribute that is fulltext and not a `String`
add_attribute(
&mut schema,
Keyword::namespaced("foo", "bar"),
99,
Attribute {
index: true,
value_type: ValueType::Long,
fulltext: true,
unique: None,
multival: false,
component: false,
no_history: false,
},
);
let err = validate_attribute_map(&schema.entid_map, &schema.attribute_map)
.err()
.map(|e| e.kind());
assert_eq!(
err,
Some(DbErrorKind::BadSchemaAssertion(
":db/fulltext true without :db/valueType :db.type/string for entid: :foo/bar"
.into()
))
);
}
}

862
db/src/timelines.rs Normal file

@ -0,0 +1,862 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use std::ops::RangeFrom;
use rusqlite::{self, params_from_iter};
use db_traits::errors::{DbErrorKind, Result};
use core_traits::{Entid, KnownEntid, TypedValue};
use mentat_core::Schema;
use edn::InternSet;
use edn::entities::OpType;
use crate::db;
use crate::db::TypedSQLValue;
use crate::tx::{transact_terms_with_action, TransactorAction};
use crate::types::PartitionMap;
use crate::internal_types::{Term, TermWithoutTempIds};
use crate::watcher::NullWatcher;
/// Collects a supplied tx range into a DESC-ordered Vec of valid txs,
/// ensuring they all belong to the same timeline.
fn collect_ordered_txs_to_move(
conn: &rusqlite::Connection,
txs_from: RangeFrom<Entid>,
timeline: Entid,
) -> Result<Vec<Entid>> {
let mut stmt = conn.prepare("SELECT tx, timeline FROM timelined_transactions WHERE tx >= ? AND timeline = ? GROUP BY tx ORDER BY tx DESC")?;
let mut rows = stmt.query_and_then(
&[&txs_from.start, &timeline],
|row: &rusqlite::Row| -> Result<(Entid, Entid)> { Ok((row.get(0)?, row.get(1)?)) },
)?;
let mut txs = vec![];
// TODO do this in SQL instead?
let timeline = match rows.next() {
Some(t) => {
let t = t?;
txs.push(t.0);
t.1
}
None => bail!(DbErrorKind::TimelinesInvalidRange),
};
for t in rows {
let t = t?;
txs.push(t.0);
if t.1 != timeline {
bail!(DbErrorKind::TimelinesMixed);
}
}
Ok(txs)
}
fn move_transactions_to(
conn: &rusqlite::Connection,
tx_ids: &[Entid],
new_timeline: Entid,
) -> Result<()> {
// Move specified transactions over to a specified timeline.
conn.execute(
&format!(
"UPDATE timelined_transactions SET timeline = {} WHERE tx IN {}",
new_timeline,
crate::repeat_values(tx_ids.len(), 1)
),
params_from_iter(tx_ids.iter()),
)?;
Ok(())
}
fn remove_tx_from_datoms(conn: &rusqlite::Connection, tx_id: Entid) -> Result<()> {
conn.execute("DELETE FROM datoms WHERE e = ?", &[&tx_id])?;
Ok(())
}
fn is_timeline_empty(conn: &rusqlite::Connection, timeline: Entid) -> Result<bool> {
let mut stmt = conn.prepare(
"SELECT timeline FROM timelined_transactions WHERE timeline = ? GROUP BY timeline",
)?;
let rows = stmt.query_and_then(&[&timeline], |row| -> Result<i64> { Ok(row.get(0)?) })?;
Ok(rows.count() == 0)
}
/// Get terms for tx_id, reversing them in meaning (swap add & retract).
fn reversed_terms_for(
conn: &rusqlite::Connection,
tx_id: Entid,
) -> Result<Vec<TermWithoutTempIds>> {
let mut stmt = conn.prepare("SELECT e, a, v, value_type_tag, tx, added FROM timelined_transactions WHERE tx = ? AND timeline = ? ORDER BY tx DESC")?;
let rows = stmt.query_and_then(
&[&tx_id, &crate::TIMELINE_MAIN],
|row| -> Result<TermWithoutTempIds> {
let op = if row.get(5)? {
OpType::Retract
} else {
OpType::Add
};
Ok(Term::AddOrRetract(
op,
KnownEntid(row.get(0)?),
row.get(1)?,
TypedValue::from_sql_value_pair(row.get(2)?, row.get(3)?)?,
))
},
)?;
let mut terms = vec![];
for row in rows {
terms.push(row?);
}
Ok(terms)
}
/// Move specified transaction RangeFrom off of main timeline.
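///
/// Returns the schema produced by rewinding the moved transactions (if rewinding produced one),
/// together with the partition map re-read from the store afterwards.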
pub fn move_from_main_timeline(
conn: &rusqlite::Connection,
schema: &Schema,
partition_map: PartitionMap,
txs_from: RangeFrom<Entid>,
new_timeline: Entid,
) -> Result<(Option<Schema>, PartitionMap)> {
if new_timeline == crate::TIMELINE_MAIN {
bail!(DbErrorKind::NotYetImplemented(
"Can't move transactions to main timeline".to_string()
));
}
// We don't currently ensure that moving transactions onto a non-empty timeline
// will result in sensible end-state for that timeline.
// Let's remove that foot gun by prohibiting moving transactions to a non-empty timeline.
if !is_timeline_empty(conn, new_timeline)? {
bail!(DbErrorKind::TimelinesMoveToNonEmpty);
}
let txs_to_move = collect_ordered_txs_to_move(conn, txs_from, crate::TIMELINE_MAIN)?;
let mut last_schema = None;
for tx_id in &txs_to_move {
let reversed_terms = reversed_terms_for(conn, *tx_id)?;
// Rewind schema and datoms.
let (report, _, new_schema, _) = transact_terms_with_action(
conn,
partition_map.clone(),
schema,
schema,
NullWatcher(),
reversed_terms.into_iter().map(|t| t.rewrap()),
InternSet::new(),
TransactorAction::Materialize,
)?;
// The rewind operation generated a 'tx' and a 'txInstant' assertion, which got
// inserted into the 'datoms' table (due to TransactorAction::Materialize).
// This is problematic: if we transact a few more times, the transactor will
// generate the same 'tx', but with a different 'txInstant'.
// The end result will be a transaction which has a phantom
// retraction of a txInstant, since the transactor operates against the state of
// 'datoms', and not against the 'transactions' table.
// A quick workaround is to just remove the bad txInstant datom.
// See the test_clashing_tx_instants test case.
remove_tx_from_datoms(conn, report.tx_id)?;
last_schema = new_schema;
}
// Move transactions over to the target timeline.
move_transactions_to(conn, &txs_to_move, new_timeline)?;
Ok((last_schema, db::read_partition_map(conn)?))
}
#[cfg(test)]
mod tests {
use super::*;
use edn;
use std::borrow::Borrow;
use crate::debug::TestConn;
use crate::bootstrap;
// For convenience during testing.
// Real consumers will perform similar operations when appropriate.
fn update_conn(conn: &mut TestConn, schema: &Option<Schema>, pmap: &PartitionMap) {
match schema {
Some(ref s) => conn.schema = s.clone(),
None => (),
};
conn.partition_map = pmap.clone();
}
#[test]
fn test_pop_simple() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
let t = r#"
[{:db/id :db/doc :db/doc "test"}]
"#;
let partition_map0 = conn.partition_map.clone();
let report1 = assert_transact!(conn, t);
let partition_map1 = conn.partition_map.clone();
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
conn.last_tx_id()..,
1,
)
.expect("moved single tx");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_matches!(conn.datoms(), "[]");
assert_matches!(conn.transactions(), "[]");
assert_eq!(new_partition_map, partition_map0);
conn.partition_map = partition_map0;
let report2 = assert_transact!(conn, t);
let partition_map2 = conn.partition_map.clone();
// Ensure that we can't move transactions to a non-empty timeline:
move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
conn.last_tx_id()..,
1,
)
.expect_err("Can't move transactions to a non-empty timeline");
assert_eq!(report1.tx_id, report2.tx_id);
assert_eq!(partition_map1, partition_map2);
assert_matches!(
conn.datoms(),
r#"
[[37 :db/doc "test"]]
"#
);
assert_matches!(
conn.transactions(),
r#"
[[[37 :db/doc "test" ?tx true]
[?tx :db/txInstant ?ms ?tx true]]]
"#
);
}
#[test]
fn test_pop_ident() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
let t = r#"
[{:db/ident :test/entid :db/doc "test" :db.schema/version 1}]
"#;
let partition_map0 = conn.partition_map.clone();
let schema0 = conn.schema.clone();
let report1 = assert_transact!(conn, t);
let partition_map1 = conn.partition_map.clone();
let schema1 = conn.schema.clone();
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
conn.last_tx_id()..,
1,
)
.expect("moved single tx");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_matches!(conn.datoms(), "[]");
assert_matches!(conn.transactions(), "[]");
assert_eq!(conn.partition_map, partition_map0);
assert_eq!(conn.schema, schema0);
let report2 = assert_transact!(conn, t);
assert_eq!(report1.tx_id, report2.tx_id);
assert_eq!(conn.partition_map, partition_map1);
assert_eq!(conn.schema, schema1);
assert_matches!(
conn.datoms(),
r#"
[[?e :db/ident :test/entid]
[?e :db/doc "test"]
[?e :db.schema/version 1]]
"#
);
assert_matches!(
conn.transactions(),
r#"
[[[?e :db/ident :test/entid ?tx true]
[?e :db/doc "test" ?tx true]
[?e :db.schema/version 1 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]]
"#
);
}
#[test]
fn test_clashing_tx_instants() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
// Transact a basic schema.
assert_transact!(
conn,
r#"
[{:db/ident :person/name :db/valueType :db.type/string :db/cardinality :db.cardinality/one :db/unique :db.unique/identity :db/index true}]
"#
);
// Make an assertion against our schema.
assert_transact!(conn, r#"[{:person/name "Vanya"}]"#);
// Move that assertion away from the main timeline.
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
conn.last_tx_id()..,
1,
)
.expect("moved single tx");
update_conn(&mut conn, &new_schema, &new_partition_map);
// Assert that our datoms are now just the schema.
assert_matches!(
conn.datoms(),
"
[[?e :db/ident :person/name]
[?e :db/valueType :db.type/string]
[?e :db/cardinality :db.cardinality/one]
[?e :db/unique :db.unique/identity]
[?e :db/index true]]"
);
// Same for transactions.
assert_matches!(
conn.transactions(),
"
[[[?e :db/ident :person/name ?tx true]
[?e :db/valueType :db.type/string ?tx true]
[?e :db/cardinality :db.cardinality/one ?tx true]
[?e :db/unique :db.unique/identity ?tx true]
[?e :db/index true ?tx true]
[?tx :db/txInstant ?ms ?tx true]]]"
);
// Re-assert our initial fact against our schema.
assert_transact!(
conn,
r#"
[[:db/add "tempid" :person/name "Vanya"]]"#
);
// Now, change that fact. This is the "clashing" transaction, if we're
// performing a timeline move using the transactor.
assert_transact!(
conn,
r#"
[[:db/add (lookup-ref :person/name "Vanya") :person/name "Ivan"]]"#
);
// Assert that our datoms are now the schema and the final assertion.
assert_matches!(
conn.datoms(),
r#"
[[?e1 :db/ident :person/name]
[?e1 :db/valueType :db.type/string]
[?e1 :db/cardinality :db.cardinality/one]
[?e1 :db/unique :db.unique/identity]
[?e1 :db/index true]
[?e2 :person/name "Ivan"]]
"#
);
// Assert that we have three correct looking transactions.
// This will fail if we're not cleaning up the 'datoms' table
// after the timeline move.
assert_matches!(
conn.transactions(),
r#"
[[
[?e1 :db/ident :person/name ?tx1 true]
[?e1 :db/valueType :db.type/string ?tx1 true]
[?e1 :db/cardinality :db.cardinality/one ?tx1 true]
[?e1 :db/unique :db.unique/identity ?tx1 true]
[?e1 :db/index true ?tx1 true]
[?tx1 :db/txInstant ?ms1 ?tx1 true]
]
[
[?e2 :person/name "Vanya" ?tx2 true]
[?tx2 :db/txInstant ?ms2 ?tx2 true]
]
[
[?e2 :person/name "Ivan" ?tx3 true]
[?e2 :person/name "Vanya" ?tx3 false]
[?tx3 :db/txInstant ?ms3 ?tx3 true]
]]
"#
);
}
#[test]
fn test_pop_schema() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
let t = r#"
[{:db/id "e" :db/ident :test/one :db/valueType :db.type/long :db/cardinality :db.cardinality/one}
{:db/id "f" :db/ident :test/many :db/valueType :db.type/long :db/cardinality :db.cardinality/many}]
"#;
let partition_map0 = conn.partition_map.clone();
let schema0 = conn.schema.clone();
let report1 = assert_transact!(conn, t);
let partition_map1 = conn.partition_map.clone();
let schema1 = conn.schema.clone();
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
report1.tx_id..,
1,
)
.expect("moved single tx");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_matches!(conn.datoms(), "[]");
assert_matches!(conn.transactions(), "[]");
assert_eq!(conn.partition_map, partition_map0);
assert_eq!(conn.schema, schema0);
let report2 = assert_transact!(conn, t);
let partition_map2 = conn.partition_map.clone();
let schema2 = conn.schema.clone();
assert_eq!(report1.tx_id, report2.tx_id);
assert_eq!(partition_map1, partition_map2);
assert_eq!(schema1, schema2);
assert_matches!(
conn.datoms(),
r#"
[[?e1 :db/ident :test/one]
[?e1 :db/valueType :db.type/long]
[?e1 :db/cardinality :db.cardinality/one]
[?e2 :db/ident :test/many]
[?e2 :db/valueType :db.type/long]
[?e2 :db/cardinality :db.cardinality/many]]
"#
);
assert_matches!(
conn.transactions(),
r#"
[[[?e1 :db/ident :test/one ?tx1 true]
[?e1 :db/valueType :db.type/long ?tx1 true]
[?e1 :db/cardinality :db.cardinality/one ?tx1 true]
[?e2 :db/ident :test/many ?tx1 true]
[?e2 :db/valueType :db.type/long ?tx1 true]
[?e2 :db/cardinality :db.cardinality/many ?tx1 true]
[?tx1 :db/txInstant ?ms ?tx1 true]]]
"#
);
}
#[test]
fn test_pop_schema_all_attributes() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
let t = r#"
[{
:db/id "e"
:db/ident :test/one
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value
:db/index true
:db/fulltext true
}]
"#;
let partition_map0 = conn.partition_map.clone();
let schema0 = conn.schema.clone();
let report1 = assert_transact!(conn, t);
let partition_map1 = conn.partition_map.clone();
let schema1 = conn.schema.clone();
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
report1.tx_id..,
1,
)
.expect("moved single tx");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_matches!(conn.datoms(), "[]");
assert_matches!(conn.transactions(), "[]");
assert_eq!(conn.partition_map, partition_map0);
assert_eq!(conn.schema, schema0);
let report2 = assert_transact!(conn, t);
let partition_map2 = conn.partition_map.clone();
let schema2 = conn.schema.clone();
assert_eq!(report1.tx_id, report2.tx_id);
assert_eq!(partition_map1, partition_map2);
assert_eq!(schema1, schema2);
assert_matches!(
conn.datoms(),
r#"
[[?e1 :db/ident :test/one]
[?e1 :db/valueType :db.type/string]
[?e1 :db/cardinality :db.cardinality/one]
[?e1 :db/unique :db.unique/value]
[?e1 :db/index true]
[?e1 :db/fulltext true]]
"#
);
assert_matches!(
conn.transactions(),
r#"
[[[?e1 :db/ident :test/one ?tx1 true]
[?e1 :db/valueType :db.type/string ?tx1 true]
[?e1 :db/cardinality :db.cardinality/one ?tx1 true]
[?e1 :db/unique :db.unique/value ?tx1 true]
[?e1 :db/index true ?tx1 true]
[?e1 :db/fulltext true ?tx1 true]
[?tx1 :db/txInstant ?ms ?tx1 true]]]
"#
);
}
#[test]
fn test_pop_schema_all_attributes_component() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
let t = r#"
[{
:db/id "e"
:db/ident :test/one
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value
:db/index true
:db/isComponent true
}]
"#;
let partition_map0 = conn.partition_map.clone();
let schema0 = conn.schema.clone();
let report1 = assert_transact!(conn, t);
let partition_map1 = conn.partition_map.clone();
let schema1 = conn.schema.clone();
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
report1.tx_id..,
1,
)
.expect("moved single tx");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_matches!(conn.datoms(), "[]");
assert_matches!(conn.transactions(), "[]");
assert_eq!(conn.partition_map, partition_map0);
// Assert all of schema's components individually, for some guidance in case of failures:
assert_eq!(conn.schema.entid_map, schema0.entid_map);
assert_eq!(conn.schema.ident_map, schema0.ident_map);
assert_eq!(conn.schema.attribute_map, schema0.attribute_map);
assert_eq!(
conn.schema.component_attributes,
schema0.component_attributes
);
// Assert the whole schema, just in case we missed something:
assert_eq!(conn.schema, schema0);
let report2 = assert_transact!(conn, t);
let partition_map2 = conn.partition_map.clone();
let schema2 = conn.schema.clone();
assert_eq!(report1.tx_id, report2.tx_id);
assert_eq!(partition_map1, partition_map2);
assert_eq!(schema1, schema2);
assert_matches!(
conn.datoms(),
r#"
[[?e1 :db/ident :test/one]
[?e1 :db/valueType :db.type/ref]
[?e1 :db/cardinality :db.cardinality/one]
[?e1 :db/unique :db.unique/value]
[?e1 :db/isComponent true]
[?e1 :db/index true]]
"#
);
assert_matches!(
conn.transactions(),
r#"
[[[?e1 :db/ident :test/one ?tx1 true]
[?e1 :db/valueType :db.type/ref ?tx1 true]
[?e1 :db/cardinality :db.cardinality/one ?tx1 true]
[?e1 :db/unique :db.unique/value ?tx1 true]
[?e1 :db/isComponent true ?tx1 true]
[?e1 :db/index true ?tx1 true]
[?tx1 :db/txInstant ?ms ?tx1 true]]]
"#
);
}
#[test]
fn test_pop_in_sequence() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
let partition_map_after_bootstrap = conn.partition_map.clone();
assert_eq!(
(65536..65538),
conn.partition_map.allocate_entids(":db.part/user", 2)
);
let tx_report0 = assert_transact!(
conn,
r#"[
{:db/id 65536 :db/ident :test/one :db/valueType :db.type/long :db/cardinality :db.cardinality/one :db/unique :db.unique/identity :db/index true}
{:db/id 65537 :db/ident :test/many :db/valueType :db.type/long :db/cardinality :db.cardinality/many}
]"#
);
let first = "[
[65536 :db/ident :test/one]
[65536 :db/valueType :db.type/long]
[65536 :db/cardinality :db.cardinality/one]
[65536 :db/unique :db.unique/identity]
[65536 :db/index true]
[65537 :db/ident :test/many]
[65537 :db/valueType :db.type/long]
[65537 :db/cardinality :db.cardinality/many]
]";
assert_matches!(conn.datoms(), first);
let partition_map0 = conn.partition_map.clone();
assert_eq!(
(65538..65539),
conn.partition_map.allocate_entids(":db.part/user", 1)
);
let tx_report1 = assert_transact!(
conn,
r#"[
[:db/add 65538 :test/one 1]
[:db/add 65538 :test/many 2]
[:db/add 65538 :test/many 3]
]"#
);
let schema1 = conn.schema.clone();
let partition_map1 = conn.partition_map.clone();
assert_matches!(
conn.last_transaction(),
"[[65538 :test/one 1 ?tx true]
[65538 :test/many 2 ?tx true]
[65538 :test/many 3 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]"
);
let second = "[
[65536 :db/ident :test/one]
[65536 :db/valueType :db.type/long]
[65536 :db/cardinality :db.cardinality/one]
[65536 :db/unique :db.unique/identity]
[65536 :db/index true]
[65537 :db/ident :test/many]
[65537 :db/valueType :db.type/long]
[65537 :db/cardinality :db.cardinality/many]
[65538 :test/one 1]
[65538 :test/many 2]
[65538 :test/many 3]
]";
assert_matches!(conn.datoms(), second);
let tx_report2 = assert_transact!(
conn,
r#"[
[:db/add 65538 :test/one 2]
[:db/add 65538 :test/many 2]
[:db/retract 65538 :test/many 3]
[:db/add 65538 :test/many 4]
]"#
);
let schema2 = conn.schema.clone();
assert_matches!(
conn.last_transaction(),
"[[65538 :test/one 1 ?tx false]
[65538 :test/one 2 ?tx true]
[65538 :test/many 3 ?tx false]
[65538 :test/many 4 ?tx true]
[?tx :db/txInstant ?ms ?tx true]]"
);
let third = "[
[65536 :db/ident :test/one]
[65536 :db/valueType :db.type/long]
[65536 :db/cardinality :db.cardinality/one]
[65536 :db/unique :db.unique/identity]
[65536 :db/index true]
[65537 :db/ident :test/many]
[65537 :db/valueType :db.type/long]
[65537 :db/cardinality :db.cardinality/many]
[65538 :test/one 2]
[65538 :test/many 2]
[65538 :test/many 4]
]";
assert_matches!(conn.datoms(), third);
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
tx_report2.tx_id..,
1,
)
.expect("moved timeline");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_matches!(conn.datoms(), second);
// Moving didn't change the schema.
assert_eq!(None, new_schema);
assert_eq!(conn.schema, schema2);
// But it did change the partition map.
assert_eq!(conn.partition_map, partition_map1);
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
tx_report1.tx_id..,
2,
)
.expect("moved timeline");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_matches!(conn.datoms(), first);
assert_eq!(None, new_schema);
assert_eq!(schema1, conn.schema);
assert_eq!(conn.partition_map, partition_map0);
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
tx_report0.tx_id..,
3,
)
.expect("moved timeline");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_eq!(true, new_schema.is_some());
assert_eq!(bootstrap::bootstrap_schema(), conn.schema);
assert_eq!(partition_map_after_bootstrap, conn.partition_map);
assert_matches!(conn.datoms(), "[]");
assert_matches!(conn.transactions(), "[]");
}
#[test]
fn test_move_range() {
let mut conn = TestConn::default();
conn.sanitized_partition_map();
let partition_map_after_bootstrap = conn.partition_map.clone();
assert_eq!(
(65536..65539),
conn.partition_map.allocate_entids(":db.part/user", 3)
);
let tx_report0 = assert_transact!(
conn,
r#"[
{:db/id 65536 :db/ident :test/one :db/valueType :db.type/long :db/cardinality :db.cardinality/one}
{:db/id 65537 :db/ident :test/many :db/valueType :db.type/long :db/cardinality :db.cardinality/many}
]"#
);
assert_transact!(
conn,
r#"[
[:db/add 65538 :test/one 1]
[:db/add 65538 :test/many 2]
[:db/add 65538 :test/many 3]
]"#
);
assert_transact!(
conn,
r#"[
[:db/add 65538 :test/one 2]
[:db/add 65538 :test/many 2]
[:db/retract 65538 :test/many 3]
[:db/add 65538 :test/many 4]
]"#
);
// Remove all of these transactions from the main timeline,
// ensure we get back to a "just bootstrapped" state.
let (new_schema, new_partition_map) = move_from_main_timeline(
&conn.sqlite,
&conn.schema,
conn.partition_map.clone(),
tx_report0.tx_id..,
1,
)
.expect("moved timeline");
update_conn(&mut conn, &new_schema, &new_partition_map);
assert_eq!(true, new_schema.is_some());
assert_eq!(bootstrap::bootstrap_schema(), conn.schema);
assert_eq!(partition_map_after_bootstrap, conn.partition_map);
assert_matches!(conn.datoms(), "[]");
assert_matches!(conn.transactions(), "[]");
}
}

1157
db/src/tx.rs Normal file

File diff suppressed because it is too large

75
db/src/tx_checking.rs Normal file

@ -0,0 +1,75 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use std::collections::{BTreeMap, BTreeSet};
use core_traits::{Entid, TypedValue, ValueType};
use db_traits::errors::CardinalityConflict;
use crate::internal_types::AEVTrie;
/// Map from found [e a v] to expected type.
pub(crate) type TypeDisagreements = BTreeMap<(Entid, Entid, TypedValue), ValueType>;
/// Ensure that the given terms type check.
///
/// We try to be maximally helpful by yielding every malformed datom, rather than only the first.
/// In the future, we might change this choice, or allow the consumer to specify the robustness of
/// the type checking desired, since there is a cost to providing helpful diagnostics.
pub(crate) fn type_disagreements<'schema>(aev_trie: &AEVTrie<'schema>) -> TypeDisagreements {
let mut errors: TypeDisagreements = TypeDisagreements::default();
for (&(a, attribute), evs) in aev_trie {
for (&e, ref ars) in evs {
for v in ars.add.iter().chain(ars.retract.iter()) {
if attribute.value_type != v.value_type() {
errors.insert((e, a, v.clone()), attribute.value_type);
}
}
}
}
errors
}
/// Ensure that the given terms obey the cardinality restrictions of the given schema.
///
/// That is, ensure that any cardinality one attribute is added with at most one distinct value for
/// any specific entity (although that one value may be repeated for the given entity).
/// It is an error to:
///
/// - add two distinct values for the same cardinality one attribute and entity in a single transaction
/// - add and remove the same values for the same attribute and entity in a single transaction
///
/// We try to be maximally helpful by yielding every malformed set of datoms, rather than just the
/// first set, or even the first conflict. In the future, we might change this choice, or allow the
/// consumer to specify the robustness of the cardinality checking desired.
pub(crate) fn cardinality_conflicts<'schema>(
aev_trie: &AEVTrie<'schema>,
) -> Vec<CardinalityConflict> {
let mut errors = vec![];
for (&(a, attribute), evs) in aev_trie {
for (&e, ref ars) in evs {
if !attribute.multival && ars.add.len() > 1 {
let vs = ars.add.clone();
errors.push(CardinalityConflict::CardinalityOneAddConflict { e, a, vs });
}
let vs: BTreeSet<_> = ars.retract.intersection(&ars.add).cloned().collect();
if !vs.is_empty() {
errors.push(CardinalityConflict::AddRetractConflict { e, a, vs })
}
}
}
errors
}

211
db/src/tx_observer.rs Normal file

@ -0,0 +1,211 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use std::sync::{Arc, Weak};
use std::sync::mpsc::{channel, Receiver, RecvError, Sender};
use std::thread;
use indexmap::IndexMap;
use core_traits::{Entid, TypedValue};
use mentat_core::Schema;
use edn::entities::OpType;
use db_traits::errors::Result;
use crate::types::AttributeSet;
use crate::watcher::TransactWatcher;
pub struct TxObserver {
#[allow(clippy::type_complexity)]
notify_fn: Arc<Box<dyn Fn(&str, IndexMap<&Entid, &AttributeSet>) + Send + Sync>>,
attributes: AttributeSet,
}
impl TxObserver {
pub fn new<F>(attributes: AttributeSet, notify_fn: F) -> TxObserver
where
F: Fn(&str, IndexMap<&Entid, &AttributeSet>) + 'static + Send + Sync,
{
TxObserver {
notify_fn: Arc::new(Box::new(notify_fn)),
attributes,
}
}
pub fn applicable_reports<'r>(
&self,
reports: &'r IndexMap<Entid, AttributeSet>,
) -> IndexMap<&'r Entid, &'r AttributeSet> {
reports
.into_iter()
.filter(|&(_txid, attrs)| !self.attributes.is_disjoint(attrs))
.collect()
}
fn notify(&self, key: &str, reports: IndexMap<&Entid, &AttributeSet>) {
(*self.notify_fn)(key, reports);
}
}
pub trait Command {
fn execute(&mut self);
}
pub struct TxCommand {
reports: IndexMap<Entid, AttributeSet>,
observers: Weak<IndexMap<String, Arc<TxObserver>>>,
}
impl TxCommand {
fn new(
observers: &Arc<IndexMap<String, Arc<TxObserver>>>,
reports: IndexMap<Entid, AttributeSet>,
) -> Self {
TxCommand {
reports,
observers: Arc::downgrade(observers),
}
}
}
impl Command for TxCommand {
fn execute(&mut self) {
if let Some(observers) = self.observers.upgrade() {
for (key, observer) in observers.iter() {
let applicable_reports = observer.applicable_reports(&self.reports);
if !applicable_reports.is_empty() {
observer.notify(&key, applicable_reports);
}
}
}
}
}
#[derive(Default)]
pub struct TxObservationService {
observers: Arc<IndexMap<String, Arc<TxObserver>>>,
executor: Option<Sender<Box<dyn Command + Send>>>,
}
impl TxObservationService {
pub fn new() -> Self {
TxObservationService {
observers: Arc::new(IndexMap::new()),
executor: None,
}
}
// For testing purposes
pub fn is_registered(&self, key: &str) -> bool {
self.observers.contains_key(key)
}
pub fn register(&mut self, key: String, observer: Arc<TxObserver>) {
Arc::make_mut(&mut self.observers).insert(key, observer);
}
pub fn deregister(&mut self, key: &str) {
Arc::make_mut(&mut self.observers).remove(key);
}
pub fn has_observers(&self) -> bool {
!self.observers.is_empty()
}
pub fn in_progress_did_commit(&mut self, txes: IndexMap<Entid, AttributeSet>) {
// Don't spawn a thread only to say nothing.
if !self.has_observers() {
return;
}
let executor = self.executor.get_or_insert_with(|| {
#[allow(clippy::type_complexity)]
let (tx, rx): (
Sender<Box<dyn Command + Send>>,
Receiver<Box<dyn Command + Send>>,
) = channel();
let mut worker = CommandExecutor::new(rx);
thread::spawn(move || {
worker.main();
});
tx
});
let cmd = Box::new(TxCommand::new(&self.observers, txes));
executor.send(cmd).unwrap();
}
}
impl Drop for TxObservationService {
fn drop(&mut self) {
self.executor = None;
}
}
#[derive(Default)]
pub struct InProgressObserverTransactWatcher {
collected_attributes: AttributeSet,
pub txes: IndexMap<Entid, AttributeSet>,
}
impl InProgressObserverTransactWatcher {
pub fn new() -> InProgressObserverTransactWatcher {
InProgressObserverTransactWatcher {
collected_attributes: Default::default(),
txes: Default::default(),
}
}
}
impl TransactWatcher for InProgressObserverTransactWatcher {
fn datom(&mut self, _op: OpType, _e: Entid, a: Entid, _v: &TypedValue) {
self.collected_attributes.insert(a);
}
fn done(&mut self, t: &Entid, _schema: &Schema) -> Result<()> {
let collected_attributes = ::std::mem::take(&mut self.collected_attributes);
self.txes.insert(*t, collected_attributes);
Ok(())
}
}
struct CommandExecutor {
receiver: Receiver<Box<dyn Command + Send>>,
}
impl CommandExecutor {
fn new(rx: Receiver<Box<dyn Command + Send>>) -> Self {
CommandExecutor { receiver: rx }
}
fn main(&mut self) {
loop {
match self.receiver.recv() {
Err(RecvError) => {
// "The recv operation can only fail if the sending half of a channel (or
// sync_channel) is disconnected, implying that no further messages will ever be
// received."
// No need to log here.
return;
}
Ok(mut cmd) => cmd.execute(),
}
}
}
}
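The observer plumbing above is easiest to see end to end with a small usage sketch. This is illustrative only and not part of the diff; it assumes the crate layout shown here (`crate::tx_observer`, `crate::types::AttributeSet`) and uses made-up entids and a made-up registration key.
```rust
use std::sync::Arc;

use indexmap::IndexMap;

use core_traits::Entid;

use crate::tx_observer::{TxObservationService, TxObserver};
use crate::types::AttributeSet;

fn observe_example() {
    let mut service = TxObservationService::new();

    // Observe a single attribute entid (65536 is arbitrary here).
    let mut attributes = AttributeSet::new();
    attributes.insert(65536);

    let observer = Arc::new(TxObserver::new(attributes, |key, reports| {
        // `reports` maps transaction entids to the attributes they touched.
        println!("observer {} saw {} transaction(s)", key, reports.len());
    }));
    service.register("example-key".to_string(), observer);

    // Pretend a transaction touched attribute 65536; the service hands the
    // report off to a worker thread, which notifies matching observers.
    let mut touched = AttributeSet::new();
    touched.insert(65536);
    let mut txes: IndexMap<Entid, AttributeSet> = IndexMap::new();
    txes.insert(268_435_456, touched);
    service.in_progress_did_commit(txes);
}
```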

230
db/src/types.rs Normal file
View file

@ -0,0 +1,230 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use std::collections::{BTreeMap, BTreeSet, HashMap};
use std::iter::FromIterator;
use std::ops::{Deref, DerefMut, Range};
extern crate mentat_core;
use core_traits::{Entid, TypedValue, ValueType};
pub use self::mentat_core::{DateTime, Schema, Utc};
use edn::entities::{EntityPlace, TempId};
use db_traits::errors;
/// Represents one partition of the entid space.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
#[cfg_attr(feature = "syncable", derive(Serialize, Deserialize))]
pub struct Partition {
/// The first entid in the partition.
pub start: Entid,
/// Maximum allowed entid in the partition.
pub end: Entid,
/// `true` if entids in the partition can be excised with `:db/excise`.
pub allow_excision: bool,
/// The next entid to be allocated in the partition.
/// Unless you must use this directly, prefer the provided setter and getter helpers.
pub(crate) next_entid_to_allocate: Entid,
}
impl Partition {
pub fn new(
start: Entid,
end: Entid,
next_entid_to_allocate: Entid,
allow_excision: bool,
) -> Partition {
assert!(
start <= next_entid_to_allocate && next_entid_to_allocate <= end,
"A partition represents a monotonic increasing sequence of entids."
);
Partition {
start,
end,
next_entid_to_allocate,
allow_excision,
}
}
pub fn contains_entid(&self, e: Entid) -> bool {
(e >= self.start) && (e < self.next_entid_to_allocate)
}
pub fn allows_entid(&self, e: Entid) -> bool {
(e >= self.start) && (e <= self.end)
}
pub fn next_entid(&self) -> Entid {
self.next_entid_to_allocate
}
pub fn set_next_entid(&mut self, e: Entid) {
assert!(
self.allows_entid(e),
"Partition index must be within its allocated space."
);
self.next_entid_to_allocate = e;
}
pub fn allocate_entids(&mut self, n: usize) -> Range<i64> {
let idx = self.next_entid();
self.set_next_entid(idx + n as i64);
idx..self.next_entid()
}
}
/// Map partition names to `Partition` instances.
#[derive(Clone, Debug, Default, Eq, Hash, Ord, PartialOrd, PartialEq)]
#[cfg_attr(feature = "syncable", derive(Serialize, Deserialize))]
pub struct PartitionMap(BTreeMap<String, Partition>);
impl Deref for PartitionMap {
type Target = BTreeMap<String, Partition>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for PartitionMap {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl FromIterator<(String, Partition)> for PartitionMap {
fn from_iter<T: IntoIterator<Item = (String, Partition)>>(iter: T) -> Self {
PartitionMap(iter.into_iter().collect())
}
}
/// Represents the metadata required to query from, or apply transactions to, a Mentat store.
///
/// See https://github.com/mozilla/mentat/wiki/Thoughts:-modeling-db-conn-in-Rust.
#[derive(Clone, Debug, Default, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub struct DB {
/// Map partition name->`Partition`.
///
/// TODO: represent partitions as entids.
pub partition_map: PartitionMap,
/// The schema of the store.
pub schema: Schema,
}
impl DB {
pub fn new(partition_map: PartitionMap, schema: Schema) -> DB {
DB {
partition_map,
schema,
}
}
}
/// A pair [a v] in the store.
///
/// Used to represent lookup-refs and [TEMPID a v] upserts as they are resolved.
pub type AVPair = (Entid, TypedValue);
/// Used to represent assertions and retractions.
pub(crate) type EAV = (Entid, Entid, TypedValue);
/// Map [a v] pairs to existing entids.
///
/// Used to resolve lookup-refs and upserts.
pub type AVMap<'a> = HashMap<&'a AVPair, Entid>;
/// A set of entids that correspond to attributes.
pub type AttributeSet = BTreeSet<Entid>;
/// The transactor is tied to `edn::ValueAndSpan` right now, but in the future we'd like to support
/// `TypedValue` directly for programmatic use. `TransactableValue` encapsulates the interface
/// value types (i.e., values in the value place) need to support to be transacted.
pub trait TransactableValue: Clone {
/// Coerce this value place into the given type. This is where we perform schema-aware
/// coercion, for example coercing an integral value into a ref where appropriate.
fn into_typed_value(self, schema: &Schema, value_type: ValueType)
-> errors::Result<TypedValue>;
/// Make an entity place out of this value place. This is where we limit values in nested maps
/// to valid entity places.
fn into_entity_place(self) -> errors::Result<EntityPlace<Self>>;
fn as_tempid(&self) -> Option<TempId>;
}
#[cfg(test)]
mod tests {
use super::Partition;
#[test]
#[should_panic(expected = "A partition represents a monotonic increasing sequence of entids.")]
fn test_partition_limits_sanity1() {
Partition::new(100, 1000, 1001, true);
}
#[test]
#[should_panic(expected = "A partition represents a monotonic increasing sequence of entids.")]
fn test_partition_limits_sanity2() {
Partition::new(100, 1000, 99, true);
}
#[test]
#[should_panic(expected = "Partition index must be within its allocated space.")]
fn test_partition_limits_boundary1() {
let mut part = Partition::new(100, 1000, 100, true);
part.set_next_entid(2000);
}
#[test]
#[should_panic(expected = "Partition index must be within its allocated space.")]
fn test_partition_limits_boundary2() {
let mut part = Partition::new(100, 1000, 100, true);
part.set_next_entid(1001);
}
#[test]
#[should_panic(expected = "Partition index must be within its allocated space.")]
fn test_partition_limits_boundary3() {
let mut part = Partition::new(100, 1000, 100, true);
part.set_next_entid(99);
}
#[test]
#[should_panic(expected = "Partition index must be within its allocated space.")]
fn test_partition_limits_boundary4() {
let mut part = Partition::new(100, 1000, 100, true);
part.set_next_entid(-100);
}
#[test]
#[should_panic(expected = "Partition index must be within its allocated space.")]
fn test_partition_limits_boundary5() {
let mut part = Partition::new(100, 1000, 100, true);
part.allocate_entids(901); // One more than allowed.
}
#[test]
fn test_partition_limits_boundary6() {
let mut part = Partition::new(100, 1000, 100, true);
part.set_next_entid(100); // First entid that's allowed.
part.set_next_entid(101); // Just after first.
assert_eq!(101..111, part.allocate_entids(10));
part.set_next_entid(1000); // Last entid that's allowed.
part.set_next_entid(999); // Just before last.
}
}
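The difference between `contains_entid` and `allows_entid`, and how `allocate_entids` advances the partition, can be seen in a short sketch. This is illustrative only and not part of the diff; it assumes `Partition` is in scope as defined above.
```rust
use crate::types::Partition;

fn allocate_example() {
    let mut part = Partition::new(100, 1000, 100, true);

    // Allocating three entids returns the range and advances the cursor.
    let range = part.allocate_entids(3);
    assert_eq!(range, 100..103);

    assert!(part.contains_entid(102));  // Already allocated.
    assert!(!part.contains_entid(103)); // Not yet allocated...
    assert!(part.allows_entid(103));    // ...but within the partition bounds.
}
```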

408
db/src/upsert_resolution.rs Normal file
View file

@ -0,0 +1,408 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
//! This module implements the upsert resolution algorithm described at
//! https://github.com/mozilla/mentat/wiki/Transacting:-upsert-resolution-algorithm.
use std::collections::{BTreeMap, BTreeSet};
use indexmap;
use petgraph::unionfind;
use crate::internal_types::{
Population, TempIdHandle, TempIdMap, Term, TermWithTempIds, TermWithoutTempIds, TypedValueOr,
};
use crate::types::AVPair;
use db_traits::errors::{DbErrorKind, Result};
use mentat_core::util::Either::*;
use core_traits::{attribute, Attribute, Entid, TypedValue};
use crate::schema::SchemaBuilding;
use edn::entities::OpType;
use mentat_core::Schema;
/// A "Simple upsert" that looks like [:db/add TEMPID a v], where a is :db.unique/identity.
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
struct UpsertE(TempIdHandle, Entid, TypedValue);
/// A "Complex upsert" that looks like [:db/add TEMPID a OTHERID], where a is :db.unique/identity
#[derive(Clone, Debug, Eq, Hash, Ord, PartialOrd, PartialEq)]
struct UpsertEV(TempIdHandle, Entid, TempIdHandle);
/// A generation collects entities into populations at a single evolutionary step in the upsert
/// resolution evolution process.
///
/// The upsert resolution process is only concerned with [:db/add ...] entities until the final
/// entid allocations. That's why we separate into special simple and complex upsert types
/// immediately, and then collect the more general term types for final resolution.
#[derive(Clone, Debug, Default, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub(crate) struct Generation {
/// "Simple upserts" that look like [:db/add TEMPID a v], where a is :db.unique/identity.
upserts_e: Vec<UpsertE>,
/// "Complex upserts" that look like [:db/add TEMPID a OTHERID], where a is :db.unique/identity
upserts_ev: Vec<UpsertEV>,
/// Entities that look like:
/// - [:db/add TEMPID b OTHERID]. b may be :db.unique/identity if it has failed to upsert.
/// - [:db/add TEMPID b v]. b may be :db.unique/identity if it has failed to upsert.
/// - [:db/add e b OTHERID].
allocations: Vec<TermWithTempIds>,
/// Entities that upserted and no longer reference tempids. These assertions are guaranteed to
/// be in the store.
upserted: Vec<TermWithoutTempIds>,
/// Entities that resolved due to other upserts and no longer reference tempids. These
/// assertions may or may not be in the store.
resolved: Vec<TermWithoutTempIds>,
}
#[derive(Clone, Debug, Default, Eq, Hash, Ord, PartialOrd, PartialEq)]
pub(crate) struct FinalPopulations {
/// Upserts that upserted.
pub upserted: Vec<TermWithoutTempIds>,
/// Allocations that resolved due to other upserts.
pub resolved: Vec<TermWithoutTempIds>,
/// Allocations that required new entid allocations.
pub allocated: Vec<TermWithoutTempIds>,
}
impl Generation {
/// Split entities into a generation of populations that need to evolve to have their tempids
/// resolved or allocated, and a population of inert entities that do not reference tempids.
pub(crate) fn from<I>(terms: I, schema: &Schema) -> Result<(Generation, Population)>
where
I: IntoIterator<Item = TermWithTempIds>,
{
let mut generation = Generation::default();
let mut inert = vec![];
let is_unique = |a: Entid| -> Result<bool> {
let attribute: &Attribute = schema.require_attribute_for_entid(a)?;
Ok(attribute.unique == Some(attribute::Unique::Identity))
};
for term in terms.into_iter() {
match term {
Term::AddOrRetract(op, Right(e), a, Right(v)) => {
if op == OpType::Add && is_unique(a)? {
generation.upserts_ev.push(UpsertEV(e, a, v));
} else {
generation
.allocations
.push(Term::AddOrRetract(op, Right(e), a, Right(v)));
}
}
Term::AddOrRetract(op, Right(e), a, Left(v)) => {
if op == OpType::Add && is_unique(a)? {
generation.upserts_e.push(UpsertE(e, a, v));
} else {
generation
.allocations
.push(Term::AddOrRetract(op, Right(e), a, Left(v)));
}
}
Term::AddOrRetract(op, Left(e), a, Right(v)) => {
generation
.allocations
.push(Term::AddOrRetract(op, Left(e), a, Right(v)));
}
Term::AddOrRetract(op, Left(e), a, Left(v)) => {
inert.push(Term::AddOrRetract(op, Left(e), a, Left(v)));
}
}
}
Ok((generation, inert))
}
/// Return true if it's possible to evolve this generation further.
///
/// Note that there can be complex upserts but no simple upserts to help resolve them, and in
/// this case, we cannot evolve further.
pub(crate) fn can_evolve(&self) -> bool {
!self.upserts_e.is_empty()
}
/// Evolve this generation one step further by rewriting the existing :db/add entities using the
/// given temporary IDs.
///
/// TODO: Consider doing this in place; the function already consumes `self`.
pub(crate) fn evolve_one_step(self, temp_id_map: &TempIdMap) -> Generation {
let mut next = Generation::default();
// We'll iterate our own allocations to resolve more things, but terms that have already
// resolved stay resolved.
next.resolved = self.resolved;
for UpsertE(t, a, v) in self.upserts_e {
match temp_id_map.get(&*t) {
Some(&n) => next.upserted.push(Term::AddOrRetract(OpType::Add, n, a, v)),
None => {
next.allocations
.push(Term::AddOrRetract(OpType::Add, Right(t), a, Left(v)))
}
}
}
for UpsertEV(t1, a, t2) in self.upserts_ev {
match (temp_id_map.get(&*t1), temp_id_map.get(&*t2)) {
(Some(_), Some(&n2)) => {
// Even though we can resolve entirely, it's possible that the remaining upsert
// could conflict. Moving straight to resolved doesn't give us a chance to
// search the store for the conflict.
next.upserts_e.push(UpsertE(t1, a, TypedValue::Ref(n2.0)))
}
(None, Some(&n2)) => next.upserts_e.push(UpsertE(t1, a, TypedValue::Ref(n2.0))),
(Some(&n1), None) => {
next.allocations
.push(Term::AddOrRetract(OpType::Add, Left(n1), a, Right(t2)))
}
(None, None) => next.upserts_ev.push(UpsertEV(t1, a, t2)),
}
}
// There's no particular need to separate resolved from allocations right here and right
// now, although it is convenient.
for term in self.allocations {
// TODO: find an expression that destructures less? I still expect this to be efficient
// but it's a little verbose.
match term {
Term::AddOrRetract(op, Right(t1), a, Right(t2)) => {
match (temp_id_map.get(&*t1), temp_id_map.get(&*t2)) {
(Some(&n1), Some(&n2)) => {
next.resolved
.push(Term::AddOrRetract(op, n1, a, TypedValue::Ref(n2.0)))
}
(None, Some(&n2)) => next.allocations.push(Term::AddOrRetract(
op,
Right(t1),
a,
Left(TypedValue::Ref(n2.0)),
)),
(Some(&n1), None) => {
next.allocations
.push(Term::AddOrRetract(op, Left(n1), a, Right(t2)))
}
(None, None) => {
next.allocations
.push(Term::AddOrRetract(op, Right(t1), a, Right(t2)))
}
}
}
Term::AddOrRetract(op, Right(t), a, Left(v)) => match temp_id_map.get(&*t) {
Some(&n) => next.resolved.push(Term::AddOrRetract(op, n, a, v)),
None => next
.allocations
.push(Term::AddOrRetract(op, Right(t), a, Left(v))),
},
Term::AddOrRetract(op, Left(e), a, Right(t)) => match temp_id_map.get(&*t) {
Some(&n) => {
next.resolved
.push(Term::AddOrRetract(op, e, a, TypedValue::Ref(n.0)))
}
None => next
.allocations
.push(Term::AddOrRetract(op, Left(e), a, Right(t))),
},
Term::AddOrRetract(_, Left(_), _, Left(_)) => unreachable!(),
}
}
next
}
/// Collect id->[a v] pairs that might upsert at this evolutionary step.
pub(crate) fn temp_id_avs(&self) -> Vec<(TempIdHandle, AVPair)> {
let mut temp_id_avs: Vec<(TempIdHandle, AVPair)> = vec![];
// TODO: map/collect.
for &UpsertE(ref t, ref a, ref v) in &self.upserts_e {
// TODO: figure out how to make this less expensive, i.e., don't require
// clone() of an arbitrary value.
temp_id_avs.push((t.clone(), (*a, v.clone())));
}
temp_id_avs
}
/// Evolve potential upserts that haven't resolved into allocations.
pub(crate) fn allocate_unresolved_upserts(&mut self) -> Result<()> {
let mut upserts_ev = vec![];
::std::mem::swap(&mut self.upserts_ev, &mut upserts_ev);
self.allocations.extend(
upserts_ev.into_iter().map(|UpsertEV(t1, a, t2)| {
Term::AddOrRetract(OpType::Add, Right(t1), a, Right(t2))
}),
);
Ok(())
}
/// After evolution is complete, yield the set of tempids that require entid allocation.
///
/// Some of the tempids may be identified, so we also provide a map from tempid to a dense set
/// of contiguous integer labels.
pub(crate) fn temp_ids_in_allocations(
&self,
schema: &Schema,
) -> Result<BTreeMap<TempIdHandle, usize>> {
assert!(self.upserts_e.is_empty(), "All upserts should have been upserted, resolved, or moved to the allocated population!");
assert!(self.upserts_ev.is_empty(), "All upserts should have been upserted, resolved, or moved to the allocated population!");
let mut temp_ids: BTreeSet<TempIdHandle> = BTreeSet::default();
let mut tempid_avs: BTreeMap<(Entid, TypedValueOr<TempIdHandle>), Vec<TempIdHandle>> =
BTreeMap::default();
for term in self.allocations.iter() {
match term {
Term::AddOrRetract(OpType::Add, Right(ref t1), a, Right(ref t2)) => {
temp_ids.insert(t1.clone());
temp_ids.insert(t2.clone());
let attribute: &Attribute = schema.require_attribute_for_entid(*a)?;
if attribute.unique == Some(attribute::Unique::Identity) {
tempid_avs
.entry((*a, Right(t2.clone())))
.or_insert_with(Vec::new)
.push(t1.clone());
}
}
Term::AddOrRetract(OpType::Add, Right(ref t), a, ref x @ Left(_)) => {
temp_ids.insert(t.clone());
let attribute: &Attribute = schema.require_attribute_for_entid(*a)?;
if attribute.unique == Some(attribute::Unique::Identity) {
tempid_avs
.entry((*a, x.clone()))
.or_insert_with(Vec::new)
.push(t.clone());
}
}
Term::AddOrRetract(OpType::Add, Left(_), _, Right(ref t)) => {
temp_ids.insert(t.clone());
}
Term::AddOrRetract(OpType::Add, Left(_), _, Left(_)) => unreachable!(),
Term::AddOrRetract(OpType::Retract, _, _, _) => {
// [:db/retract ...] entities never allocate entids; they have to resolve due to
// other upserts (or they fail the transaction).
}
}
}
// Now we union-find all the known tempids. Two tempids are unioned if they both appear as
// the entity of an `[a v]` upsert, including when the value column `v` is itself a tempid.
let mut uf = unionfind::UnionFind::new(temp_ids.len());
// The union-find implementation from petgraph operates on contiguous indices, so we need to
// maintain the map from our tempids to indices ourselves.
let temp_ids: BTreeMap<TempIdHandle, usize> = temp_ids
.into_iter()
.enumerate()
.map(|(i, tempid)| (tempid, i))
.collect();
debug!(
"need to label tempids aggregated using tempid_avs {:?}",
tempid_avs
);
for vs in tempid_avs.values() {
if let Some(&first_index) = vs.first().and_then(|first| temp_ids.get(first)) {
for tempid in vs {
temp_ids.get(tempid).map(|&i| uf.union(first_index, i));
}
}
}
debug!("union-find aggregation {:?}", uf.clone().into_labeling());
// Now that we have aggregated tempids, we need to label them using the smallest number of
// contiguous labels possible.
let mut tempid_map: BTreeMap<TempIdHandle, usize> = BTreeMap::default();
let mut dense_labels: indexmap::IndexSet<usize> = indexmap::IndexSet::default();
// We want to produce results that are as deterministic as possible, so we allocate labels
// for tempids in sorted order. This has the effect of making "a" allocate before "b",
// which is pleasant for testing.
for (tempid, tempid_index) in temp_ids {
let rep = uf.find_mut(tempid_index);
dense_labels.insert(rep);
dense_labels
.get_full(&rep)
.map(|(dense_index, _)| tempid_map.insert(tempid.clone(), dense_index));
}
debug!(
"labeled tempids using {} labels: {:?}",
dense_labels.len(),
tempid_map
);
Ok(tempid_map)
}
/// After evolution is complete, use the provided allocated entids to segment `self` into
/// populations, each with no references to tempids.
pub(crate) fn into_final_populations(
self,
temp_id_map: &TempIdMap,
) -> Result<FinalPopulations> {
assert!(self.upserts_e.is_empty());
assert!(self.upserts_ev.is_empty());
let mut populations = FinalPopulations::default();
populations.upserted = self.upserted;
populations.resolved = self.resolved;
for term in self.allocations {
let allocated = match term {
// TODO: consider implementing `require` on temp_id_map.
Term::AddOrRetract(op, Right(t1), a, Right(t2)) => {
match (op, temp_id_map.get(&*t1), temp_id_map.get(&*t2)) {
(op, Some(&n1), Some(&n2)) => Term::AddOrRetract(op, n1, a, TypedValue::Ref(n2.0)),
(OpType::Add, _, _) => unreachable!(), // This is a coding error -- every tempid in a :db/add entity should resolve or be allocated.
(OpType::Retract, _, _) => bail!(DbErrorKind::NotYetImplemented(format!("[:db/retract ...] entity referenced tempid that did not upsert: one of {}, {}", t1, t2))),
}
}
Term::AddOrRetract(op, Right(t), a, Left(v)) => {
match (op, temp_id_map.get(&*t)) {
(op, Some(&n)) => Term::AddOrRetract(op, n, a, v),
(OpType::Add, _) => unreachable!(), // This is a coding error.
(OpType::Retract, _) => bail!(DbErrorKind::NotYetImplemented(format!(
"[:db/retract ...] entity referenced tempid that did not upsert: {}",
t
))),
}
}
Term::AddOrRetract(op, Left(e), a, Right(t)) => {
match (op, temp_id_map.get(&*t)) {
(op, Some(&n)) => Term::AddOrRetract(op, e, a, TypedValue::Ref(n.0)),
(OpType::Add, _) => unreachable!(), // This is a coding error.
(OpType::Retract, _) => bail!(DbErrorKind::NotYetImplemented(format!(
"[:db/retract ...] entity referenced tempid that did not upsert: {}",
t
))),
}
}
Term::AddOrRetract(_, Left(_), _, Left(_)) => unreachable!(), // This is a coding error -- these should not be in allocations.
};
populations.allocated.push(allocated);
}
Ok(populations)
}
}
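The union-find aggregation and dense labeling inside `temp_ids_in_allocations` can be hard to follow inline. Below is a standalone sketch of the same idea, not part of the diff: given groups of tempid indices that upsert to the same `[a v]`, union each group and then assign small contiguous labels to the representatives. The `groups` input and `label` function are hypothetical.
```rust
use std::collections::BTreeMap;

use indexmap::IndexSet;
use petgraph::unionfind::UnionFind;

/// Union the indices within each group, then map every index to a dense label
/// shared by all members of its union-find component.
fn label(groups: &[Vec<usize>], n: usize) -> BTreeMap<usize, usize> {
    let mut uf = UnionFind::new(n);
    for group in groups {
        if let Some(&first) = group.first() {
            for &i in group {
                uf.union(first, i);
            }
        }
    }

    let mut dense_labels: IndexSet<usize> = IndexSet::default();
    let mut out = BTreeMap::default();
    for i in 0..n {
        let rep = uf.find_mut(i);
        // `insert_full` returns the insertion index, which serves as a small,
        // contiguous label for the component.
        let (dense, _) = dense_labels.insert_full(rep);
        out.insert(i, dense);
    }
    out
}
```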

46
db/src/watcher.rs Normal file
View file

@ -0,0 +1,46 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
// A trivial interface for extracting information from a transact as it happens.
// We have two situations in which we need to do this:
//
// - InProgress and Conn both have attribute caches. InProgress's is different from Conn's,
// because it needs to be able to roll back. These wish to see changes in a certain set of
// attributes in order to synchronously update the cache during a write.
// - When observers are registered we want to flip some flags as writes occur so that we can
// notify them outside the transaction.
use core_traits::{Entid, TypedValue};
use mentat_core::Schema;
use edn::entities::OpType;
use db_traits::errors::Result;
pub trait TransactWatcher {
fn datom(&mut self, op: OpType, e: Entid, a: Entid, v: &TypedValue);
/// Only return an error if you want to interrupt the transact!
/// Called with the schema _prior to_ the transact -- any attributes or
/// attribute changes transacted during this transact are not reflected in
/// the schema.
fn done(&mut self, t: &Entid, schema: &Schema) -> Result<()>;
}
pub struct NullWatcher();
impl TransactWatcher for NullWatcher {
fn datom(&mut self, _op: OpType, _e: Entid, _a: Entid, _v: &TypedValue) {}
fn done(&mut self, _t: &Entid, _schema: &Schema) -> Result<()> {
Ok(())
}
}
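For a sense of how the trait is meant to be used, here is a hedged sketch of a watcher that counts datoms per attribute during a transact. It is not part of the diff; `CountingWatcher` is a made-up name, the imports mirror those at the top of this file, and it assumes it lives alongside the `TransactWatcher` trait above.
```rust
use std::collections::BTreeMap;

use core_traits::{Entid, TypedValue};
use mentat_core::Schema;
use edn::entities::OpType;
use db_traits::errors::Result;

#[derive(Default)]
struct CountingWatcher {
    counts: BTreeMap<Entid, usize>,
}

impl TransactWatcher for CountingWatcher {
    fn datom(&mut self, _op: OpType, _e: Entid, a: Entid, _v: &TypedValue) {
        // Tally every datom by its attribute.
        *self.counts.entry(a).or_insert(0) += 1;
    }

    fn done(&mut self, _t: &Entid, _schema: &Schema) -> Result<()> {
        // Returning Ok(()) lets the transact proceed; an Err here would
        // interrupt it, per the trait documentation above.
        Ok(())
    }
}
```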

122
db/tests/value_tests.rs Normal file
View file

@ -0,0 +1,122 @@
// Copyright 2016 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
extern crate core_traits;
extern crate edn;
extern crate mentat_db;
extern crate ordered_float;
extern crate rusqlite;
use ordered_float::OrderedFloat;
use edn::symbols;
use core_traits::{TypedValue, ValueType};
use mentat_db::db::TypedSQLValue;
// It's not possible to test to_sql_value_pair since rusqlite::ToSqlOutput doesn't implement
// PartialEq.
#[test]
fn test_from_sql_value_pair() {
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Integer(1234), 0).unwrap(),
TypedValue::Ref(1234)
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Integer(0), 1).unwrap(),
TypedValue::Boolean(false)
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Integer(1), 1).unwrap(),
TypedValue::Boolean(true)
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Integer(0), 5).unwrap(),
TypedValue::Long(0)
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Integer(1234), 5).unwrap(),
TypedValue::Long(1234)
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Real(0.0), 5).unwrap(),
TypedValue::Double(OrderedFloat(0.0))
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Real(0.5), 5).unwrap(),
TypedValue::Double(OrderedFloat(0.5))
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Text(":db/keyword".into()), 10)
.unwrap(),
TypedValue::typed_string(":db/keyword")
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Text(":db/keyword".into()), 13)
.unwrap(),
TypedValue::typed_ns_keyword("db", "keyword")
);
assert_eq!(
TypedValue::from_sql_value_pair(rusqlite::types::Value::Blob(vec![1, 2, 3, 42]), 15)
.unwrap(),
TypedValue::Bytes((vec![1, 2, 3, 42]).into())
);
}
#[test]
fn test_to_edn_value_pair() {
assert_eq!(
TypedValue::Ref(1234).to_edn_value_pair(),
(edn::Value::Integer(1234), ValueType::Ref)
);
assert_eq!(
TypedValue::Boolean(false).to_edn_value_pair(),
(edn::Value::Boolean(false), ValueType::Boolean)
);
assert_eq!(
TypedValue::Boolean(true).to_edn_value_pair(),
(edn::Value::Boolean(true), ValueType::Boolean)
);
assert_eq!(
TypedValue::Long(0).to_edn_value_pair(),
(edn::Value::Integer(0), ValueType::Long)
);
assert_eq!(
TypedValue::Long(1234).to_edn_value_pair(),
(edn::Value::Integer(1234), ValueType::Long)
);
assert_eq!(
TypedValue::Double(OrderedFloat(0.0)).to_edn_value_pair(),
(edn::Value::Float(OrderedFloat(0.0)), ValueType::Double)
);
assert_eq!(
TypedValue::Double(OrderedFloat(0.5)).to_edn_value_pair(),
(edn::Value::Float(OrderedFloat(0.5)), ValueType::Double)
);
assert_eq!(
TypedValue::typed_string(":db/keyword").to_edn_value_pair(),
(edn::Value::Text(":db/keyword".into()), ValueType::String)
);
assert_eq!(
TypedValue::typed_ns_keyword("db", "keyword").to_edn_value_pair(),
(
edn::Value::Keyword(symbols::Keyword::namespaced("db", "keyword")),
ValueType::Keyword
)
);
}

3
docs/.gitignore vendored Normal file
View file

@ -0,0 +1,3 @@
_site
.sass-cache
.jekyll-metadata

24
docs/404.html Normal file
View file

@ -0,0 +1,24 @@
---
layout: default
---
<style type="text/css" media="screen">
.container {
margin: 10px auto;
max-width: 600px;
text-align: center;
}
h1 {
margin: 30px 0;
font-size: 4em;
line-height: 1;
letter-spacing: -1px;
}
</style>
<div class="container">
<h1>404</h1>
<p><strong>Page not found :(</strong></p>
<p>The requested page could not be found.</p>
</div>

View file

@ -1,9 +1,17 @@
# How to contribute to Datomish
---
layout: page
title: Contributing
permalink: /contributing/
---
# How to contribute to Project Mentat
This project is very new, so we'll probably revise these guidelines. Please
comment on a bug before putting significant effort in, if you'd like to
contribute.
You probably want to quickly read the [front page of the wiki](https://github.com/mozilla/mentat/wiki) to get up to speed.
## Guidelines
* Follow the Style Guide (see below).
@ -19,7 +27,7 @@ description including any additional information that might help future
spelunkers (see below).
```
Frobnicate the URL bazzer before flattening pilchard, r=mossop,rnewman. Fixes #6.
Frobnicate the URL bazzer before flattening pilchard. (#123) r=mossop,rnewman.
The frobnication method used is as described in Podder's Miscellany, page 15.
Note that this pull request doesn't include tests, because we're bad people.
@ -29,10 +37,10 @@ Signed-off-by: Random J Developer <random@developer.example.org>
## Example
* Fork this repo at [github.com/mozilla/datomish](https://github.com/mozilla/datomish#fork-destination-box).
* Fork this repo at [github.com/mozilla/mentat](https://github.com/mozilla/mentat#fork-destination-box).
* Clone your fork locally. Make sure you use the correct clone URL.
```
git clone git@github.com:YOURNAME/datomish.git
git clone git@github.com:YOURNAME/mentat.git
```
Check your remotes:
```
@ -40,7 +48,7 @@ git remote --verbose
```
Make sure you have an upstream remote defined:
```
git remote add upstream https://github.com/mozilla/datomish
git remote add upstream https://github.com/mozilla/mentat
```
* Create a new branch to start working on a bug or feature:
@ -56,10 +64,10 @@ git commit --signoff --message "Some commit message"
* Rebase your work during development and before submitting a pull request,
avoiding merge commits, such that your commits are a logical sequence to
read rather than a record of your fevered typing.
* Make sure you're on the correct branch and are pulling from the correct upstream:
* Make sure you're on the correct branch and are pulling from the correct upstream (currently `rust`):
```
git checkout some-new-branch
git pull upstream master --rebase
git pull upstream rust --rebase
```
Or using `git reset --soft` (as described in [a tale of three trees](http://www.infoq.com/presentations/A-Tale-of-Three-Trees))
@ -114,11 +122,43 @@ git commit --amend --reset-author --no-edit
# Style Guide
Our Rust code approximately follows the [Rust style guide](https://github.com/rust-lang-nursery/fmt-rfcs/blob/master/guide/guide.md). We use four-space indents, with categorized and alphabetized imports; see the examples in the tree. We try to follow [these guidelines](https://aturon.github.io/), too.
We do not automatically use `rustfmt` because it tends to make code incrementally worse, but you should be prepared to consider its suggestions.
An example of 'good' Rust code, omitting the license block:
```rust
#![allow(…)]
extern crate foo;
use std::borrow::Borrow;
use std::error::Error;
use std::iter::{once, repeat};
use rusqlite;
use mentat_core::{
Attribute,
AttributeBitFlags,
Entid,
};
type MyError = Box<Error + Send + Sync>;
pub type Thing = Borrow<String>;
pub fn foo_thing(x: Thing) -> Result<(), MyError> {
// Do things here.
}
```
Our JavaScript code follows the [airbnb style](https://github.com/airbnb/javascript)
with a [few exceptions](../../blob/master/.eslintrc). The precise rules are
likely to change a little as we get started, so for now let eslint be your guide.
Our ClojureScript code follows… well, no guide so far.
Our ClojureScript code (no longer live) doesn't follow a specific style guide.
# How to sign-off your commits
@ -160,13 +200,13 @@ then you just add a line saying
Signed-off-by: Random J Developer <random@developer.example.org>
using your real name (sorry, no pseudonyms or anonymous contributions.)
using your real name (sorry, no pseudonyms or anonymous contributions).
If you're using the command line, you can get this done automatically with
$ git commit --signoff
Some GUIs (e.g. SourceTree) have an option to automatically sign commits.
Some GUIs (_e.g._, SourceTree) have an option to automatically sign commits.
If you need to slightly modify patches you receive in order to merge them,
because the code is not exactly the same in your tree and the submitters'.
@ -174,11 +214,11 @@ If you stick strictly to rule (c), you should ask the submitter to submit, but
this is a totally counter-productive waste of time and energy.
Rule (b) allows you to adjust the code, but then it is very impolite to change
one submitter's code and make them endorse your bugs. To solve this problem,
it is recommended that you add a line between the last Signed-off-by header and
it is recommended that you add a line between the last `Signed-off-by` header and
yours, indicating the nature of your changes. While there is nothing mandatory
about this, it seems like prepending the description with your mail and/or name,
all enclosed in square brackets, is noticeable enough to make it obvious that
you are responsible for last-minute changes. Example :
you are responsible for last-minute changes. Example:
Signed-off-by: Random J Developer <random@developer.example.org>
[lucky@maintainer.example.org: struct foo moved from foo.c to foo.h]
@ -187,5 +227,5 @@ you are responsible for last-minute changes. Example :
This practice is particularly helpful if you maintain a stable branch and
want at the same time to credit the author, track changes, merge the fix,
and protect the submitter from complaints. Note that under no circumstances
can you change the author's identity (the From header), as it is the one
can you change the author's identity (the `From` header), as it is the one
which appears in the change-log.

32
docs/Gemfile Normal file
View file

@ -0,0 +1,32 @@
source "https://rubygems.org"
# Hello! This is where you manage which Jekyll version is used to run.
# When you want to use a different version, change it below, save the
# file and run `bundle install`. Run Jekyll with `bundle exec`, like so:
#
# bundle exec jekyll serve
#
# This will help ensure the proper Jekyll version is running.
# Happy Jekylling!
# gem "jekyll", "~> 3.7.3"
# This is the default theme for new Jekyll sites. You may change this to anything you like.
gem "minima", "~> 2.5.1"
# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
# uncomment the line below. To upgrade, run `bundle update github-pages`.
# gem "github-pages", group: :jekyll_plugins
# If you have any plugins, put them here!
group :jekyll_plugins do
gem "jekyll-feed", "~> 0.15.1"
gem "github-pages", "~> 215"
gem "jekyll-commonmark-ghpages", "~> 0.1.6"
end
# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
gem "tzinfo-data", platforms: [:mingw, :mswin, :x64_mingw, :jruby]
# Performance-booster for watching directories on Windows
gem "wdm", "~> 0.1.0" if Gem.win_platform?

277
docs/Gemfile.lock Normal file
View file

@ -0,0 +1,277 @@
GEM
remote: https://rubygems.org/
specs:
activesupport (6.0.4)
concurrent-ruby (~> 1.0, >= 1.0.2)
i18n (>= 0.7, < 2)
minitest (~> 5.1)
tzinfo (~> 1.1)
zeitwerk (~> 2.2, >= 2.2.2)
addressable (2.8.0)
public_suffix (>= 2.0.2, < 5.0)
coffee-script (2.4.1)
coffee-script-source
execjs
coffee-script-source (1.11.1)
colorator (1.1.0)
commonmarker (0.17.13)
ruby-enum (~> 0.5)
concurrent-ruby (1.1.9)
dnsruby (1.61.7)
simpleidn (~> 0.1)
em-websocket (0.5.2)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0.6.0)
ethon (0.14.0)
ffi (>= 1.15.0)
eventmachine (1.2.7)
execjs (2.8.1)
faraday (1.4.3)
faraday-em_http (~> 1.0)
faraday-em_synchrony (~> 1.0)
faraday-excon (~> 1.1)
faraday-net_http (~> 1.0)
faraday-net_http_persistent (~> 1.1)
multipart-post (>= 1.2, < 3)
ruby2_keywords (>= 0.0.4)
faraday-em_http (1.0.0)
faraday-em_synchrony (1.0.0)
faraday-excon (1.1.0)
faraday-net_http (1.0.1)
faraday-net_http_persistent (1.1.0)
ffi (1.15.3)
forwardable-extended (2.6.0)
gemoji (3.0.1)
github-pages (215)
github-pages-health-check (= 1.17.2)
jekyll (= 3.9.0)
jekyll-avatar (= 0.7.0)
jekyll-coffeescript (= 1.1.1)
jekyll-commonmark-ghpages (= 0.1.6)
jekyll-default-layout (= 0.1.4)
jekyll-feed (= 0.15.1)
jekyll-gist (= 1.5.0)
jekyll-github-metadata (= 2.13.0)
jekyll-mentions (= 1.6.0)
jekyll-optional-front-matter (= 0.3.2)
jekyll-paginate (= 1.1.0)
jekyll-readme-index (= 0.3.0)
jekyll-redirect-from (= 0.16.0)
jekyll-relative-links (= 0.6.1)
jekyll-remote-theme (= 0.4.3)
jekyll-sass-converter (= 1.5.2)
jekyll-seo-tag (= 2.7.1)
jekyll-sitemap (= 1.4.0)
jekyll-swiss (= 1.0.0)
jekyll-theme-architect (= 0.1.1)
jekyll-theme-cayman (= 0.1.1)
jekyll-theme-dinky (= 0.1.1)
jekyll-theme-hacker (= 0.1.2)
jekyll-theme-leap-day (= 0.1.1)
jekyll-theme-merlot (= 0.1.1)
jekyll-theme-midnight (= 0.1.1)
jekyll-theme-minimal (= 0.1.1)
jekyll-theme-modernist (= 0.1.1)
jekyll-theme-primer (= 0.5.4)
jekyll-theme-slate (= 0.1.1)
jekyll-theme-tactile (= 0.1.1)
jekyll-theme-time-machine (= 0.1.1)
jekyll-titles-from-headings (= 0.5.3)
jemoji (= 0.12.0)
kramdown (= 2.3.1)
kramdown-parser-gfm (= 1.1.0)
liquid (= 4.0.3)
mercenary (~> 0.3)
minima (= 2.5.1)
nokogiri (>= 1.10.4, < 2.0)
rouge (= 3.26.0)
terminal-table (~> 1.4)
github-pages-health-check (1.17.2)
addressable (~> 2.3)
dnsruby (~> 1.60)
octokit (~> 4.0)
public_suffix (>= 2.0.2, < 5.0)
typhoeus (~> 1.3)
html-pipeline (2.14.0)
activesupport (>= 2)
nokogiri (>= 1.4)
http_parser.rb (0.6.0)
i18n (0.9.5)
concurrent-ruby (~> 1.0)
jekyll (3.9.0)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (~> 0.7)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 2.0)
kramdown (>= 1.17, < 3)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (>= 1.7, < 4)
safe_yaml (~> 1.0)
jekyll-avatar (0.7.0)
jekyll (>= 3.0, < 5.0)
jekyll-coffeescript (1.1.1)
coffee-script (~> 2.2)
coffee-script-source (~> 1.11.1)
jekyll-commonmark (1.3.1)
commonmarker (~> 0.14)
jekyll (>= 3.7, < 5.0)
jekyll-commonmark-ghpages (0.1.6)
commonmarker (~> 0.17.6)
jekyll-commonmark (~> 1.2)
rouge (>= 2.0, < 4.0)
jekyll-default-layout (0.1.4)
jekyll (~> 3.0)
jekyll-feed (0.15.1)
jekyll (>= 3.7, < 5.0)
jekyll-gist (1.5.0)
octokit (~> 4.2)
jekyll-github-metadata (2.13.0)
jekyll (>= 3.4, < 5.0)
octokit (~> 4.0, != 4.4.0)
jekyll-mentions (1.6.0)
html-pipeline (~> 2.3)
jekyll (>= 3.7, < 5.0)
jekyll-optional-front-matter (0.3.2)
jekyll (>= 3.0, < 5.0)
jekyll-paginate (1.1.0)
jekyll-readme-index (0.3.0)
jekyll (>= 3.0, < 5.0)
jekyll-redirect-from (0.16.0)
jekyll (>= 3.3, < 5.0)
jekyll-relative-links (0.6.1)
jekyll (>= 3.3, < 5.0)
jekyll-remote-theme (0.4.3)
addressable (~> 2.0)
jekyll (>= 3.5, < 5.0)
jekyll-sass-converter (>= 1.0, <= 3.0.0, != 2.0.0)
rubyzip (>= 1.3.0, < 3.0)
jekyll-sass-converter (1.5.2)
sass (~> 3.4)
jekyll-seo-tag (2.7.1)
jekyll (>= 3.8, < 5.0)
jekyll-sitemap (1.4.0)
jekyll (>= 3.7, < 5.0)
jekyll-swiss (1.0.0)
jekyll-theme-architect (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-cayman (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-dinky (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-hacker (0.1.2)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-leap-day (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-merlot (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-midnight (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-minimal (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-modernist (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-primer (0.5.4)
jekyll (> 3.5, < 5.0)
jekyll-github-metadata (~> 2.9)
jekyll-seo-tag (~> 2.0)
jekyll-theme-slate (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-tactile (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-time-machine (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-titles-from-headings (0.5.3)
jekyll (>= 3.3, < 5.0)
jekyll-watch (2.2.1)
listen (~> 3.0)
jemoji (0.12.0)
gemoji (~> 3.0)
html-pipeline (~> 2.2)
jekyll (>= 3.0, < 5.0)
kramdown (2.3.1)
rexml
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.3)
listen (3.5.1)
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.3.6)
mini_portile2 (2.6.1)
minima (2.5.1)
jekyll (>= 3.5, < 5.0)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
minitest (5.14.4)
multipart-post (2.1.1)
nokogiri (1.12.5)
mini_portile2 (~> 2.6.1)
racc (~> 1.4)
octokit (4.21.0)
faraday (>= 0.9)
sawyer (~> 0.8.0, >= 0.5.3)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (4.0.6)
racc (1.5.2)
rb-fsevent (0.11.0)
rb-inotify (0.10.1)
ffi (~> 1.0)
rexml (3.2.5)
rouge (3.26.0)
ruby-enum (0.9.0)
i18n
ruby2_keywords (0.0.4)
rubyzip (2.3.0)
safe_yaml (1.0.5)
sass (3.7.4)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
sawyer (0.8.2)
addressable (>= 2.3.5)
faraday (> 0.8, < 2.0)
simpleidn (0.2.1)
unf (~> 0.1.4)
terminal-table (1.8.0)
unicode-display_width (~> 1.1, >= 1.1.1)
thread_safe (0.3.6)
typhoeus (1.4.0)
ethon (>= 0.9.0)
tzinfo (1.2.9)
thread_safe (~> 0.1)
unf (0.1.4)
unf_ext
unf_ext (0.0.7.7)
unicode-display_width (1.7.0)
zeitwerk (2.4.2)
PLATFORMS
ruby
DEPENDENCIES
github-pages (~> 215)
jekyll-commonmark-ghpages (~> 0.1.6)
jekyll-feed (~> 0.15.1)
minima (~> 2.5.1)
tzinfo-data
BUNDLED WITH
2.2.21

18
docs/README.md Normal file
View file

@ -0,0 +1,18 @@
# Project Mentat Documentation Site
This site provides the users of Mentat with the documentation, examples, and tutorials required to use Mentat inside a project.
This site will contain the following:
- API Documentation for Mentat and its SDKs.
- Tutorials for cross compilation of Mentat for other platforms. (Coming)
- Examples of how to design data for storage in Mentat.
- Examples of how to use Mentat and its SDKs. (Coming)
- Quick Start Guides for installing and using Mentat. (Coming)
# Build and run locally
1. Install [Jekyll](https://jekyllrb.com/docs/installation/)
2. `cd docs`
3. `bundle exec jekyll serve --incremental`
4. Open the local docs site at http://127.0.0.1:4000/

43
docs/_config.yml Normal file
View file

@ -0,0 +1,43 @@
# Welcome to Jekyll!
#
# This config file is meant for settings that affect your whole blog, values
# which you are expected to set up once and rarely edit after that. If you find
# yourself editing this file very often, consider using Jekyll's data files
# feature for the data you need to update frequently.
#
# For technical reasons, this file is *NOT* reloaded automatically when you use
# 'bundle exec jekyll serve'. If you change this file, please restart the server process.
# Site settings
# These are used to personalize your new site. If you look in the HTML files,
# you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
# You can create any custom variable you would like, and they will be accessible
# in the templates via {{ site.myvariable }}.
title: Mentat
description: >- # this means to ignore newlines until "baseurl:"
Project Mentat is a persistent, embedded knowledge base. It draws heavily on DataScript and Datomic.
Mentat is intended to be a flexible relational (not key-value, not document-oriented) store that makes
it easy to describe, grow, and reuse your domain schema.
baseurl: /mentat # the subpath of your site, e.g. /blog
url: https://mozilla.github.io # the base hostname & protocol for your site, e.g. http://example.com
api_heading: API Documentation
list_title: Examples
# Build settings
markdown: kramdown
theme: minima
plugins:
- jekyll-feed
# Exclude from processing.
# The following items will not be processed, by default. Create a custom list
# to override the default setting.
# exclude:
# - Gemfile
# - Gemfile.lock
# - node_modules
# - vendor/bundle/
# - vendor/cache/
# - vendor/gems/
# - vendor/ruby/

View file

@ -0,0 +1,24 @@
<footer class="site-footer h-card">
<data class="u-url" href="{{ "/" | relative_url }}"></data>
<div class="wrapper">
<!-- <h2 class="footer-heading"></h2> -->
<div class="footer-col-wrapper">
<div class="footer-col footer-col-1">
<ul class="contact-list">
<li class="p-name">{{ site.title }}</li>
</ul>
</div>
<div class="footer-col footer-col-2">
</div>
<div class="footer-col footer-col-3">
<p>Project Mentat is currently licensed under the Apache License v2.0.</p>
</div>
</div>
</div>
</footer>

View file

@ -0,0 +1,30 @@
<header class="site-header" role="banner">
<div class="wrapper">
{%- assign default_paths = site.pages | map: "path" -%}
{%- assign page_paths = site.header_pages | default: default_paths -%}
<a class="site-title" rel="author" href="{{ "/" | relative_url }}">{{ site.title | escape }}</a>
{%- if page_paths -%}
<nav class="site-nav">
<input type="checkbox" id="nav-trigger" class="nav-trigger" />
<label for="nav-trigger">
<span class="menu-icon">
<svg viewBox="0 0 18 15" width="18px" height="15px">
<path d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.032C17.335,0,18,0.665,18,1.484L18,1.484z M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.032C17.335,6.031,18,6.696,18,7.516L18,7.516z M18,13.516C18,14.335,17.335,15,16.516,15H1.484 C0.665,15,0,14.335,0,13.516l0,0c0-0.82,0.665-1.483,1.484-1.483h15.032C17.335,12.031,18,12.695,18,13.516L18,13.516z"/>
</svg>
</span>
</label>
<div class="trigger">
{%- for path in page_paths -%}
{%- assign my_page = site.pages | where: "path", path | first -%}
{%- if my_page.title -%}
<a class="page-link" href="{{ my_page.url | relative_url }}">{{ my_page.title | escape }}</a>
{%- endif -%}
{%- endfor -%}
</div>
</nav>
{%- endif -%}
</div>
</header>

21
docs/_layouts/home.html Normal file
View file

@ -0,0 +1,21 @@
---
layout: default
---
<div class="home">
{{ content }}
{%- if site.posts.size > 0 -%}
{% assign posts_by_cat = site.posts | group_by:"category" %}
{% for category in posts_by_cat %}
<h2>{{category.name | capitalize}}</h2>
<ul class="post-list">
{% for post in category.items %}
<li><a href="{{post.url | relative_url}}">{{post.title | escape}}</a></li>
{% endfor %}
</ul>
{% endfor %}
{%- endif -%}
</div>

14
docs/_layouts/page.html Normal file
View file

@ -0,0 +1,14 @@
---
layout: default
---
<article class="post">
<header class="post-header">
<h1 class="post-title">{{ page.title | escape }}</h1>
</header>
<div class="post-content">
{{ content }}
</div>
</article>

27
docs/_layouts/post.html Normal file
View file

@ -0,0 +1,27 @@
---
layout: default
---
<article class="post h-entry" itemscope itemtype="http://schema.org/BlogPosting">
<header class="post-header">
<h1 class="post-title p-name" itemprop="name headline">{{ page.title | escape }}</h1>
<p class="post-meta">
<time class="dt-published" datetime="{{ page.date | date_to_xmlschema }}" itemprop="datePublished">
{%- assign date_format = site.minima.date_format | default: "%b %-d, %Y" -%}
{{ page.date | date: date_format }}
</time>
{%- if page.author -%}
<span itemprop="author" itemscope itemtype="http://schema.org/Person"><span class="p-author h-card" itemprop="name">{{ page.author }}</span></span>
{%- endif -%}</p>
</header>
<div class="post-content e-content" itemprop="articleBody">
{{ content }}
</div>
{%- if site.disqus.shortname -%}
{%- include disqus_comments.html -%}
{%- endif -%}
<a class="u-url" href="{{ page.url | relative_url }}" hidden></a>
</article>

View file

@ -0,0 +1,396 @@
---
layout: post
list_title: Examples
title: "Modeling data using Mentat"
date: 2018-04-17 16:07:37 +0100
category: examples
---
# Worked examples of modeling data using Mentat
Used correctly, Mentat makes it easy for you to grow to accommodate new kinds of data, for data to synchronize between devices, for multiple consumers to share data, and even for errors to be fixed.
But what does "correctly" mean?
The following discussion and set of worked examples aim to help. During discussion sections a simplified syntax is used for schema examples.
## Principles
### Think about the domain, not about your UI
Given a set of mockups, or an MVP list of requirements, it's easy to leap into defining a data model that supports exactly those things. In doing so we will likely end up with a data model that can't support future capabilities, or that has crucial mismatches with the real world.
For example, one might design a contact manager UI like macOS's — a list of string fields for a person:
* First name
* Last name
* Address line 1
* Address line 2
* Phone
* _etc._
We might model this in Mentat as simple value properties:
```edn
[:person/name :db.type/string :db.cardinality/one] ; Incorrect: people can have many names!
[:person/home_address_line_one :db.type/string :db.cardinality/one]
[:person/home_address_line_two :db.type/string :db.cardinality/one]
[:person/home_city :db.type/string :db.cardinality/one]
[:person/home_phone :db.type/string :db.cardinality/one]
```
or in JSON as a simple object:
```json
{
"name": "Alice Smith",
"home_address_line_one": "123 Main St",
"home_city": "Anywhere",
"home_phone": "555-867-5309"
}
```
We might realize that this proliferation of attributes is going in the wrong direction, and add nested structure:
```json
{
"name": "Alice Smith",
"home_address": {
"line_one": "123 Main St",
"city": "Anywhere"
}}
```
(quick, is a home phone number a property of the address or the person?)
Or we might allow for some people having multiple addresses and multiple homes:
```json
{"name": "Alice Smith",
"addresses": [{
"type": "home",
"line_one": "123 Main St",
"city": "Anywhere"
}]}
```
There are [lots of reasons the address model is wrong](https://www.mjt.me.uk/posts/falsehoods-programmers-believe-about-addresses/), and [the same is true of names](https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/). But even the _structure_ of this is wrong, when you think about it.
A _physical place_, for our purposes, has an address. (It might have more than one.)
Each place might play a number of _roles_ to a number of people: the same house is the home of everyone who lives there, and the same business address is one of the work addresses for each employee. If I work from home, my work and business addresses are the same. It's not quite true to say that an address is a "home": an address _identifies_ a _place_, and that place _is a home to a person_.
But a typical contact application gets this wrong: the same _strings_ are duplicated (flattened and denormalized) into the independent contact records of each person. If a business moves location, or its building is renamed, we must change the addresses of multiple contacts.
A more correct model for this is _relational_:
```edn
[:person/name :db.type/string :db.cardinality/one]
[:person/lives_at :db.type/ref :db.cardinality/many] ; Points to a place.
[:person/works_at :db.type/ref :db.cardinality/many] ; Points to a place.
[:place/address :db.type/ref :db.cardinality/many] ; A place can have multiple addresses.
[:address/mailing_address :db.type/string :db.cardinality/one ; Each address can be represented once as a string.
:db.unique/identity]
[:address/city :db.type/string :db.cardinality/one] ; Perhaps this is useful?
```
Imagine that Alice works from home, and Bob works at his office on South Street. Alice's data looks like this:
```edn
[{:person/name "Alice Smith"
:person/lives_at "alice_home"
:person/works_at "alice_home"}
{:db/id "alice_home"
:place/address "main_street_123"}
{:db/id "main_street_123"
:address/mailing_address "123 Main St, Anywhere, WA 12345, USA"
:address/city "Anywhere"}]
```
and Bob's like this:
```edn
[{:person/name "Bob Salmon"
:person/works_at "bob_office"}
{:db/id "bob_office"
:place/owner "Example Holdings LLC"
:place/address "south_street_555"}
{:db/id "south_street_555"
:address/mailing_address "555 South St, Anywhere, WA 12345, USA"
:address/city "Anywhere"}]
```
Now if Alice (ID 1234) moves her business out of her house (1235) into an office in Bob's building (1236), we simply break one relationship and add a new one to a new place with the same address:
```edn
[[:db/retract 1234 :person/works_at 1235] ; Alice no longer works at home.
[:db/add 1234 :person/works_at "new_office"]
{:db/id "new_office"
:place/address [:address/mailing_address "555 South St, Anywhere, WA 12345, USA"]}]
```
If the building is now renamed to "The Office Factory", we can update its address in one step, affecting both Alice's and Bob's offices:
```edn
[[:db/retract 1236 :address/mailing_address "555 South St, Anywhere, WA 12345, USA"]
[:db/add 1236 :address/mailing_address "The Office Factory, South St, Anywhere, WA 12345, USA"]]
```
You can see here how changes are minimal and correspond to real changes in the domain — two properties that help with syncing. There is no duplication of strings.
We can find everyone who works at The Office Factory in a simple query without comparing strings across 'records':
```edn
[:find ?name
:where [?address :address/mailing_address "The Office Factory, South St, Anywhere, WA 12345, USA"]
[?office :place/address ?address]
[?person :person/works_at ?office]
[?person :person/name ?name]]
```
Let's say we later want to model move-in and move-out dates — useful for employment records and immigration paperwork!
Trying to add this to the JSON model is an exercise in frustration, because there is no stable way to identify people or places! (Go ahead, try it.)
To do it in Mentat simply requires defining a small bit of vocabulary:
```edn
[:place.change/person :db.type/ref :db.cardinality/many]
[:place.change/from :db.type/ref :db.cardinality/one] ; optional
[:place.change/to :db.type/ref :db.cardinality/one] ; optional
[:place.change/role :db.type/ref :db.cardinality/one] ; :person/lives_at or :person/works_at
[:place.change/on :db.type/instant :db.cardinality/one]
[:place.change/reason :db.type/string :db.cardinality/one] ; optional
```
so we can describe Alice's office move:
```edn
[{:place.change/person 1234
:place.change/from 1235
:place.change/to 1237
:place.change/role :person/works_at
:place.change/on #inst "2018-02-02T13:00:00Z"}]
```
or Jane's sale of her holiday home:
```edn
[{:place.change/person 2468
:place.change/reason "Sale"
:place.change/from 1235
:place.change/role :person/lives_at
:place.change/on #inst "2018-08-12T14:00:00Z"}]
```
Note that we don't need to repeat the addresses, change the existing data, or complicate matters for existing code.
Now we can find everyone who moved office in February:
```edn
[:find ?name
:where [?move :place.change/role :person/works_at]
[?move :place.change/on ?on]
[(>= ?on #inst "2018-02-01T00:00:00Z")]
[(< ?on #inst "2018-03-01T00:00:00Z")]
[?move :place.change/person ?person]
[?person :person/name ?name]]
```
## Tend towards recording observations, not changing state
These principles are all different aspects of normalization.
The introduction of fine-grained entities to represent data pushes us towards immutability: changes are increasingly changing an 'arrow' to point at one immutable entity or another, rather than re-describing a mutable entity.
In the previous example we introduced _places_ and _addresses_. Places and addresses themselves rarely change, allowing us to mostly isolate the churn in our data to the meaningful relationships between entities.
Another example of this approach is shown in modeling browser history.
Firefox's representation of history is, at its core, relatively simplistic: just two tables, a little like this:
```sql
CREATE TABLE history (
id INTEGER PRIMARY KEY,
guid TEXT NOT NULL UNIQUE,
url TEXT NOT NULL UNIQUE,
title TEXT
);
CREATE TABLE visits (
id INTEGER PRIMARY KEY,
history_id INTEGER NOT NULL REFERENCES history(id),
type TINYINT,
timestamp INTEGER
);
```
Each time a URL is visited, an entry is added to the `visits` table and a row is added or updated in `history`. The title of the fetched page is used to update `history.title`, so that `history.title` always represents the most recently encountered title.
This works fine until more features are added.
### Forgetting
Browsers often have some capacity for deleting history. Sometimes this appears in the form of an explicit 'forget' operation — "Forget the last five minutes of browsing". Deleting visits in this way is fine: `DELETE FROM visits WHERE timestamp < ?`. But the mutability in the data model — title — trips us up. We're unable to roll back the title of the history entry.
### Syncing
But even if you are using Mentat or Datomic, and can turn to the log to reconstruct the old state, a mutable title on `history` will cause conflicts when syncing: one side's observed titles will 'lose' and be discarded in order to avoid a conflict. That's not right: those titles _were seen_. Unlike a conflicting counter or flag, these weren't abortive, temporary states; they were _observations of the world_, so there shouldn't be a winner and a loser.
### Containers
The true data model becomes apparent when we consider containers. Containers are a Firefox feature to sandbox the cookies, site data, and history of different named sub-profiles. You can have a container just for Facebook, or one for your banking; those Facebook cookies won't follow you around the web in your 'personal' container. You can simultaneously use separate Gmail accounts for work and personal email.
When Firefox added container support, it did so by annotating visits with a `container`:
```sql
CREATE TABLE visits (
  id INTEGER PRIMARY KEY,
  history_id INTEGER NOT NULL REFERENCES history(id),
  type TINYINT,
  timestamp INTEGER,
  container INTEGER
);
```
This means that each container _competes for the title on `history`_. If you visit `facebook.com` in your usual logged-in container, the browser will run something like this SQL:
```sql
UPDATE history
SET title = '(2) Facebook'
WHERE url = 'https://www.facebook.com';
```
If you visit it in the wrong container by mistake, you'll get the Facebook login page, and Firefox will run:
```sql
UPDATE history
SET title = 'Facebook - Log In or Sign Up'
WHERE url = 'https://www.facebook.com';
```
Next time you open your history, _you'll see the login page title, even if you had a logged-in `facebook.com` session open in another tab_. There's no way to differentiate between the containers' views.
The correct data model for history is:
- Users visit a URL on a device in a container.
- Pages are fetched as a result of a visit (or dynamically after load). Pages can embed media and other resources.
- Pages, being HTML, have titles.
- Pages, titles, and visits are all _observations_, and as such cannot conflict.
- The _last observed_ title to show for a URL is an _aggregation_ of those events.
The entire notion of a history table — a concept centered on the URL — having a title is a subtly incorrect choice that causes problems with more modern browser features.
Modeled in Mentat:
```edn
[{:db/ident       :visit/visitedOnDevice
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :visit/visitAt
  :db/valueType   :db.type/instant
  :db/cardinality :db.cardinality/one}
 {:db/ident       :site/visit
  :db/valueType   :db.type/ref
  :db/isComponent true
  :db/cardinality :db.cardinality/many}
 {:db/ident       :site/url
  :db/valueType   :db.type/string
  :db/unique      :db.unique/identity
  :db/cardinality :db.cardinality/one
  :db/index       true}
 {:db/ident       :visit/page
  :db/valueType   :db.type/ref
  :db/isComponent true              ; Debatable.
  :db/cardinality :db.cardinality/one}
 {:db/ident       :page/title
  :db/valueType   :db.type/string
  :db/fulltext    true
  :db/index       true
  :db/cardinality :db.cardinality/one}
 {:db/ident       :visit/container
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}]
```
Create some containers:
```edn
[{:db/ident :container/facebook}
 {:db/ident :container/personal}]
```
Add a device:
```edn
[{:db/ident :device/my-desktop}]
```
Visit Facebook in each container:
```edn
[{:visit/visitedOnDevice :device/my-desktop
  :visit/visitAt         #inst "2018-04-06T18:46:00Z"
  :visit/container       :container/facebook
  :db/id                 "fbvisit"
  :visit/page            "fbpage"}
 {:db/id      "fbpage"
  :page/title "(2) Facebook"}
 {:site/url   "https://www.facebook.com"
  :site/visit "fbvisit"}]
```
```edn
[{:visit/visitedOnDevice :device/my-desktop
  :visit/visitAt         #inst "2018-04-06T18:46:02Z"
  :visit/container       :container/personal
  :db/id                 "personalvisit"
  :visit/page            "personalpage"}
 {:db/id      "personalpage"
  :page/title "Facebook - Log In or Sign Up"}
 {:site/url   "https://www.facebook.com"
  :site/visit "personalvisit"}]
```
Now we can show the title from the latest visit in a given container:
```edn
.q [:find (the ?title) (max ?visitDate)
    :where [?site :site/url "https://www.facebook.com"]
           [?site :site/visit ?visit]
           [?visit :visit/container :container/facebook]
           [?visit :visit/visitAt ?visitDate]
           [?visit :visit/page ?page]
           [?page :page/title ?title]]
=>
| (the ?title)   | (max ?visitDate)        |
| -------------- | ----------------------- |
| "(2) Facebook" | 2018-04-06 18:46:00 UTC |

.q [:find (the ?title) (max ?visitDate)
    :where [?site :site/url "https://www.facebook.com"]
           [?site :site/visit ?visit]
           [?visit :visit/container :container/personal]
           [?visit :visit/visitAt ?visitDate]
           [?visit :visit/page ?page]
           [?page :page/title ?title]]
=>
| (the ?title)                   | (max ?visitDate)        |
| ------------------------------ | ----------------------- |
| "Facebook - Log In or Sign Up" | 2018-04-06 18:46:02 UTC |
```
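Because titles are per-page observations rather than a single mutable column, the "latest title regardless of container" view is the same aggregation with the container clause dropped. A minimal sketch, reusing the example data above:
```edn
[:find (the ?title) (max ?visitDate)
 :where [?site :site/url "https://www.facebook.com"]
        [?site :site/visit ?visit]
        ; No :visit/container clause: aggregate across every container.
        [?visit :visit/visitAt ?visitDate]
        [?visit :visit/page ?page]
        [?page :page/title ?title]]
```
With the visits above this would surface "Facebook - Log In or Sign Up" from the 18:46:02 visit, the most recent observation across both containers.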
## Normalize; you can always denormalize for use.
To come.
## Use unique identities and cardinality-one attributes to make merging happen during a sync.
To come.
## Reify to handle conflict and atomicity.
To come.

docs/about.md Normal file

@ -0,0 +1,122 @@
---
layout: page
title: About
permalink: /about/
---
# Project Mentat
Project Mentat is a persistent, embedded knowledge base. It draws heavily on [DataScript](https://github.com/tonsky/datascript) and [Datomic](http://datomic.com).
Mentat is implemented in Rust.
The first version of Project Mentat, named Datomish, [was written in ClojureScript](https://github.com/mozilla/mentat/tree/clojure), targeting both Node (on top of `promise_sqlite`) and Firefox (on top of `Sqlite.jsm`). It also worked in pure Clojure on the JVM on top of `jdbc-sqlite`. The name was changed to avoid confusion with [Datomic](http://datomic.com).
The Rust implementation gives us a smaller compiled output, better performance, more type safety, better tooling, and easier deployment into Firefox and mobile platforms.
---
## Motivation
Mentat is intended to be a flexible relational (not key-value, not document-oriented) store that makes it easy to describe, grow, and reuse your domain schema.
By abstracting away the storage schema, and by exposing change listeners outside the database (not via triggers), we hope to make domain schemas stable, and allow both the data store itself and embedding applications to use better architectures, meeting performance goals in a way that allows future evolution.
## Data storage is hard
We've observed that data storage is a particular area of difficulty for software development teams:
- It's hard to define storage schemas well. A developer must:
- Model their domain entities and relationships.
- Encode that model _efficiently_ and _correctly_ using the features available in the database.
- Plan for future extensions and performance tuning.
In a SQL database, the same schema definition defines everything from high-level domain relationships through to numeric field sizes in the same smear of keywords. It's difficult for someone unfamiliar with the domain to determine from such a schema what's a domain fact and what's an implementation concession — are all part numbers always 16 characters long, or are we trying to save space? — or, indeed, whether a missing constraint is deliberate or a bug.
The developer must think about foreign key constraints, compound uniqueness, and nullability. They must consider indexing, synchronizing, and stable identifiers. Most developers simply don't do enough work in SQL to get all of these things right. Storage thus becomes the specialty of a few individuals.
Which one of these is correct?
```edn
{:db/id          :person/email
 :db/valueType   :db.type/string
 :db/cardinality :db.cardinality/many   ; People can have multiple email addresses.
 :db/unique      :db.unique/identity    ; For our purposes, each email identifies one person.
 :db/index       true}                  ; We want fast lookups by email.
{:db/id          :person/friend
 :db/valueType   :db.type/ref
 :db/cardinality :db.cardinality/many}  ; People can have many friends.
```
```sql
CREATE TABLE people (
  id INTEGER PRIMARY KEY,  -- Bug: because of the primary key, each person can have no more than 1 email.
  email VARCHAR(64)        -- Bug?: no NOT NULL, so a person can have no email.
                           -- Bug: nobody will ever have a long email address, right?
);
CREATE TABLE friendships (
  person INTEGER REFERENCES people(id),  -- Bug?: no indexing, so lookups by friend or person will be slow.
  friend INTEGER REFERENCES people(id)   -- Bug: no compound uniqueness constraint, so we can have dupe friendships.
);
```
They both have limitations — the Mentat schema allows only for an open world (it's possible to declare friendships with people whose email isn't known), and requires validation code to enforce email string correctness — but we think that even such a tiny SQL example is harder to understand and obscures important domain decisions.
- Queries are intimately tied to structural storage choices. That not only hides the declarative domain-level meaning of the query — it's hard to tell what a query is trying to do when it's a 100-line mess of subqueries and `LEFT OUTER JOIN`s — but it also means a simple structural schema change requires auditing _every query_ for correctness.
- Developers often capture less event-shaped data than they perhaps should, simply because their initial requirements don't warrant it. It's quite common to later want to [know when a fact was recorded](https://bugzilla.mozilla.org/show_bug.cgi?id=1341939), or _in which order_ two facts were recorded (particularly for migrations), or on which device an event took place… or even that a fact was _ever_ recorded and then deleted.
- Common queries are hard. Storing values only once, upserts, complicated joins, and group-wise maxima are all difficult for non-expert developers to get right. (A small upsert sketch follows this list.)
- It's hard to evolve storage schemas. Writing a robust SQL schema migration is hard, particularly if a bad migration has ever escaped into the wild! Teams learn to fear and avoid schema changes, and eventually they ship a table called `metadata`, with three `TEXT` columns, so they never have to write a migration again. That decision pushes storage complexity into application code. (Or they start storing unversioned JSON blobs in the database…)
- It's hard to share storage with another component, let alone share _data_ with another component. Conway's Law applies: your software system will often grow to have one database per team.
- It's hard to build efficient storage and querying architectures. Materialized views require knowledge of triggers, or the implementation of bottleneck APIs. _Ad hoc_ caches are often wrong, are almost never formally designed (do you want a write-back, write-through, or write-around cache? Do you know the difference?), and often aren't reusable. The average developer, faced with a SQL database, has little choice but to build a simple table that tries to meet every need.
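As a small, hedged illustration of the upsert point above: because `:person/email` is declared `:db.unique/identity`, transacting a map that mentions an email the store has already seen resolves to the existing person rather than creating a duplicate. The addresses here are made up for the example:
```edn
;; If "alice@example.com" already identifies a person, this upserts onto that
;; entity; otherwise a new person is created. "bob" is a tempid that resolves
;; the same way via his unique email.
[{:person/email  "alice@example.com"
  :person/friend "bob"}
 {:db/id         "bob"
  :person/email  "bob@example.com"}]
```
In SQL the equivalent is a hand-rolled `INSERT ... ON CONFLICT` or a select-then-insert race; here the uniqueness declaration carries that logic.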
## Comparison to DataScript
DataScript asks the question: "What if creating a database were as cheap as creating a Hashmap?"
Mentat is not interested in that. Instead, it's strongly interested in persistence and performance, with very little interest in immutable databases (databases as values) or in throwaway use.
One might say that Mentat's question is: "What if an SQLite database could store arbitrary relations, for arbitrary consumers, without them having to coordinate an up-front storage-level schema?"
(Note that [domain-level schemas are very valuable](http://martinfowler.com/articles/schemaless/).)
Another possible question would be: "What if we could bake some of the concepts of [CQRS and event sourcing](http://www.baeldung.com/cqrs-event-sourced-architecture-resources) into a persistent relational store, such that the transaction log itself were of value to queries?"
Some thought has been given to how databases as values — long-term references to a snapshot of the store at an instant in time — could work in this model. It's not impossible; it simply has different performance characteristics.
Just like DataScript, Mentat speaks Datalog for querying and takes additions and retractions as input to a transaction.
Unlike DataScript, Mentat exposes free-text indexing, thanks to SQLite.
## Comparison to Datomic
Datomic is a server-side, enterprise-grade data storage system. Datomic has a beautiful conceptual model. It's intended to be backed by a storage cluster, in which it keeps index chunks forever. Index chunks are replicated to peers, allowing it to run queries at the edges. Writes are serialized through a transactor.
Many of these design decisions are inapplicable to deployed desktop software; indeed, the use of multiple JVM processes makes Datomic's use in a small desktop app, or a mobile device, prohibitive.
Mentat was designed for embedding, initially in an experimental Electron app ([Tofino](https://github.com/mozilla/tofino)). It is less concerned with exposing consistent database states outside transaction boundaries, because that's less important here, and dropping some of these requirements allows us to leverage SQLite itself.
## Comparison to SQLite
SQLite is a traditional SQL database in most respects: schemas conflate semantic, structural, and datatype concerns, as described above; the main interface with the database is human-first textual queries; sparse and graph-structured data are 'unnatural', if not always inefficient; experimenting with and evolving data models are error-prone and complicated activities; and so on.
Mentat aims to offer many of the advantages of SQLite — single-file use, embeddability, and good performance — while building a more relaxed, reusable, and expressive data model on top.
---
## Contributing
Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.
See [CONTRIBUTING.md](/CONTRIBUTING/) for further notes.
This project is very new, so we'll probably revise these guidelines. Please
comment on an issue before putting significant effort in if you'd like to
contribute.


@ -0,0 +1,60 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>All Classes</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
<script type="text/javascript" src="script.js"></script>
</head>
<body>
<h1 class="bar">All&nbsp;Classes</h1>
<div class="indexContainer">
<ul>
<li><a href="org/mozilla/mentat/CacheDirection.html" title="enum in org.mozilla.mentat" target="classFrame">CacheDirection</a></li>
<li><a href="org/mozilla/mentat/CollResult.html" title="class in org.mozilla.mentat" target="classFrame">CollResult</a></li>
<li><a href="org/mozilla/mentat/CollResultHandler.html" title="interface in org.mozilla.mentat" target="classFrame"><span class="interfaceName">CollResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/ColResultIterator.html" title="class in org.mozilla.mentat" target="classFrame">ColResultIterator</a></li>
<li><a href="org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat" target="classFrame">EntityBuilder</a></li>
<li><a href="org/mozilla/mentat/InProgress.html" title="class in org.mozilla.mentat" target="classFrame">InProgress</a></li>
<li><a href="org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat" target="classFrame">InProgressBuilder</a></li>
<li><a href="org/mozilla/mentat/InProgressTransactionResult.html" title="class in org.mozilla.mentat" target="classFrame">InProgressTransactionResult</a></li>
<li><a href="org/mozilla/mentat/InProgressTransactionResult.ByReference.html" title="class in org.mozilla.mentat" target="classFrame">InProgressTransactionResult.ByReference</a></li>
<li><a href="org/mozilla/mentat/InProgressTransactionResult.ByValue.html" title="class in org.mozilla.mentat" target="classFrame">InProgressTransactionResult.ByValue</a></li>
<li><a href="org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat" target="classFrame"><span class="interfaceName">JNA</span></a></li>
<li><a href="org/mozilla/mentat/JNA.EntityBuilder.html" title="class in org.mozilla.mentat" target="classFrame">JNA.EntityBuilder</a></li>
<li><a href="org/mozilla/mentat/JNA.InProgress.html" title="class in org.mozilla.mentat" target="classFrame">JNA.InProgress</a></li>
<li><a href="org/mozilla/mentat/JNA.InProgressBuilder.html" title="class in org.mozilla.mentat" target="classFrame">JNA.InProgressBuilder</a></li>
<li><a href="org/mozilla/mentat/JNA.QueryBuilder.html" title="class in org.mozilla.mentat" target="classFrame">JNA.QueryBuilder</a></li>
<li><a href="org/mozilla/mentat/JNA.RelResult.html" title="class in org.mozilla.mentat" target="classFrame">JNA.RelResult</a></li>
<li><a href="org/mozilla/mentat/JNA.RelResultIter.html" title="class in org.mozilla.mentat" target="classFrame">JNA.RelResultIter</a></li>
<li><a href="org/mozilla/mentat/JNA.Store.html" title="class in org.mozilla.mentat" target="classFrame">JNA.Store</a></li>
<li><a href="org/mozilla/mentat/JNA.TxReport.html" title="class in org.mozilla.mentat" target="classFrame">JNA.TxReport</a></li>
<li><a href="org/mozilla/mentat/JNA.TypedValue.html" title="class in org.mozilla.mentat" target="classFrame">JNA.TypedValue</a></li>
<li><a href="org/mozilla/mentat/JNA.TypedValueList.html" title="class in org.mozilla.mentat" target="classFrame">JNA.TypedValueList</a></li>
<li><a href="org/mozilla/mentat/JNA.TypedValueListIter.html" title="class in org.mozilla.mentat" target="classFrame">JNA.TypedValueListIter</a></li>
<li><a href="org/mozilla/mentat/Mentat.html" title="class in org.mozilla.mentat" target="classFrame">Mentat</a></li>
<li><a href="org/mozilla/mentat/Query.html" title="class in org.mozilla.mentat" target="classFrame">Query</a></li>
<li><a href="org/mozilla/mentat/RelResult.html" title="class in org.mozilla.mentat" target="classFrame">RelResult</a></li>
<li><a href="org/mozilla/mentat/RelResultHandler.html" title="interface in org.mozilla.mentat" target="classFrame"><span class="interfaceName">RelResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/RelResultIterator.html" title="class in org.mozilla.mentat" target="classFrame">RelResultIterator</a></li>
<li><a href="org/mozilla/mentat/RustError.html" title="class in org.mozilla.mentat" target="classFrame">RustError</a></li>
<li><a href="org/mozilla/mentat/RustError.ByReference.html" title="class in org.mozilla.mentat" target="classFrame">RustError.ByReference</a></li>
<li><a href="org/mozilla/mentat/RustError.ByValue.html" title="class in org.mozilla.mentat" target="classFrame">RustError.ByValue</a></li>
<li><a href="org/mozilla/mentat/ScalarResultHandler.html" title="interface in org.mozilla.mentat" target="classFrame"><span class="interfaceName">ScalarResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat" target="classFrame">TupleResult</a></li>
<li><a href="org/mozilla/mentat/TupleResultHandler.html" title="interface in org.mozilla.mentat" target="classFrame"><span class="interfaceName">TupleResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/TxChange.html" title="class in org.mozilla.mentat" target="classFrame">TxChange</a></li>
<li><a href="org/mozilla/mentat/TxChange.ByReference.html" title="class in org.mozilla.mentat" target="classFrame">TxChange.ByReference</a></li>
<li><a href="org/mozilla/mentat/TxChange.ByValue.html" title="class in org.mozilla.mentat" target="classFrame">TxChange.ByValue</a></li>
<li><a href="org/mozilla/mentat/TxChangeList.html" title="class in org.mozilla.mentat" target="classFrame">TxChangeList</a></li>
<li><a href="org/mozilla/mentat/TxChangeList.ByReference.html" title="class in org.mozilla.mentat" target="classFrame">TxChangeList.ByReference</a></li>
<li><a href="org/mozilla/mentat/TxChangeList.ByValue.html" title="class in org.mozilla.mentat" target="classFrame">TxChangeList.ByValue</a></li>
<li><a href="org/mozilla/mentat/TxObserverCallback.html" title="interface in org.mozilla.mentat" target="classFrame"><span class="interfaceName">TxObserverCallback</span></a></li>
<li><a href="org/mozilla/mentat/TxReport.html" title="class in org.mozilla.mentat" target="classFrame">TxReport</a></li>
<li><a href="org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat" target="classFrame">TypedValue</a></li>
</ul>
</div>
</body>
</html>


@ -0,0 +1,60 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>All Classes</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
<script type="text/javascript" src="script.js"></script>
</head>
<body>
<h1 class="bar">All&nbsp;Classes</h1>
<div class="indexContainer">
<ul>
<li><a href="org/mozilla/mentat/CacheDirection.html" title="enum in org.mozilla.mentat">CacheDirection</a></li>
<li><a href="org/mozilla/mentat/CollResult.html" title="class in org.mozilla.mentat">CollResult</a></li>
<li><a href="org/mozilla/mentat/CollResultHandler.html" title="interface in org.mozilla.mentat"><span class="interfaceName">CollResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/ColResultIterator.html" title="class in org.mozilla.mentat">ColResultIterator</a></li>
<li><a href="org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></li>
<li><a href="org/mozilla/mentat/InProgress.html" title="class in org.mozilla.mentat">InProgress</a></li>
<li><a href="org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></li>
<li><a href="org/mozilla/mentat/InProgressTransactionResult.html" title="class in org.mozilla.mentat">InProgressTransactionResult</a></li>
<li><a href="org/mozilla/mentat/InProgressTransactionResult.ByReference.html" title="class in org.mozilla.mentat">InProgressTransactionResult.ByReference</a></li>
<li><a href="org/mozilla/mentat/InProgressTransactionResult.ByValue.html" title="class in org.mozilla.mentat">InProgressTransactionResult.ByValue</a></li>
<li><a href="org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat"><span class="interfaceName">JNA</span></a></li>
<li><a href="org/mozilla/mentat/JNA.EntityBuilder.html" title="class in org.mozilla.mentat">JNA.EntityBuilder</a></li>
<li><a href="org/mozilla/mentat/JNA.InProgress.html" title="class in org.mozilla.mentat">JNA.InProgress</a></li>
<li><a href="org/mozilla/mentat/JNA.InProgressBuilder.html" title="class in org.mozilla.mentat">JNA.InProgressBuilder</a></li>
<li><a href="org/mozilla/mentat/JNA.QueryBuilder.html" title="class in org.mozilla.mentat">JNA.QueryBuilder</a></li>
<li><a href="org/mozilla/mentat/JNA.RelResult.html" title="class in org.mozilla.mentat">JNA.RelResult</a></li>
<li><a href="org/mozilla/mentat/JNA.RelResultIter.html" title="class in org.mozilla.mentat">JNA.RelResultIter</a></li>
<li><a href="org/mozilla/mentat/JNA.Store.html" title="class in org.mozilla.mentat">JNA.Store</a></li>
<li><a href="org/mozilla/mentat/JNA.TxReport.html" title="class in org.mozilla.mentat">JNA.TxReport</a></li>
<li><a href="org/mozilla/mentat/JNA.TypedValue.html" title="class in org.mozilla.mentat">JNA.TypedValue</a></li>
<li><a href="org/mozilla/mentat/JNA.TypedValueList.html" title="class in org.mozilla.mentat">JNA.TypedValueList</a></li>
<li><a href="org/mozilla/mentat/JNA.TypedValueListIter.html" title="class in org.mozilla.mentat">JNA.TypedValueListIter</a></li>
<li><a href="org/mozilla/mentat/Mentat.html" title="class in org.mozilla.mentat">Mentat</a></li>
<li><a href="org/mozilla/mentat/Query.html" title="class in org.mozilla.mentat">Query</a></li>
<li><a href="org/mozilla/mentat/RelResult.html" title="class in org.mozilla.mentat">RelResult</a></li>
<li><a href="org/mozilla/mentat/RelResultHandler.html" title="interface in org.mozilla.mentat"><span class="interfaceName">RelResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/RelResultIterator.html" title="class in org.mozilla.mentat">RelResultIterator</a></li>
<li><a href="org/mozilla/mentat/RustError.html" title="class in org.mozilla.mentat">RustError</a></li>
<li><a href="org/mozilla/mentat/RustError.ByReference.html" title="class in org.mozilla.mentat">RustError.ByReference</a></li>
<li><a href="org/mozilla/mentat/RustError.ByValue.html" title="class in org.mozilla.mentat">RustError.ByValue</a></li>
<li><a href="org/mozilla/mentat/ScalarResultHandler.html" title="interface in org.mozilla.mentat"><span class="interfaceName">ScalarResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></li>
<li><a href="org/mozilla/mentat/TupleResultHandler.html" title="interface in org.mozilla.mentat"><span class="interfaceName">TupleResultHandler</span></a></li>
<li><a href="org/mozilla/mentat/TxChange.html" title="class in org.mozilla.mentat">TxChange</a></li>
<li><a href="org/mozilla/mentat/TxChange.ByReference.html" title="class in org.mozilla.mentat">TxChange.ByReference</a></li>
<li><a href="org/mozilla/mentat/TxChange.ByValue.html" title="class in org.mozilla.mentat">TxChange.ByValue</a></li>
<li><a href="org/mozilla/mentat/TxChangeList.html" title="class in org.mozilla.mentat">TxChangeList</a></li>
<li><a href="org/mozilla/mentat/TxChangeList.ByReference.html" title="class in org.mozilla.mentat">TxChangeList.ByReference</a></li>
<li><a href="org/mozilla/mentat/TxChangeList.ByValue.html" title="class in org.mozilla.mentat">TxChangeList.ByValue</a></li>
<li><a href="org/mozilla/mentat/TxObserverCallback.html" title="interface in org.mozilla.mentat"><span class="interfaceName">TxObserverCallback</span></a></li>
<li><a href="org/mozilla/mentat/TxReport.html" title="class in org.mozilla.mentat">TxReport</a></li>
<li><a href="org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></li>
</ul>
</div>
</body>
</html>


@ -0,0 +1,149 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>Constant Field Values</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
<script type="text/javascript" src="script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="Constant Field Values";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="overview-tree.html">Tree</a></li>
<li><a href="deprecated-list.html">Deprecated</a></li>
<li><a href="index-files/index-1.html">Index</a></li>
<li><a href="help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev</li>
<li>Next</li>
</ul>
<ul class="navList">
<li><a href="index.html?constant-values.html" target="_top">Frames</a></li>
<li><a href="constant-values.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="header">
<h1 title="Constant Field Values" class="title">Constant Field Values</h1>
<h2 title="Contents">Contents</h2>
<ul>
<li><a href="#org.mozilla">org.mozilla.*</a></li>
</ul>
</div>
<div class="constantValuesContainer"><a name="org.mozilla">
<!-- -->
</a>
<h2 title="org.mozilla">org.mozilla.*</h2>
<ul class="blockList">
<li class="blockList">
<table class="constantsSummary" border="0" cellpadding="3" cellspacing="0" summary="Constant Field Values table, listing constant fields, and values">
<caption><span>org.mozilla.mentat.<a href="org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></span><span class="tabEnd">&nbsp;</span></caption>
<tr>
<th class="colFirst" scope="col">Modifier and Type</th>
<th scope="col">Constant Field</th>
<th class="colLast" scope="col">Value</th>
</tr>
<tbody>
<tr class="altColor">
<td class="colFirst"><a name="org.mozilla.mentat.JNA.JNA_LIBRARY_NAME">
<!-- -->
</a><code>public&nbsp;static&nbsp;final&nbsp;<a href="https://developer.android.com/reference/java/lang/String.html?is-external=true" title="class or interface in java.lang">String</a></code></td>
<td><code><a href="org/mozilla/mentat/JNA.html#JNA_LIBRARY_NAME">JNA_LIBRARY_NAME</a></code></td>
<td class="colLast"><code>"mentat_ffi"</code></td>
</tr>
</tbody>
</table>
</li>
</ul>
</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="overview-tree.html">Tree</a></li>
<li><a href="deprecated-list.html">Deprecated</a></li>
<li><a href="index-files/index-1.html">Index</a></li>
<li><a href="help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev</li>
<li>Next</li>
</ul>
<ul class="navList">
<li><a href="index.html?constant-values.html" target="_top">Frames</a></li>
<li><a href="constant-values.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,120 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>Deprecated List</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
<script type="text/javascript" src="script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="Deprecated List";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="overview-tree.html">Tree</a></li>
<li class="navBarCell1Rev">Deprecated</li>
<li><a href="index-files/index-1.html">Index</a></li>
<li><a href="help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev</li>
<li>Next</li>
</ul>
<ul class="navList">
<li><a href="index.html?deprecated-list.html" target="_top">Frames</a></li>
<li><a href="deprecated-list.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="header">
<h1 title="Deprecated API" class="title">Deprecated API</h1>
<h2 title="Contents">Contents</h2>
</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="overview-tree.html">Tree</a></li>
<li class="navBarCell1Rev">Deprecated</li>
<li><a href="index-files/index-1.html">Index</a></li>
<li><a href="help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev</li>
<li>Next</li>
</ul>
<ul class="navList">
<li><a href="index.html?deprecated-list.html" target="_top">Frames</a></li>
<li><a href="deprecated-list.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,217 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>API Help</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
<script type="text/javascript" src="script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="API Help";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="overview-tree.html">Tree</a></li>
<li><a href="deprecated-list.html">Deprecated</a></li>
<li><a href="index-files/index-1.html">Index</a></li>
<li class="navBarCell1Rev">Help</li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev</li>
<li>Next</li>
</ul>
<ul class="navList">
<li><a href="index.html?help-doc.html" target="_top">Frames</a></li>
<li><a href="help-doc.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="header">
<h1 class="title">How This API Document Is Organized</h1>
<div class="subTitle">This API (Application Programming Interface) document has pages corresponding to the items in the navigation bar, described as follows.</div>
</div>
<div class="contentContainer">
<ul class="blockList">
<li class="blockList">
<h2>Package</h2>
<p>Each package has a page that contains a list of its classes and interfaces, with a summary for each. This page can contain six categories:</p>
<ul>
<li>Interfaces (italic)</li>
<li>Classes</li>
<li>Enums</li>
<li>Exceptions</li>
<li>Errors</li>
<li>Annotation Types</li>
</ul>
</li>
<li class="blockList">
<h2>Class/Interface</h2>
<p>Each class, interface, nested class and nested interface has its own separate page. Each of these pages has three sections consisting of a class/interface description, summary tables, and detailed member descriptions:</p>
<ul>
<li>Class inheritance diagram</li>
<li>Direct Subclasses</li>
<li>All Known Subinterfaces</li>
<li>All Known Implementing Classes</li>
<li>Class/interface declaration</li>
<li>Class/interface description</li>
</ul>
<ul>
<li>Nested Class Summary</li>
<li>Field Summary</li>
<li>Constructor Summary</li>
<li>Method Summary</li>
</ul>
<ul>
<li>Field Detail</li>
<li>Constructor Detail</li>
<li>Method Detail</li>
</ul>
<p>Each summary entry contains the first sentence from the detailed description for that item. The summary entries are alphabetical, while the detailed descriptions are in the order they appear in the source code. This preserves the logical groupings established by the programmer.</p>
</li>
<li class="blockList">
<h2>Annotation Type</h2>
<p>Each annotation type has its own separate page with the following sections:</p>
<ul>
<li>Annotation Type declaration</li>
<li>Annotation Type description</li>
<li>Required Element Summary</li>
<li>Optional Element Summary</li>
<li>Element Detail</li>
</ul>
</li>
<li class="blockList">
<h2>Enum</h2>
<p>Each enum has its own separate page with the following sections:</p>
<ul>
<li>Enum declaration</li>
<li>Enum description</li>
<li>Enum Constant Summary</li>
<li>Enum Constant Detail</li>
</ul>
</li>
<li class="blockList">
<h2>Tree (Class Hierarchy)</h2>
<p>There is a <a href="overview-tree.html">Class Hierarchy</a> page for all packages, plus a hierarchy for each package. Each hierarchy page contains a list of classes and a list of interfaces. The classes are organized by inheritance structure starting with <code>java.lang.Object</code>. The interfaces do not inherit from <code>java.lang.Object</code>.</p>
<ul>
<li>When viewing the Overview page, clicking on "Tree" displays the hierarchy for all packages.</li>
<li>When viewing a particular package, class or interface page, clicking "Tree" displays the hierarchy for only that package.</li>
</ul>
</li>
<li class="blockList">
<h2>Deprecated API</h2>
<p>The <a href="deprecated-list.html">Deprecated API</a> page lists all of the API that have been deprecated. A deprecated API is not recommended for use, generally due to improvements, and a replacement API is usually given. Deprecated APIs may be removed in future implementations.</p>
</li>
<li class="blockList">
<h2>Index</h2>
<p>The <a href="index-files/index-1.html">Index</a> contains an alphabetic list of all classes, interfaces, constructors, methods, and fields.</p>
</li>
<li class="blockList">
<h2>Prev/Next</h2>
<p>These links take you to the next or previous class, interface, package, or related page.</p>
</li>
<li class="blockList">
<h2>Frames/No Frames</h2>
<p>These links show and hide the HTML frames. All pages are available with or without frames.</p>
</li>
<li class="blockList">
<h2>All Classes</h2>
<p>The <a href="allclasses-noframe.html">All Classes</a> link shows all classes and interfaces except non-static nested types.</p>
</li>
<li class="blockList">
<h2>Serialized Form</h2>
<p>Each serializable or externalizable class has a description of its serialization fields and methods. This information is of interest to re-implementors, not to developers using the API. While there is no link in the navigation bar, you can get to this information by going to any serialized class and clicking "Serialized Form" in the "See also" section of the class description.</p>
</li>
<li class="blockList">
<h2>Constant Field Values</h2>
<p>The <a href="constant-values.html">Constant Field Values</a> page lists the static final fields and their values.</p>
</li>
</ul>
<span class="emphasizedPhrase">This help file applies to API documentation generated using the standard doclet.</span></div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="overview-tree.html">Tree</a></li>
<li><a href="deprecated-list.html">Deprecated</a></li>
<li><a href="index-files/index-1.html">Index</a></li>
<li class="navBarCell1Rev">Help</li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev</li>
<li>Next</li>
</ul>
<ul class="navList">
<li><a href="index.html?help-doc.html" target="_top">Frames</a></li>
<li><a href="help-doc.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,255 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>A-Index</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="../stylesheet.css" title="Style">
<script type="text/javascript" src="../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="A-Index";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev Letter</li>
<li><a href="index-2.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-1.html" target="_top">Frames</a></li>
<li><a href="index-1.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="contentContainer"><a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;<a name="I:A">
<!-- -->
</a>
<h2 class="title">A</h2>
<dl>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#add-java.lang.String-long-">add(String, long)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#add-java.lang.String-boolean-">add(String, boolean)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#add-java.lang.String-double-">add(String, double)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#add-java.lang.String-java.util.Date-">add(String, Date)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#add-java.lang.String-java.lang.String-">add(String, String)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#add-java.lang.String-java.util.UUID-">add(String, UUID)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#add-long-java.lang.String-long-">add(long, String, long)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#add-long-java.lang.String-boolean-">add(long, String, boolean)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#add-long-java.lang.String-double-">add(long, String, double)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#add-long-java.lang.String-java.util.Date-">add(long, String, Date)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#add-long-java.lang.String-java.lang.String-">add(long, String, String)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#add-long-java.lang.String-java.util.UUID-">add(long, String, UUID)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#addKeyword-java.lang.String-java.lang.String-">addKeyword(String, String)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#addKeyword-long-java.lang.String-java.lang.String-">addKeyword(long, String, String)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/EntityBuilder.html#addRef-java.lang.String-long-">addRef(String, long)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/EntityBuilder.html" title="class in org.mozilla.mentat">EntityBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/InProgressBuilder.html#addRef-long-java.lang.String-long-">addRef(long, String, long)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/InProgressBuilder.html" title="class in org.mozilla.mentat">InProgressBuilder</a></dt>
<dd>
<div class="block">Asserts the value of attribute `keyword` to be the provided `value`.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/ColResultIterator.html#advanceIterator--">advanceIterator()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/ColResultIterator.html" title="class in org.mozilla.mentat">ColResultIterator</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/RelResultIterator.html#advanceIterator--">advanceIterator()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/RelResultIterator.html" title="class in org.mozilla.mentat">RelResultIterator</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asBool-java.lang.Integer-">asBool(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the <a href="https://developer.android.com/reference/java/lang/Boolean.html?is-external=true" title="class or interface in java.lang"><code>Boolean</code></a> at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asBoolean--">asBoolean()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a <a href="https://developer.android.com/reference/java/lang/Boolean.html?is-external=true" title="class or interface in java.lang"><code>Boolean</code></a>.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asDate-java.lang.Integer-">asDate(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the <a href="https://developer.android.com/reference/java/util/Date.html?is-external=true" title="class or interface in java.util"><code>Date</code></a> at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asDate--">asDate()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a <a href="https://developer.android.com/reference/java/util/Date.html?is-external=true" title="class or interface in java.util"><code>Date</code></a>.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asDouble-java.lang.Integer-">asDouble(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the <a href="https://developer.android.com/reference/java/lang/Double.html?is-external=true" title="class or interface in java.lang"><code>Double</code></a> at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asDouble--">asDouble()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a <a href="https://developer.android.com/reference/java/lang/Double.html?is-external=true" title="class or interface in java.lang"><code>Double</code></a>.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asEntid-java.lang.Integer-">asEntid(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the Entid at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asEntid--">asEntid()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a Entid.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asKeyword-java.lang.Integer-">asKeyword(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the keyword <a href="https://developer.android.com/reference/java/lang/String.html?is-external=true" title="class or interface in java.lang"><code>String</code></a> at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asKeyword--">asKeyword()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a keyword <a href="https://developer.android.com/reference/java/lang/String.html?is-external=true" title="class or interface in java.lang"><code>String</code></a>.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asLong-java.lang.Integer-">asLong(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the <a href="https://developer.android.com/reference/java/lang/Long.html?is-external=true" title="class or interface in java.lang"><code>Long</code></a> at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asLong--">asLong()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a <a href="https://developer.android.com/reference/java/lang/Long.html?is-external=true" title="class or interface in java.lang"><code>Long</code></a>.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asString-java.lang.Integer-">asString(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the <a href="https://developer.android.com/reference/java/lang/String.html?is-external=true" title="class or interface in java.lang"><code>String</code></a> at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asString--">asString()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a <a href="https://developer.android.com/reference/java/lang/String.html?is-external=true" title="class or interface in java.lang"><code>String</code></a>.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TupleResult.html#asUUID-java.lang.Integer-">asUUID(Integer)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TupleResult.html" title="class in org.mozilla.mentat">TupleResult</a></dt>
<dd>
<div class="block">Return the <a href="https://developer.android.com/reference/java/util/UUID.html?is-external=true" title="class or interface in java.util"><code>UUID</code></a> at the specified index.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TypedValue.html#asUUID--">asUUID()</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TypedValue.html" title="class in org.mozilla.mentat">TypedValue</a></dt>
<dd>
<div class="block">This value as a <a href="https://developer.android.com/reference/java/util/UUID.html?is-external=true" title="class or interface in java.util"><code>UUID</code></a>.</div>
</dd>
</dl>
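<div class="block">Usage sketch (editorial addition, not generated Javadoc): the accessors above read typed columns out of a <code>TupleResult</code> by index, or convert a single <code>TypedValue</code>. The column layout and names below are assumptions for illustration; only the accessor names come from this index.</div>
<pre>
import java.util.UUID;
import org.mozilla.mentat.TupleResult;
import org.mozilla.mentat.TypedValue;

class TupleResultExample {
    // Reads three hypothetical columns (Long, String, UUID) from a tuple row and
    // converts a standalone TypedValue to a Double.
    static void printRow(TupleResult row, TypedValue score) {
        Long count = row.asLong(0);     // column 0 as a Long
        String name = row.asString(1);  // column 1 as a String
        UUID id = row.asUUID(2);        // column 2 as a UUID
        System.out.println(name + " (" + id + "): count=" + count + ", score=" + score.asDouble());
    }
}
</pre>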
<a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev Letter</li>
<li><a href="index-2.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-1.html" target="_top">Frames</a></li>
<li><a href="index-1.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,154 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>J-Index</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="../stylesheet.css" title="Style">
<script type="text/javascript" src="../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="J-Index";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-9.html">Prev Letter</a></li>
<li><a href="index-11.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-10.html" target="_top">Frames</a></li>
<li><a href="index-10.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="contentContainer"><a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;<a name="I:J">
<!-- -->
</a>
<h2 class="title">J</h2>
<dl>
<dt><a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat"><span class="typeNameLink">JNA</span></a> - Interface in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>
<div class="block">JNA interface for FFI to Mentat's Rust library
Each function definition here link directly to a function in Mentat's FFI crate.</div>
</dd>
<dt><a href="../org/mozilla/mentat/JNA.EntityBuilder.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.EntityBuilder</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.InProgress.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.InProgress</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.InProgressBuilder.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.InProgressBuilder</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.QueryBuilder.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.QueryBuilder</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.RelResult.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.RelResult</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.RelResultIter.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.RelResultIter</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.Store.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.Store</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.TxReport.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.TxReport</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.TypedValue.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.TypedValue</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.TypedValueList.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.TypedValueList</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><a href="../org/mozilla/mentat/JNA.TypedValueListIter.html" title="class in org.mozilla.mentat"><span class="typeNameLink">JNA.TypedValueListIter</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#JNA_LIBRARY_NAME">JNA_LIBRARY_NAME</a></span> - Static variable in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#JNA_NATIVE_LIB">JNA_NATIVE_LIB</a></span> - Static variable in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
</dl>
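<div class="block">Loading sketch (editorial addition): <code>JNA</code> declares the FFI surface into Mentat's Rust library. The snippet below shows the standard com.sun.jna binding pattern (JNA 5.x); whether Mentat's code loads the library this way, and whether <code>JNA_LIBRARY_NAME</code> is a <code>String</code> naming the native library, are assumptions rather than statements from this index.</div>
<pre>
import com.sun.jna.Native;
import org.mozilla.mentat.JNA;

class JnaLoadExample {
    // Binds the JNA interface to the native Mentat library using the stock
    // com.sun.jna loader; each method on JNA then dispatches to the matching
    // function in Mentat's FFI crate.
    static JNA load() {
        return Native.load(JNA.JNA_LIBRARY_NAME, JNA.class);
    }
}
</pre>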
<a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-9.html">Prev Letter</a></li>
<li><a href="index-11.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-10.html" target="_top">Frames</a></li>
<li><a href="index-10.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,125 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>L-Index</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="../stylesheet.css" title="Style">
<script type="text/javascript" src="../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="L-Index";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-10.html">Prev Letter</a></li>
<li><a href="index-12.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-11.html" target="_top">Frames</a></li>
<li><a href="index-11.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="contentContainer"><a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;<a name="I:L">
<!-- -->
</a>
<h2 class="title">L</h2>
<dl>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/TxChangeList.html#len">len</a></span> - Variable in class org.mozilla.mentat.<a href="../org/mozilla/mentat/TxChangeList.html" title="class in org.mozilla.mentat">TxChangeList</a></dt>
<dd>&nbsp;</dd>
</dl>
<a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-10.html">Prev Letter</a></li>
<li><a href="index-12.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-11.html" target="_top">Frames</a></li>
<li><a href="index-11.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,144 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>M-Index</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="../stylesheet.css" title="Style">
<script type="text/javascript" src="../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="M-Index";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-11.html">Prev Letter</a></li>
<li><a href="index-13.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-12.html" target="_top">Frames</a></li>
<li><a href="index-12.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="contentContainer"><a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;<a name="I:M">
<!-- -->
</a>
<h2 class="title">M</h2>
<dl>
<dt><a href="../org/mozilla/mentat/Mentat.html" title="class in org.mozilla.mentat"><span class="typeNameLink">Mentat</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>
<div class="block">The primary class for accessing Mentat's API.<br/>
This class provides all of the basic API that can be found in Mentat's Store struct.<br/>
The raw pointer it holds is a pointer to a Store.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/Mentat.html#Mentat-java.lang.String-">Mentat(String)</a></span> - Constructor for class org.mozilla.mentat.<a href="../org/mozilla/mentat/Mentat.html" title="class in org.mozilla.mentat">Mentat</a></dt>
<dd>
<div class="block">Open a connection to a Store in a given location.<br/>
If the store does not already exist, one will be created.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/Mentat.html#Mentat--">Mentat()</a></span> - Constructor for class org.mozilla.mentat.<a href="../org/mozilla/mentat/Mentat.html" title="class in org.mozilla.mentat">Mentat</a></dt>
<dd>
<div class="block">Open a connection to an in-memory Store.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/Mentat.html#Mentat-org.mozilla.mentat.JNA.Store-">Mentat(JNA.Store)</a></span> - Constructor for class org.mozilla.mentat.<a href="../org/mozilla/mentat/Mentat.html" title="class in org.mozilla.mentat">Mentat</a></dt>
<dd>
<div class="block">Create a new Mentat with the provided pointer to a Mentat Store</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/RustError.html#message">message</a></span> - Variable in class org.mozilla.mentat.<a href="../org/mozilla/mentat/RustError.html" title="class in org.mozilla.mentat">RustError</a></dt>
<dd>&nbsp;</dd>
</dl>
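<div class="block">Usage sketch (editorial addition): the constructors above give two documented ways to open a store, on disk (created if it does not already exist) and in memory. The path below is illustrative.</div>
<pre>
import org.mozilla.mentat.Mentat;

class OpenStoreExample {
    static void open() {
        Mentat onDisk = new Mentat("/tmp/example.db"); // open or create a store at this location
        Mentat inMemory = new Mentat();                // open an in-memory store
    }
}
</pre>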
<a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-11.html">Prev Letter</a></li>
<li><a href="index-13.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-12.html" target="_top">Frames</a></li>
<li><a href="index-12.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,125 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>O-Index</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="../stylesheet.css" title="Style">
<script type="text/javascript" src="../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="O-Index";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-12.html">Prev Letter</a></li>
<li><a href="index-14.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-13.html" target="_top">Frames</a></li>
<li><a href="index-13.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="contentContainer"><a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;<a name="I:O">
<!-- -->
</a>
<h2 class="title">O</h2>
<dl>
<dt><a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a> - package org.mozilla.mentat</dt>
<dd>&nbsp;</dd>
</dl>
<a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-12.html">Prev Letter</a></li>
<li><a href="index-14.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-13.html" target="_top">Frames</a></li>
<li><a href="index-13.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>


@ -0,0 +1,163 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_152-release) on Thu Jun 28 11:01:15 BST 2018 -->
<title>Q-Index</title>
<meta name="date" content="2018-06-28">
<link rel="stylesheet" type="text/css" href="../stylesheet.css" title="Style">
<script type="text/javascript" src="../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="Q-Index";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-13.html">Prev Letter</a></li>
<li><a href="index-15.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-14.html" target="_top">Frames</a></li>
<li><a href="index-14.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="contentContainer"><a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;<a name="I:Q">
<!-- -->
</a>
<h2 class="title">Q</h2>
<dl>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/Mentat.html#query-java.lang.String-">query(String)</a></span> - Method in class org.mozilla.mentat.<a href="../org/mozilla/mentat/Mentat.html" title="class in org.mozilla.mentat">Mentat</a></dt>
<dd>
<div class="block">Start a query.</div>
</dd>
<dt><a href="../org/mozilla/mentat/Query.html" title="class in org.mozilla.mentat"><span class="typeNameLink">Query</span></a> - Class in <a href="../org/mozilla/mentat/package-summary.html">org.mozilla.mentat</a></dt>
<dd>
<div class="block">This class allows you to construct a query, bind values to variables and run those queries against a mentat DB.</div>
</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/Query.html#Query-org.mozilla.mentat.JNA.QueryBuilder-">Query(JNA.QueryBuilder)</a></span> - Constructor for class org.mozilla.mentat.<a href="../org/mozilla/mentat/Query.html" title="class in org.mozilla.mentat">Query</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_boolean-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-int-">query_builder_bind_boolean(JNA.QueryBuilder, String, int)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_double-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-double-">query_builder_bind_double(JNA.QueryBuilder, String, double)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_kw-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-java.lang.String-">query_builder_bind_kw(JNA.QueryBuilder, String, String)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_long-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-long-">query_builder_bind_long(JNA.QueryBuilder, String, long)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_ref-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-long-">query_builder_bind_ref(JNA.QueryBuilder, String, long)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_ref_kw-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-java.lang.String-">query_builder_bind_ref_kw(JNA.QueryBuilder, String, String)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_string-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-java.lang.String-">query_builder_bind_string(JNA.QueryBuilder, String, String)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_timestamp-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-long-">query_builder_bind_timestamp(JNA.QueryBuilder, String, long)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_bind_uuid-org.mozilla.mentat.JNA.QueryBuilder-java.lang.String-com.sun.jna.Pointer-">query_builder_bind_uuid(JNA.QueryBuilder, String, Pointer)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_destroy-org.mozilla.mentat.JNA.QueryBuilder-">query_builder_destroy(JNA.QueryBuilder)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_execute-org.mozilla.mentat.JNA.QueryBuilder-org.mozilla.mentat.RustError.ByReference-">query_builder_execute(JNA.QueryBuilder, RustError.ByReference)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_execute_coll-org.mozilla.mentat.JNA.QueryBuilder-org.mozilla.mentat.RustError.ByReference-">query_builder_execute_coll(JNA.QueryBuilder, RustError.ByReference)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_execute_scalar-org.mozilla.mentat.JNA.QueryBuilder-org.mozilla.mentat.RustError.ByReference-">query_builder_execute_scalar(JNA.QueryBuilder, RustError.ByReference)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.html#query_builder_execute_tuple-org.mozilla.mentat.JNA.QueryBuilder-org.mozilla.mentat.RustError.ByReference-">query_builder_execute_tuple(JNA.QueryBuilder, RustError.ByReference)</a></span> - Method in interface org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.html" title="interface in org.mozilla.mentat">JNA</a></dt>
<dd>&nbsp;</dd>
<dt><span class="memberNameLink"><a href="../org/mozilla/mentat/JNA.QueryBuilder.html#QueryBuilder--">QueryBuilder()</a></span> - Constructor for class org.mozilla.mentat.<a href="../org/mozilla/mentat/JNA.QueryBuilder.html" title="class in org.mozilla.mentat">JNA.QueryBuilder</a></dt>
<dd>&nbsp;</dd>
</dl>
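<div class="block">Usage sketch (editorial addition): <code>Mentat.query(String)</code> is documented above as starting a query, and <code>Query</code> as the class that binds values and runs queries. That <code>query</code> returns a <code>Query</code>, the EDN text shown, and the binding/execution methods on <code>Query</code> are assumptions for illustration; the underlying FFI calls are the <code>query_builder_*</code> functions listed above.</div>
<pre>
import org.mozilla.mentat.Mentat;
import org.mozilla.mentat.Query;

class QueryExample {
    // Starts a query against an open store. Binding values to variables and
    // executing it would be done through methods on Query, which this index
    // page does not enumerate.
    static Query begin(Mentat mentat) {
        return mentat.query("[:find ?name :where [?e :person/name ?name]]");
    }
}
</pre>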
<a href="index-1.html">A</a>&nbsp;<a href="index-2.html">B</a>&nbsp;<a href="index-3.html">C</a>&nbsp;<a href="index-4.html">D</a>&nbsp;<a href="index-5.html">E</a>&nbsp;<a href="index-6.html">F</a>&nbsp;<a href="index-7.html">G</a>&nbsp;<a href="index-8.html">H</a>&nbsp;<a href="index-9.html">I</a>&nbsp;<a href="index-10.html">J</a>&nbsp;<a href="index-11.html">L</a>&nbsp;<a href="index-12.html">M</a>&nbsp;<a href="index-13.html">O</a>&nbsp;<a href="index-14.html">Q</a>&nbsp;<a href="index-15.html">R</a>&nbsp;<a href="index-16.html">S</a>&nbsp;<a href="index-17.html">T</a>&nbsp;<a href="index-18.html">U</a>&nbsp;<a href="index-19.html">V</a>&nbsp;</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../org/mozilla/mentat/package-summary.html">Package</a></li>
<li>Class</li>
<li><a href="../overview-tree.html">Tree</a></li>
<li><a href="../deprecated-list.html">Deprecated</a></li>
<li class="navBarCell1Rev">Index</li>
<li><a href="../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="index-13.html">Prev Letter</a></li>
<li><a href="index-15.html">Next Letter</a></li>
</ul>
<ul class="navList">
<li><a href="../index.html?index-files/index-14.html" target="_top">Frames</a></li>
<li><a href="index-14.html" target="_top">No&nbsp;Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../allclasses-noframe.html">All&nbsp;Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>

Some files were not shown because too many files have changed in this diff.