Commit graph

267 commits

Author SHA1 Message Date
Brian Grinstead
99b7e89116 Make struct Conn public. (#419) r=rnewman 2017-04-18 11:04:44 -07:00
Nick Alexander
5369f03464 Improve parsing of nested edn::ValueAndSpan streams. r=rnewman (#393)
* Pre: Expose more in edn.

* Pre: Make it easier to work with ValueAndSpan.

with_spans() is a temporary hack, needed only because I don't care to
parse the bootstrap assertions from text right now.

* Part 1a: Add `value_and_span` for parsing nested `edn::ValueAndSpan` instances.

I wasn't able to abstract over `edn::Value` and `edn::ValueAndSpan`;
there are multiple obstacles.  I chose to roll with
`edn::ValueAndSpan` since it exposes the additional span information
that we will want to form good error messages in the future.

* Part 1b: Add keyword_map() parsing an `edn::Value::Vector` into an `edn::Value::Map`.

* Part 1c: Add `Log`/`.log(...)` for logging parser progress.

This is a terrible hack, but it sure helps to debug complicated nested
parsers.  I don't even know what a principled approach would look
like; since our parser combinators are so frequently expressed in
code, it's hard to imagine a data-driven interpreter that can help
debug things.

* Part 2: Use `value_and_span` apparatus in tx-parser/.

I break an abstraction boundary by returning a value column
`edn::ValueAndSpan` rather than just an `edn::Value`.  That is, the
transaction processor shouldn't care where the `edn::Value` it is
processing arose -- even if we care to track that information, we should
bake it into the `Entity` type.  We do this because we need to
dynamically parse the value column to support nested maps, and parsing
requires a full `edn::ValueAndSpan`.  Alternately, we could cheat and
fake the spans when parsing nested maps, but that's potentially
expensive.

* Part 3: Use `value_and_span` apparatus in query-parser/.

* Part 4: Use `value_and_span` apparatus in root crate.

* Review comment: Make Span and SpanPosition Copy.

* Review comment: nits.

* Review comment: Make `or` be `or_exactly`.

I baked the eof checking directly into the parser, rather than using
the skip and eof parsers.  I also took the time to restore some tests
that were mistakenly commented out.

* Review comment: Extract and use def_matches_* macros.

* Review comment: .map() as late as possible.
2017-04-06 10:06:28 -07:00
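
For illustration, a minimal Rust sketch of the general shape of the keyword_map() idea from Part 1b above -- grouping a flat vector such as [:foo 1 2 :bar 3] into a map keyed by keyword. The Item/i64 stand-ins are invented for this sketch; the real combinator operates on `edn::ValueAndSpan` and carries span information, and its exact semantics live in the parser itself.

  use std::collections::BTreeMap;

  // Stand-in for an EDN vector element: either a keyword marker or a plain value.
  // (Invented for this sketch; not the edn crate's types.)
  #[derive(Debug, Clone)]
  enum Item {
      Keyword(String),
      Value(i64),
  }

  // Group a flat vector like [:foo 1 2 :bar 3] into {":foo": [1, 2], ":bar": [3]}.
  // Returns None if a value appears before any keyword.
  fn keyword_map(items: &[Item]) -> Option<BTreeMap<String, Vec<i64>>> {
      let mut map = BTreeMap::new();
      let mut current: Option<String> = None;
      for item in items {
          match item {
              Item::Keyword(k) => {
                  map.entry(k.clone()).or_insert_with(Vec::new);
                  current = Some(k.clone());
              },
              Item::Value(v) => {
                  let key = current.clone()?;   // malformed: value before any keyword
                  map.get_mut(&key)?.push(*v);
              },
          }
      }
      Some(map)
  }

  fn main() {
      let input = vec![
          Item::Keyword(":foo".into()), Item::Value(1), Item::Value(2),
          Item::Keyword(":bar".into()), Item::Value(3),
      ];
      println!("{:?}", keyword_map(&input));
  }
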
Richard Newman
a5023c70cb Use Rc for TypedValue, Variable, and query Ident keywords. (#395) r=nalexander
Part 1, core: use Rc for String and Keyword.
Part 2, query: use Rc for Variable.
Part 3, sql: use Rc for args in SQLiteQueryBuilder.
Part 4, query-algebrizer: use Rc.
Part 5, db: use Rc.
Part 6, query-parser: use Rc.
Part 7, query-projector: use Rc.
Part 8, query-translator: use Rc.
Part 9, top level: use Rc.
Part 10: intern Ident and IdentOrKeyword.
2017-04-02 21:38:36 -07:00
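
A minimal sketch of why the Rc change above matters (the TypedValue shown is a stand-in, not Mentat's real enum): cloning an Rc<String> only bumps a reference count, so keywords and idents can be shared across values instead of copied.

  use std::rc::Rc;

  // Hypothetical stand-in for a query value that embeds a keyword/string.
  #[derive(Debug, Clone)]
  enum TypedValue {
      Keyword(Rc<String>),
      Long(i64),
  }

  fn main() {
      let ident = Rc::new(String::from(":person/name"));

      // Each clone of the Rc shares the same heap allocation; only a refcount changes.
      let a = TypedValue::Keyword(Rc::clone(&ident));
      let b = TypedValue::Keyword(Rc::clone(&ident));

      if let (TypedValue::Keyword(ka), TypedValue::Keyword(kb)) = (&a, &b) {
          assert!(Rc::ptr_eq(ka, kb));          // same allocation, not a copy
      }
      assert_eq!(Rc::strong_count(&ident), 3);  // ident + a + b

      let _other = TypedValue::Long(42);
      println!("shared keyword: {}", ident);
  }
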
Richard Newman
97749833d0 Algebrize and translate numeric constraints. (#306) r=nalexander 2017-03-22 10:19:47 -07:00
Nick Alexander
15b4195a6e Schema alteration. Fixes #294 and #295. (#370) r=rnewman
* Pre: Don't retract :db/ident in test.

Datomic (and eventually Mentat) don't allow retracting :db/ident in
this way, so this runs afoul of future work to support mutating
metadata.

* Pre: s/VALUETYPE/VALUE_TYPE/.

This is consistent with the capitalization (which is "valueType") and
the other identifier.

* Pre: Remove some single quotes from error output.

* Part 1: Make materialized views be uniform [e a v value_type_tag].

This looks ahead to a time when we could support arbitrary
user-defined materialized views.  For now, the "idents" materialized
view is those datoms of the form [e :db/ident :namespaced/keyword] and
the "schema" materialized view is those datoms of the form [e a v]
where a is in a particular set of attributes that will become clear in
the following commits.

This change is not backwards compatible, so I'm removing the open
current (really, v2) test.  It'll be re-instated when we get to
https://github.com/mozilla/mentat/issues/194.

* Pre: Map TypedValue::Ref to TypedValue::Keyword in debug output.

* Part 3: Separate `schema_to_mutate` from the `schema` used to interpret.

This is just to keep track of the expected changes during
bootstrapping.  I want bootstrap metadata mutations to flow through
the same code path as metadata mutations during regular transactions;
by differentiating the schema used for interpretation from the schema
that will be updated I expect to be able to apply bootstrap metadata
mutations to an empty schema and have things like materialized views
created (using the regular code paths).

This commit has been re-ordered for conceptual clarity, but it won't
compile because it references the metadata module.  It's possible to
make it compile -- the functionality is there in the schema module --
but it's not worth the rebasing effort until after review (and
possibly not even then, since we'll squash down to a single commit to
land).

* Part 2: Maintain entids separately from idents.

In order to support historical idents, we need to distinguish the
"current" entid -> ident map from the "complete historical" ident ->
entid map.  This is what Datomic does; in Datomic, an ident is
never retracted (although it can be replaced).  This approach is an
important part of allowing multiple consumers to share a schema
fragment as it migrates forward.

This fixes a limitation of the Clojure implementation, which did not
handle historical idents across knowledge base close and re-open.

The "entids" materialized view is naturally a slice of the "datoms"
table.  The "idents" materialized view is a slice of the
"transactions" table.  I hope that representing in this way, and
casting the problem in this light, might generalize to future
materialized views.

* Pre: Add DiffSet.

* Part 4: Collect mutations to a `Schema`.

I haven't taken your review comment about consuming AttributeBuilder
during each fluent function.  If you read my response and still want
this, I'm happy to do it in review.

* Part 5: Handle :db/ident and :db.{install,alter}/attribute.

This "loops" the committed datoms out of the SQL store and back
through the metadata (schema, but in future also partition map)
processor.  The metadata processor updates the schema and produces a
report of what changed; that report is then used to update the SQL
store.  That update includes:
- the materialized views ("entids", "idents", and "schema");
- if needed, a subset of the datoms themselves (as flags change).

I've left a TODO for handling attribute retraction in the cases where
it makes sense.  I expect that to be straightforward.

* Review comment: Rename DiffSet to AddRetractAlterSet.

Also adds a little more commentary and a simple test.

* Review comment: Use ToIdent trait.

* Review comment: partially revert "Part 2: Maintain entids separately from idents."

This reverts commit 23a91df9c35e14398f2ddbd1ba25315821e67401.

Following our discussion, this removes the "entids" materialized
view.  The next commit will remove historical idents from the "idents"
materialized view.

* Post: Use custom Either rather than std::result::Result.

This is not necessary, but it was suggested that we might be paying an
overhead creating Err instances while using error_chain.  That seems
not to be the case, but this change shows that we don't actually use
any of the Result helper methods, so there's no reason to overload
Result.  This change might avoid some future confusion, so I'm going
to land it anyway.

Signed-off-by: Nick Alexander <nalexander@mozilla.com>

* Review comment: Don't preserve historical idents.

* Review comment: More prepared statements when updating materialized views.

* Post: Test altering :db/cardinality and :db/unique.

These tests fail due to a Datomic limitation, namely that the marker
flag :db.alter/attribute can only be asserted once for an attribute!
That is, [:db.part/db :db.alter/attribute :attribute] will only be
transacted at most once.  Since older versions of Datomic required the
:db.alter/attribute flag, I can only imagine they either never wrote
:db.alter/attribute to the store, or they handled it specially.  I'll
need to remove the marker flag system from Mentat in order to address
this fundamental limitation.

* Post: Remove some more single quotes from error output.

* Post: Add assert_transact! macro to unwrap safely.

I was finding it very difficult to track unwrapping errors while
making changes, due to an underlying Mac OS X symbolication issue that
makes running tests with RUST_BACKTRACE=1 so slow that they all time
out.

* Post: Don't expect or recognize :db.{install,alter}/attribute.

I had this all working... except we will never see a repeated
`[:db.part/db :db.alter/attribute :attribute]` assertion in the store!
That means my approach would let you alter an attribute at most one
time.  It's not worth hacking around this; it's better to just stop
expecting (and recognizing) the marker flags.  (We have all the data
to distinguish the various cases that we need without the marker
flags.)

This brings Mentat in line with the thrust of newer Datomic versions,
but isn't compatible with Datomic, because (if I understand correctly)
Datomic automatically adds :db.{install,alter}/attribute assertions to
transactions.

I haven't purged the corresponding :db/ident and schema fragments just
yet:
- we might want them back
- we might want them in order to upgrade v1 and v2 databases to the
  new on-disk layout we're fleshing out (v3?).

* Post: Don't make :db/unique :db.unique/* imply :db/index true.

This patch avoids a potential bug with the "schema" materialized view.
If :db/unique :db.unique/value implies :db/index true, then what
happens when you _retract_ :db.unique/value?  I think Datomic defines
this in some way, but I really want the "schema" materialized view to
be a slice of "datoms" and not have these sorts of ambiguities and
persistent effects.  Therefore, to ensure that we don't retract a
schema characteristic and accidentally change more than we intended
to, this patch stops having any schema characteristic imply any other
schema characteristic(s).  To achieve that, I added an
Option<Unique::{Value,Identity}> type to Attribute; this helps with
this patch, and also looks ahead to when we allow retracting
:db/unique attributes.

* Post: Allow retracting :db/ident.

* Post: Include more details about invalid schema changes.

The tests use strings, so they hide the chained errors which do in
fact provide more detail.

* Review comment: Fix outdated comment.

* Review comment: s/_SET/_SQL_LIST/.

* Review comment: Use a sub-select for checking cardinality.

This might be faster in practice.

* Review comment: Put `attribute::Unique` into its own namespace.
2017-03-20 13:18:59 -07:00
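
A minimal sketch of the Option<Unique> idea from the commit above (type and field names are illustrative, not Mentat's exact definitions): keeping uniqueness as its own optional characteristic means retracting :db/unique never has to silently change :db/index.

  // Illustrative only; the real Attribute carries more characteristics.
  #[derive(Debug, Clone, Copy)]
  enum Unique {
      Value,     // :db.unique/value
      Identity,  // :db.unique/identity
  }

  #[derive(Debug)]
  struct Attribute {
      index: bool,            // :db/index
      unique: Option<Unique>, // :db/unique, or None when the attribute is not unique
  }

  fn main() {
      // Uniqueness and indexing are tracked independently, so retracting
      // :db/unique leaves :db/index exactly as the transactor set it.
      let mut attr = Attribute { index: true, unique: Some(Unique::Value) };

      attr.unique = None;   // retract :db/unique ...
      assert!(attr.index);  // ... without silently flipping :db/index.

      println!("{:?}", attr);
  }
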
Richard Newman
bf38105fef (#362) Part 4: handle unknown attributes by expanding type codes. r=nalexander
Also, don't run any SQL at all if an algebrized query is known to return no results.
2017-03-08 17:44:27 -08:00
Richard Newman
e898df8842 Implement basic query limits. (#361) r=nalexander 2017-03-08 17:41:42 -08:00
Richard Newman
70b112801c Implement projection and querying. (#353) r=nalexander
* Add a failing test for EDN parsing '…'.
* Expose a SQLValueType trait to get value_type_tag values out of a ValueType.
* Add accessors to FindSpec.
* Implement querying.
* Implement rudimentary projection.
* Export mentat_db::new_connection.
* Export symbols from mentat.
* Add rudimentary end-to-end query tests.
2017-03-06 14:40:10 -08:00
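
A rough sketch of the SQLValueType idea from the commit above: a trait that maps a ValueType to the integer value_type_tag stored alongside each value in SQLite. The variants and tag numbers here are placeholders, not Mentat's actual codes.

  // Sketch only: variants and tag numbers are illustrative.
  #[derive(Debug, Clone, Copy)]
  enum ValueType {
      Ref,
      Boolean,
      Long,
      String,
      Keyword,
  }

  trait SQLValueType {
      fn value_type_tag(&self) -> i32;
  }

  impl SQLValueType for ValueType {
      fn value_type_tag(&self) -> i32 {
          match *self {
              ValueType::Ref => 0,
              ValueType::Boolean => 1,
              ValueType::Long => 5,
              ValueType::String => 10,
              ValueType::Keyword => 13,
          }
      }
  }

  fn main() {
      // A projector can pick the right Rust type for each returned column
      // by looking at the tag that comes back with the query results.
      println!("tag for Keyword: {}", ValueType::Keyword.value_type_tag());
  }
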
Nick Alexander
f86b24001f Add top-level Conn. Fixes #296. (#342) r=rnewman
* Add top-level `Conn`. Fixes #296.

This is a little different than the API rnewman and I originally
discussed in https://public.etherpad-mozilla.org/p/db-conn-thoughts.
A few notes:

- I was led to make a `Schema` instance the thing that is shared,
  rather than a `db::DB`.  It's possible that queries will want to
  know the current transaction at some point (to prevent races, or to
  query historical data), but that can be a future consideration.

- The generation number just allows for a cheap comparison.  I don't
  care to handle races to transact just yet; the long term plan might
  be to make embedding applications responsible for avoiding races, or
  we might handle queuing transactions and yielding report futures in
  Mentat itself.

- The sharing of the partition maps is a little more subtle than
  expected.  Partition maps are volatile: a successful Mentat
  transaction always advances the :db.part/tx partition, so it's not
  worth passing references around.  This means that consumers must
  clone in order to maintain just a single clone per transaction.

Clean some cruft.

* Review comments.
2017-03-03 15:03:59 -08:00
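
A minimal sketch of the sharing scheme described above, assuming a Conn-like handle built only from std: queries take a cheap (generation, Arc<Schema>) snapshot, and schema-altering transactions install a new Arc and bump the generation, so staleness checks are a single integer comparison.

  use std::sync::{Arc, Mutex};

  // Placeholder; the real Schema maps idents/entids to attributes.
  #[derive(Debug, Default, Clone)]
  struct Schema {
      version: u32,
  }

  // Sketch of a Conn-like handle: queries share the Schema via Arc, and a
  // generation counter lets callers cheaply detect "the schema changed".
  struct Conn {
      metadata: Mutex<(u64, Arc<Schema>)>, // (generation, current schema)
  }

  impl Conn {
      fn new() -> Conn {
          Conn { metadata: Mutex::new((0, Arc::new(Schema::default()))) }
      }

      // A query takes a cheap snapshot: clone the Arc, remember the generation.
      fn current_schema(&self) -> (u64, Arc<Schema>) {
          let guard = self.metadata.lock().unwrap();
          (guard.0, Arc::clone(&guard.1))
      }

      // A transaction that alters the schema installs a new Arc and bumps the
      // generation; comparing two generations is just a u64 compare.
      fn install_schema(&self, schema: Schema) {
          let mut guard = self.metadata.lock().unwrap();
          guard.0 += 1;
          guard.1 = Arc::new(schema);
      }
  }

  fn main() {
      let conn = Conn::new();
      let (gen_before, _snapshot) = conn.current_schema();
      conn.install_schema(Schema { version: 1 });
      let (gen_after, schema) = conn.current_schema();
      assert!(gen_after > gen_before); // cheap staleness check
      println!("generation {} -> schema {:?}", gen_after, schema);
  }
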
Richard Newman
d7f323d15d Wire in the start of querying and error_chain at top level. (#349) r=nalexander 2017-02-27 16:17:25 -08:00
Nick Alexander
dcd9bcb1ce Extract partial storage abstraction; use error-chain throughout. Fixes #328. r=rnewman (#341)
* Pre: Drop unneeded tx0 from search results.

* Pre: Don't require a schema in some of the DB code.

The idea is to separate the transaction applying code, which is
schema-aware, from the concrete storage code, which is just concerned
with getting bits onto disk.

* Pre: Only reference Schema, not DB, in debug module.

This is part of a larger separation of the volatile PartitionMap,
which is modified every transaction, from the stable Schema, which is
infrequently modified.

* Pre: Fix indentation.

* Extract part of DB to new SchemaTypeChecking trait.

* Extract part of DB to new PartitionMapping trait.

* Pre: Don't expect :db.part/tx partition to advance when tx fails.

This fails right now, because we allocate tx IDs even when we shouldn't.

* Sketch a db interface without DB.

* Add ValueParseError; use error-chain in tx-parser.

This can be simplified when
https://github.com/Marwes/combine/issues/86 makes it to a published
release, but this unblocks us for now.  This converts the `combine`
error type `ParseError<&'a [edn::Value]>` to a type with owned
`Vec<edn::Value>` collections, re-using `edn::Value::Vector` for
making them `Display`.

* Pre: Accept Borrow<Schema> instead of just &Schema in debug module.

This makes it easy to use Rc<Schema> or Arc<Schema> without inserting
&* sigils throughout the code.

* Use error-chain in query-parser.

There are a few things to point out here:

- the fine grained error types have been flattened into one crate-wide
  error type; it's pretty easy to regain the granularity as needed.

- edn::ParseError is automatically lifted to
  mentat_query_parser::errors::Error;

- we use mentat_parser_utils::ValueParser to maintain parsing error
  information from `combine`.

* Patch up top-level.

* Review comment: Only `borrow()` once.
2017-02-24 15:33:48 -08:00
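
A minimal sketch of the Borrow<Schema> point above (Schema here is an empty placeholder): a helper bounded on Borrow<Schema> accepts &Schema, Rc<Schema>, and Arc<Schema> alike, with no &* sigils at the call sites.

  use std::borrow::Borrow;
  use std::rc::Rc;
  use std::sync::Arc;

  #[derive(Debug, Default)]
  struct Schema;  // placeholder for the real Schema

  // Accepting Borrow<Schema> means &Schema, Rc<Schema>, and Arc<Schema> all
  // work at the call site without inserting &* throughout the code.
  fn datoms_count<S: Borrow<Schema>>(schema: S) -> usize {
      let _schema: &Schema = schema.borrow();
      0 // a real debug helper would inspect the store here
  }

  fn main() {
      let plain = Schema::default();
      let rc = Rc::new(Schema::default());
      let arc = Arc::new(Schema::default());

      // All three call shapes compile unchanged.
      println!("{} {} {}", datoms_count(&plain), datoms_count(rc), datoms_count(arc));
  }
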
Richard Newman
42f03f55a2 Stub out query algebrizer. 2017-02-15 16:01:22 -08:00
Richard Newman
2e303f4837 Stub out mentat::q_once. (#289) r=nalexander
* Leave a pointer to issue 288.
* Re-export mentat_db::types::DB from mentat_db.
* Parse EDN strings in the query parser.
* Export 'public' API from mentat_query_parser's top level.
* Stub out mentat::q_once.
2017-02-13 10:30:02 -08:00
Richard Newman
fcdf759399 Rename parser_utils to mentat_parser_utils, clean up imports. (#234) r=vporof 2017-02-02 08:18:04 -08:00
Nick Alexander
506c83c160 Implement basic logging infrastructure. (#205) r=nalexander,victorporof
Signed-off-by: Paul Lange <palango@gmx.de>
2017-01-26 10:43:48 -08:00
Richard Newman
2592506288 Implement parsing of simple :find expressions. (#196) r=nalexander
* Test the mentat_query directory on Travis.

* Export common types from edn.

This allows you to write

  use edn::{PlainSymbol,Keyword};

instead of

  use edn::symbols::{PlainSymbol,Keyword};

* Add an edn::Value::is_keyword predicate.

* Clean up query, preparing for query-parser.

* Make EDN keywords and symbols take Into<String> arguments.

* Implement parsing of simple :find lists.

* Rustfmt query-parser. Split find and query.

* Review comment: values_to_variables now returns a NotAVariableError on failure.

* Review comment: rename gimme to to_parsed_value.

* Review comment: add comments.
2017-01-25 14:06:19 -08:00
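
A minimal sketch of the Into<String> point above (PlainSymbol here is a stand-in, not the real edn type): a constructor bounded on Into<String> accepts both &str and String without extra conversions at the call site.

  // Hypothetical constructor in the spirit of "take Into<String> arguments".
  #[derive(Debug, PartialEq)]
  struct PlainSymbol(String);

  impl PlainSymbol {
      fn new<T: Into<String>>(name: T) -> PlainSymbol {
          PlainSymbol(name.into())
      }
  }

  fn main() {
      let from_str = PlainSymbol::new("?x");                   // &str
      let from_string = PlainSymbol::new(String::from("?x"));  // String
      assert_eq!(from_str, from_string);
      println!("{:?}", from_str);
  }
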
Brian Grinstead
71a30fe69f Add beginning of web server for the serve subcommand (#159) 2017-01-13 11:46:00 -08:00
Richard Newman
a152e60040 Read EDN keywords and symbols as rich types. Fixes #154. r=nalexander 2017-01-12 09:09:48 -08:00
Brian Grinstead
cd9517e5fd Run cargo fmt. r=me 2017-01-10 10:54:37 -08:00
Brian Grinstead
6d10774fc8 Move the bin to src and take on clap dependency for command line arg parsing. Fixes #150. r=rnewman 2017-01-10 10:53:34 -08:00
Richard Newman
daddfd3e0f Add query sub-crate, implementing more of the beginnings of the query language. 2017-01-09 12:31:57 -08:00
Richard Newman
476f04e27b Implement a rudimentary Keyword struct and the beginnings of ident/entid. 2017-01-09 12:31:56 -08:00
Richard Newman
7a4c75ba44 Rename to Project Mentat (src). 2017-01-06 17:20:20 -08:00
Brian Grinstead
8a52015422 Take on rusqlite dependency. Fixes #148. r=rnewman 2017-01-06 10:24:04 -06:00
Brian Grinstead
9b8257a725 Create a new crate for the query parser. Fixes #138. r=rnewman
Starting to work out the project layout for sub-crates.  The crate inside query-parser/ is "datomish-query-parser" and the core code in src/ depends on it.
2016-12-16 18:43:47 -08:00
Brian Grinstead
5ac47fd6ff Add a stub CLI tool and run tests on it. Fixes #136. r=rnewman 2016-12-16 14:26:10 -08:00
Brian Grinstead
973c32ff77 Update test boilerplate for running on travis (#134). r=rnewman
* Include a local and external test.
* Add license blocks.
2016-12-16 11:50:08 -08:00
Richard Newman
f8682a65fa Initial Rust commit.
If you want to go fast, go alone. If you want to go far, go together.
2016-12-16 10:39:08 -08:00
Richard Newman
cbd278dd7e Remove Clojure and JS application code. 2016-12-16 10:32:23 -08:00
Richard Newman
9cc26616a9 Implement unified setup/bootstrapping, bootstrapping new databases in a single transaction. Fixes #125. 0.3.7. 2016-12-16 10:25:17 -08:00
Richard Newman
8e16bee201 Pass existing idents to datoms->schema-fragment, allowing the 'upgrade' of an existing ident to an attribute. 2016-12-16 10:25:17 -08:00
Richard Newman
103a86f440 Add a :none migration for schema management. Fixes #113. r=grisha
This allows for code to run before and after a schema fragment is
added for the first time.

The anticipated use for this is twofold:

1. To do initial setup, e.g., defining global entities.
2. To 'adopt' unmanaged attributes already defined in the store.

This 'pre' would manually alter or retract attributes so that the
transact of the new schema datoms can complete.

For example, if properties :foo/bar and :foo/baz will be unchanged,
but :noo/zob needs to change from a string to an integer, the :none
pre-function can alter the ident, and the :none post-function can
migrate and clean up.
2016-11-23 17:06:04 -08:00
Richard Newman
f5f57da113 Rename datomish.places.import to datomish.places.importer to silence warnings. 2016-11-23 09:02:10 -08:00
Richard Newman
6f006f247d Bump to 0.3.1 to bump a dependency. 2016-11-22 11:40:37 -08:00
Richard Newman
99e7fafd1b Change license to Apache. Fixes #74. 2016-11-22 11:40:37 -08:00
Richard Newman
7df48d0599 Add missing Tufte stub function. 2016-11-17 13:28:54 -08:00
Richard Newman
d568977fa9 Implement schema management proposal. Fixes #95. 2016-11-16 21:04:13 -08:00
Richard Newman
451f13a053 Add :db.schema/version and :db.schema/attribute. 2016-11-16 21:04:13 -08:00
Richard Newman
3212be565c Allow callers to run functions within the scope of a transaction.
This generalizes the transactor loop to allow callers to run
an arbitrary function within an `in-transaction!` body.

Combined with exposing `<report-transact-tx-data!`, this allows
an admittedly sophisticated consumer to conditionally query and
transact in a consistent way -- for example, cleaning up inconsistent
data then transacting a new schema version.
2016-11-16 21:04:13 -08:00
Richard Newman
bd0a56e501 Expose datomish.schema/validate-schema so that schema management can use it. 2016-11-16 21:04:13 -08:00
Richard Newman
5fa26c58a8 Expose id-literal? in the API. 2016-11-16 21:04:13 -08:00
Richard Newman
8e6f8399ae Add <??, a null-safe variant of <?. 2016-11-16 21:04:13 -08:00
Richard Newman
7e50528788 Add repeated-keys utility. 2016-11-16 21:04:13 -08:00
Richard Newman
30023dd939 Move test helpers so they're not included in the built output. 2016-11-16 21:04:12 -08:00
Richard Newman
8ad434574e Remove dependency on Tufte. Fixes #109. 2016-11-16 21:03:59 -08:00
Richard Newman
9d361055d3 Implement schema alteration. Fixes #78.
Altering uniqueness and cardinality attributes works, with the exception
of enabling uniqueness from nothing.

:db/noHistory and :db/isComponent changes are implemented but untested,
and aren't really supported by Datomish anyway.
2016-10-24 20:01:44 -07:00
Richard Newman
46269fe720 Add db.alter/attribute to the bootstrap schema. 2016-10-24 20:01:44 -07:00
Richard Newman
9d81abace5 Implement ident renaming. Fixes #103. 2016-10-24 20:01:44 -07:00
Richard Newman
3cfccc4b81 Implement ground. Fixes #99. 2016-10-19 12:54:05 -07:00
Nick Alexander
3670c5cce7 Review comment: save allocations when evolving. 2016-10-14 10:20:43 -07:00
Nick Alexander
679ab8cf7d Review comment: explain why upserts between generational steps don't conflict. 2016-10-14 10:20:43 -07:00
Nick Alexander
caa9d2d7cb Review comment: prefer dissoc and update to destructuring. 2016-10-14 10:20:43 -07:00
Nick Alexander
00c72f9188 Review comment: fix "Like {...}" map examples. 2016-10-14 10:20:43 -07:00
Nick Alexander
885a816812 Review comment: style nits. 2016-10-14 10:20:43 -07:00
Nick Alexander
39c909ec32 Rewrite resolve-id-literals to use bulk <avs. (#88)
The metaphor we use is that of "evolution", where each "evolutionary
step" contains a number of different "generations".  Entities in the
process of being resolved are increasingly "evolved" into simpler
generations, until no further evolution is possible.
2016-10-14 10:20:43 -07:00
Nick Alexander
1c83287fcf Pre: Make <avs handle fulltext datoms correctly.
The test would fail because we would have an [a v] pair with a string
value, but we were looking for the fulltext rowid in <avs.  Using
all_datoms correctly looks up the string value, at the cost of crippling
the speed of <avs.
2016-10-14 10:20:43 -07:00
Nick Alexander
60c7db4301 Pre: Make testing consistent by sorting fulltext values before inserting.
This sorts fulltext values inserted in a single transaction, not across
transactions.  This makes the rowids assigned in the fulltext_values
table internally consistent, even as the order of entities and datoms
changes (as the transaction applying algorithm evolves over time).  The
test changes simply make the fulltext values sort easily.

In theory, these fulltext values could be very large, and sorting might
be very expensive.  In practice, we expect values to differ in their
first few characters, so that this is efficient (i.e., proportional to
the number of fulltext values inserted and not their size).
2016-10-14 10:20:43 -07:00
Nick Alexander
bc011bbf43 Pre: Add util/group-by-kv. 2016-10-14 10:20:43 -07:00
Richard Newman
feebfd09da Generate known type for the entity in a fulltext expression, and add a test. Fixes #85. 2016-10-13 18:19:29 -07:00
Nick Alexander
a4dd7e4e9c Review comment: make a large-ish dropping buffer for JS listen! consumers. 2016-10-13 16:11:22 -07:00
Nick Alexander
032bfafec2 Review comment: fail pending transactions after closing connection.
This is pretty difficult to test robustly, but here's a stab at it.
2016-10-13 16:11:22 -07:00
Nick Alexander
f02d508370 Review comment: ensure <transact! after <close is rejected. 2016-10-13 16:11:22 -07:00
Nick Alexander
b20c70fc2a Review comment: ensure report is non-nil after in-transaction!. 2016-10-13 16:11:22 -07:00
Nick Alexander
cea0e3d60f Review comment: return pair-chan; accept a result chan and close? flag. 2016-10-13 16:11:22 -07:00
Nick Alexander
e5917406b4 Add {un}listen{-chan}! to connection. (#61) 2016-10-13 16:11:16 -07:00
Nick Alexander
a8ad79d0e6 Make <transact! run in a critical section. (#80) 2016-10-11 20:32:35 -07:00
Nick Alexander
2081ca4563 Pre: Add unlimited-buffer and unblocking-chan?. 2016-10-11 20:32:35 -07:00
Nick Alexander
e1b1abe2de Pre: clarify comments. 2016-10-11 20:32:35 -07:00
Richard Newman
c36be57018 Expose a 'tempid' function on transaction results, because JS object lookup doesn't work for TempIds. 2016-10-07 20:12:17 -07:00
Richard Newman
5b6000003d Support order-by query option from JS. 2016-10-07 20:12:17 -07:00
Richard Newman
e89544beba Implement all four find specs. Fixes #38. r=nalexander 2016-10-07 11:02:35 -07:00
Richard Newman
021f2be620 Review comment: add comment about cljify. 2016-10-05 14:07:07 -07:00
Richard Newman
61757e271c Review comment: use datomish.api where possible. 2016-10-05 14:06:36 -07:00
Richard Newman
c7d0a8596b Part 2: extend 'cljify' implementation to round-trip records like TempId. 2016-10-05 12:54:26 -07:00
Richard Newman
0b6ac81ed5 Part 1: extend 'db' JS object with more useful methods. 2016-10-05 12:53:57 -07:00
Richard Newman
b777445ebf Sort variable sets to make tests consistent across platforms. 2016-10-04 11:38:14 -07:00
Nick Alexander
3cd64fb4d8 Review comments. 2016-09-30 17:00:27 -07:00
Nick Alexander
611d44fcce Process lookup-refs in batches. Fixes #25.
This uses a common table expression and multiple SQL calls rather than a
temporary table, since transactions with huge numbers of distinct
lookup-refs are likely to be very rare.

We mark lookup-refs with `lookup-ref`, which is a little awkward because
binding `(let [[a v] lookup-ref] ...)` doesn't directly work, but avoids
some ambiguity present in Datomic and DataScript around interpreting
lookup-refs as multiple value lists.  (Which bit the tests in an earlier
version of this patch!)
2016-09-30 16:47:04 -07:00
Nick Alexander
20531c1789 Pre: Don't insert nil tx where it should not be. 2016-09-30 16:47:04 -07:00
Nick Alexander
c46f0eb8ae Part 2: Get rid of {0, 1} -> {2, 3} mapping for added/added0. Fixes #28.
Now that we're copying from tx_lookup_before -> tx_lookup_after, we don't
need to avoid duplicating rows.
2016-09-30 16:47:04 -07:00
Nick Alexander
da1250d210 Part 1: Separate tx_lookup into tx_lookup_before and tx_lookup_after. 2016-09-30 16:47:04 -07:00
Richard Newman
a7d6a37cfc Update comment in cc.cljc. 2016-09-29 15:49:30 -07:00
Richard Newman
6ab93208cb Part 2: implement complex 'or' translation. Fixes #57. r=nalexander
We implement sql-projection-for-simple-variable-list to allow us to add
a projection to subqueries.
2016-09-29 15:46:15 -07:00
Richard Newman
b9b9c37dfa Part 1: pass in :select when creating a partial subquery from a CC. 2016-09-29 15:44:16 -07:00
Richard Newman
1296b8090f Allow sets of attributes in fulltext expressions. Fixes #54. r=nalexander 2016-09-26 16:34:37 -07:00
Richard Newman
9587311412 Include deps.cljs giving externs for Node.js consumers; normalize build output.
Leiningen projects that use cljsbuild and depend on Datomish will
automatically include the externs.
2016-09-22 16:24:37 -07:00
Richard Newman
d0a04a5e56 Review comment: extracted shared go-promise. 2016-09-22 16:24:37 -07:00
Richard Newman
17d7eaec7b Add a babelified test file, Webpack the add-on, and make the JS API work.
We concatenate a simple setTimeout monkeypatch onto the add-on itself.
2016-09-22 15:59:15 -07:00
Richard Newman
360f7622e8 Add handling of simple schemas. Fixes #53. 2016-09-22 15:59:15 -07:00
Richard Newman
4f37a86039 Use cljify in promise-sqlite. 2016-09-22 15:59:15 -07:00
Richard Newman
ea027e8cea Implement cljify. 2016-09-22 15:59:15 -07:00
Richard Newman
1d53d547b8 Externs. 2016-09-22 12:43:36 -07:00
Richard Newman
330433a45c Add externs file for Node's use of promise_sqlite. 2016-09-22 12:43:35 -07:00
Nick Alexander
1a30306314 Move datomish.api into exported namespace. 2016-09-19 12:03:09 -07:00
Richard Newman
b5aec2e890 Move src-node and src-browser into subdirectories of src. 2016-09-09 12:07:03 -07:00
Richard Newman
418bb34d57 Add is-node?. 2016-09-08 19:11:44 -07:00
Richard Newman
5ccc725b56 Flesh out JS API. 2016-09-08 19:11:44 -07:00
Richard Newman
9e4e95ce51 Default SQLite's user_version to zero.
I saw nil here with Sqlite.jsm.
2016-09-08 19:11:44 -07:00
Richard Newman
cc25ce33e2 Move platform-specific code into src-node. 2016-09-08 19:11:44 -07:00
Richard Newman
9dbda3d9d8 Pre: remove exec_repl.cljc. 2016-09-08 19:04:15 -07:00