Sync needs to operate over a "mentat transaction", not just a "db transaction".
This shuffle allows internal mentat crates to consume InProgress, which models
the concept of a "mentat transaction".
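As a rough sketch of what consuming InProgress looks like (method names
from memory of mentat's public API; treat this as illustrative, not
definitive):

```rust
use mentat::Store;

// A minimal sketch, assuming mentat's Store/InProgress API: an
// InProgress is the "mentat transaction". Several transacts can
// happen inside it, and nothing is durable until `commit`.
fn main() {
    let mut store = Store::open("").expect("in-memory store");

    let mut in_progress = store.begin_transaction().expect("begun");
    in_progress
        .transact("[{:db/ident :person/name
                     :db/valueType :db.type/string
                     :db/cardinality :db.cardinality/one}]")
        .expect("schema transacted");
    in_progress.commit().expect("committed");
}
```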
This was a work-around for Tolstoy, which couldn't gracefully handle
syncing a store with a bootstrap transaction. Tolstoy now handles
that single transaction, so this is no longer necessary.
This also patches our CI test script to run "--features sqlcipher"
tests only on sub-crates that expose this feature (i.e. those that
themselves depend on rusqlite).
For some reason, the converted doc test fails on Rust 1.25.0 but
passes on other Rust versions. For simplicity, just convert it into a
regular test.
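The shape of that conversion, with a made-up `double` function standing
in for the real code under test:

```rust
// Before: a doc test in a doc comment, executed by the doc-test
// harness (`cargo test --doc`):
//
// /// ```
// /// assert_eq!(double(2), 4);
// /// ```
pub fn double(x: i32) -> i32 {
    x * 2
}

// After: the same assertion as a regular #[test], sidestepping the
// doc-test harness (and its behavior under Rust 1.25.0) entirely.
#[cfg(test)]
mod tests {
    use super::double;

    #[test]
    fn double_doubles() {
        assert_eq!(double(2), 4);
    }
}
```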
This is ready for Android Rust-y components: it no longer references Mentat.
The standalone toolchains are installed into
$ANDROID_NDK_TOOLCHAIN_DIR/arch-$ANDROID_NDK_API_VERSION.
This leverages JNA to test the Android SDK on the host machine using
Robolectric, which is significantly faster and easier to debug than
the equivalent on-device instrumentation tests.
We'll still want instrumentation smoke tests, but they won't need to
cover the entire range of the Android SDK.
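For context on how JNA reaches the Rust side on the host JVM: the
native library only needs to expose plain, un-mangled C symbols from a
cdylib. A hypothetical export (not Mentat's actual FFI surface):

```rust
// JNA loads the compiled cdylib on the host machine and binds
// functions by symbol name, so the Rust side just exposes a C ABI.
// `example_add` is an invented illustration.
#[no_mangle]
pub extern "C" fn example_add(a: i32, b: i32) -> i32 {
    a + b
}
```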
Locally, I witnessed very slow tests. Profiling with VisualVM
revealed a lot of time spent in `wait`.
Digging in, we were trying to be clever with a `wait(1000)`/`notify`
mechanism. However, there were never multiple threads in play, so the
waiter wasn't yet waiting when `notify` was invoked; the notification
was lost, and we always timed out. I think this never worked, and
using a bare `wait()` would have revealed that.
Anyway, `CountDownLatch` maintains the one bit of state (was I
notified?) and generalizes smoothly to when we do have multiple
threads.
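To illustrate the latch semantics (sketched in Rust here; the actual
fix uses Java's `CountDownLatch`): a one-shot channel remembers a
signal sent before anyone waits, which is exactly the bit of state the
`wait/notify` version lost.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (done_tx, done_rx) = mpsc::channel::<()>();

    let worker = thread::spawn(move || {
        // Signal completion. Unlike `notify`, this is not lost if the
        // other side has not started waiting yet.
        done_tx.send(()).expect("receiver alive");
    });

    // Returns immediately if the signal already arrived; otherwise
    // blocks until it does. No timeout needed.
    done_rx.recv().expect("sender alive");
    worker.join().unwrap();
}
```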
Being able to derive a partition map from partition definitions and
the current state of the world (transactions), segmented by timelines,
is useful because it frees us from keeping materialized partition maps
up-to-date: at that point there's no need for materialized partition
maps at all. This comes in very handy when we start moving chunks of
transactions off of our mainline. The alternative to this work would
be materializing partition maps per timeline, growing support for
incremental "backwards updates" of the materialized maps, and so on.
Our core partitions are defined in the 'known_parts' table during
bootstrap, and what used to be the 'parts' table is now a generated
view that operates over transactions to figure out each partition's
current index. 'parts' is defined for the main timeline; querying
parts for other timelines, or for particular timeline+tx combinations,
will look similar.
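A sketch of the derivation such a view performs, against an invented
schema; `known_parts(part, start, end)` and `transactions(e, tx)` are
illustrative stand-ins, not Mentat's actual definitions:

```rust
use rusqlite::{params, Connection};

// Derive a partition's next index from the transaction log instead of
// maintaining a materialized `parts` table: the next index is one past
// the highest entity id allocated in the partition's range, falling
// back to the partition's start when nothing has been allocated yet.
fn next_index(conn: &Connection, part: &str) -> rusqlite::Result<i64> {
    conn.query_row(
        "SELECT COALESCE(MAX(t.e) + 1, p.start)
         FROM known_parts p
         LEFT JOIN transactions t ON t.e >= p.start AND t.e < p.end
         WHERE p.part = ?
         GROUP BY p.start",
        params![part],
        |row| row.get(0),
    )
}
```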