[meta] Materialized views and caches #161
One of the potential value propositions for Mentat is semi-automatic materialization of views and caches, offering applications both flexible storage and fast querying.
I see applications working at three levels:
The most general, and also the slowest, is direct query access to the database. This would be used for development, for non-speed-critical queries, and for queries where speed is willingly traded for space.
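To make that level concrete, a direct query is just a find-rel expression handed to the query engine on each call. Below is a minimal sketch; the `:page/url` and `:page/title` attributes are invented vocabulary for illustration, not anything defined by Mentat.

```rust
// Level 1: the application hands a find-rel query to the query engine on
// every call, paying parsing, algebrizing, and SQL generation each time.
// The :page/* attributes are invented for illustration.
const TOP_PAGES: &str = r#"
    [:find ?url ?title
     :where
     [?page :page/url ?url]
     [?page :page/title ?title]]
"#;

fn main() {
    // In a real application this string would go straight to the store's
    // query entry point; here we only show the shape of the query.
    println!("{}", TOP_PAGES);
}
```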
The next is materialization of views on disk. This approach would be taken for queries that need to be faster, but either need to interface with other query parts, or don't need to be in memory.
Both simple attribute lookups and complex queries could be materialized; the difference is simply in the width of the tables and the columns they have.
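As a sketch of what those on-disk tables might look like, assuming SQLite via rusqlite (which Mentat builds on), with invented table names and columns:

```rust
use rusqlite::{Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;

    // A narrow table materializing a simple attribute lookup
    // (entity id -> url), and a wider one materializing a more complex
    // query; the difference is only in the columns.
    conn.execute_batch(
        "CREATE TABLE view_page_url (
             entid INTEGER PRIMARY KEY,
             url   TEXT NOT NULL
         );
         CREATE TABLE view_page_summary (
             entid  INTEGER PRIMARY KEY,
             url    TEXT NOT NULL,
             title  TEXT,
             visits INTEGER NOT NULL DEFAULT 0
         );",
    )?;

    // Other query parts can join against these tables directly instead of
    // walking the general datom storage.
    Ok(())
}
```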
The final level is in-memory caching. We already do this for schema data, because we need it synchronously to algebrize queries and process transactions. Applications, too, might need fast synchronous access to a subset of data — lookup tables, URLs, scores. This approach works best for lookups that need to be very fast but don't take up too much memory.
Interaction with cached data should explicitly declare which kind of cache is expected: write-through, write-back, or write-around.
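As a rough sketch of what declaring that expectation might look like, in plain Rust with invented names (this is not Mentat's API):

```rust
use std::collections::HashMap;

/// How writes interact with the cache. Names are invented for illustration.
#[derive(Clone, Copy, Debug)]
enum CachePolicy {
    /// Write to the cache and the underlying store synchronously.
    WriteThrough,
    /// Write to the cache now; flush to the store later.
    WriteBack,
    /// Write to the store only, invalidating any cached entry.
    WriteAround,
}

/// A tiny in-memory cache of a single string-valued attribute, keyed by entity id.
struct AttributeCache {
    policy: CachePolicy,
    values: HashMap<i64, String>,
    pending: Vec<(i64, String)>, // writes not yet flushed (write-back only)
}

impl AttributeCache {
    fn new(policy: CachePolicy) -> Self {
        AttributeCache { policy, values: HashMap::new(), pending: Vec::new() }
    }

    /// Synchronous lookup: the whole point of the in-memory level.
    fn get(&self, entid: i64) -> Option<&str> {
        self.values.get(&entid).map(String::as_str)
    }

    /// `store` stands in for the durable database.
    fn put(&mut self, entid: i64, value: String, store: &mut Vec<(i64, String)>) {
        match self.policy {
            CachePolicy::WriteThrough => {
                store.push((entid, value.clone()));
                self.values.insert(entid, value);
            }
            CachePolicy::WriteBack => {
                self.pending.push((entid, value.clone()));
                self.values.insert(entid, value);
            }
            CachePolicy::WriteAround => {
                store.push((entid, value));
                self.values.remove(&entid);
            }
        }
    }

    /// Flush write-back entries to the store.
    fn flush(&mut self, store: &mut Vec<(i64, String)>) {
        store.append(&mut self.pending);
    }
}

fn main() {
    let mut store = Vec::new();
    let mut cache = AttributeCache::new(CachePolicy::WriteBack);
    cache.put(42, "https://example.com".to_string(), &mut store);
    assert!(store.is_empty());        // the store has not been touched yet
    assert!(cache.get(42).is_some()); // but reads are synchronous and fast
    cache.flush(&mut store);
    assert_eq!(store.len(), 1);
}
```

Write-through keeps cache and store consistent at the cost of a synchronous write; write-back favors write latency but needs an explicit flush; write-around avoids caching values that may never be read.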
Exploring materialized views will involve:

- Defining each view as a `find-rel` query. This definition will need to live alongside the data.
- Reusing the algebrized query across writes (as `q_many` does) to avoid the overhead of algebrizing on each write.
- Projecting results directly into `INSERT`/`UPDATE` statements, rather than round-tripping through in-memory projection (see the sketch below).
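Pulling those pieces together, here is a rough sketch of the write path against SQLite via rusqlite. The `ViewDefinition` struct, table names, and SQL are assumptions made up for illustration, and the prepared statement merely stands in for reusing an algebrized query.

```rust
use rusqlite::{params, Connection, Result};

/// A materialized view: the defining query lives alongside the data
/// (here, in a metadata table), and a backing table holds its rows.
/// Names and shapes are invented for illustration.
struct ViewDefinition {
    name: &'static str,
    query: &'static str, // the find-rel query that defines the view
}

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;

    let view = ViewDefinition {
        name: "view_page_url",
        query: "[:find ?e ?url :where [?e :page/url ?url]]",
    };

    // The definition needs to live alongside the data so the view can be
    // re-derived and kept up to date across restarts.
    conn.execute_batch(
        "CREATE TABLE materialized_views (name TEXT PRIMARY KEY, query TEXT NOT NULL);
         CREATE TABLE view_page_url (entid INTEGER PRIMARY KEY, url TEXT NOT NULL);",
    )?;
    conn.execute(
        "INSERT INTO materialized_views (name, query) VALUES (?1, ?2)",
        params![view.name, view.query],
    )?;

    // Prepare the write once (standing in for reusing the algebrized query),
    // then write each matching result straight into the view table instead
    // of projecting rows into memory first.
    let mut insert = conn.prepare_cached(
        "INSERT INTO view_page_url (entid, url) VALUES (?1, ?2)
         ON CONFLICT(entid) DO UPDATE SET url = excluded.url",
    )?;
    for (entid, url) in [(65_i64, "https://example.com"), (66, "https://mozilla.org")] {
        insert.execute(params![entid, url])?;
    }

    Ok(())
}
```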
Exploring caching will involve:
Thoughts, crew?