Per-append overhead is too high #62
On a chain with a single FLU, the per-append overhead is far too high for production use. The current write-once enforcement and checksum management tests all pass, but basho_bench performance measurements show that serialization in both the process structure and the eleveldb iterator use is too costly.
For example, use this basho_bench config, which drives 25 concurrent clients appending 4 KByte chunks to a single file prefix (i.e., the worst case for serialization).
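The exact config is not reproduced here; the following is a minimal sketch of what such a config might look like, assuming a Machi append driver module named `basho_bench_driver_machi` and an `append` operation exposed by it (both names are assumptions, not confirmed by this issue):

```erlang
%% Sketch of a basho_bench config for the worst-case serialization load.
%% Driver module name, operation name, and key-to-prefix mapping are
%% assumptions; this is not the exact config used for these measurements.
{mode, max}.
{duration, 10}.                         % run for 10 minutes
{concurrent, 25}.                       % 25 concurrent append clients
{driver, basho_bench_driver_machi}.     % hypothetical Machi append driver
{operations, [{append, 1}]}.            % append-only workload
{key_generator, {uniform_int, 1}}.      % every client hits the same file prefix
{value_generator, {fixed_bin, 4096}}.   % 4 KByte chunks
```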
Here is the result of running this on a TRIM-enabled external Thunderbolt+SSD on my MacBook. Units are seconds (elapsed & window) or microseconds (all other columns).
Using 1 MByte chunks, the same load (except that the key generator spreads appends across 30 file prefixes) sustains about 340 MByte/sec of throughput. That is good news, since it is roughly the maximum throughput of the Thunderbolt+SSD device combination, but this workload also avoids the main serialization bottleneck(s) in the workload described in detail above.
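Under the same assumptions as the sketch above, the higher-throughput variant would differ only in chunk size and key generator:

```erlang
%% Same sketch as above, except:
{key_generator, {uniform_int, 30}}.        % spread appends across 30 file prefixes
{value_generator, {fixed_bin, 1048576}}.   % 1 MByte chunks
```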