Adding forward iteration (Really simple implementation). #36
No description provided.
Hi!

I made a patch to add iteration in a similar fashion to how `ets:next/2` works, mainly because I need it, but I think it might be a good idea to add this, or something like it. The implementation is rather simple and does not take advantage of the keys being next to each other from call to call; but I created the type `move()`, which currently just contains a key, and which could later be extended to add the means to implement a more efficient solution (if you guys think that's something interesting, I can try to implement it that way).

Btw, thanks a lot for the library, it is really useful, and being 100% Erlang really helped when deploying :) .
Thanks. Looks good; what are you using it for?
Do you mean the database? It is for a project that computes rankings over games played in an online video game (we are already storing hundreds of millions of games, and adding a few thousand per second).
Btw, I've tested this pull request further, and even though the results themselves seem to be OK, the process using it started getting odd messages once it tries to query the key next to the last one: it returns `end_of_table` as it should, but then receives messages about unrelated node-down notifications, and a `'$cast'` msg. Am I missing something about how `fold_range` should be used?

No, you're not missing anything; maybe you can tell me how to reproduce? The teardown after a completed iteration (even for just limit=1) should not cause warning messages.
It does not happen always; I only started to get them after a while under heavy usage, and only got a few messages. I also got other problems: after RAM usage stays fine for a while, sometimes even hours, it suddenly stops GC'ing binaries and quickly consumes all of the machine's memory; while profiling I saw that when this happens there is a process accumulating hundreds of thousands of messages. These issues might or might not be related, I guess.
I'll try to find out how to reproduce this in a simple way.
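For what it's worth, this is roughly the kind of check I was running when I noticed the mailbox growth; the snippet is from memory and only uses the standard `erlang:process_info/2`:

```erlang
%% Paste into an attached shell: list the N processes with the largest mailboxes.
TopMailboxes = fun(N) ->
    Sizes = [ {Len, Pid}
              || Pid <- erlang:processes(),
                 {message_queue_len, Len} <-
                     [erlang:process_info(Pid, message_queue_len)],
                 Len > 0 ],
    lists:sublist(lists:reverse(lists:sort(Sizes)), N)
end.
%% TopMailboxes(10).
```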
When this happened to me, I had 3 different hanoidb stores running at the same time (does this violate some constraint?) and was randomly inserting into them, then taking the smallest elements (using a fold of size 1 for that; a rough sketch of this setup is below).
Also, this happened using the latest OTP version; might it be that some component from one of its dependencies now behaves differently? (I had to modify edown because it no longer compiled, but that seems unrelated.)
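The sketch of that setup: store names, key sizes, and iteration counts are made up, and the `#key_range{}` fields and `hanoidb:open/1`/`put/3` signatures are from memory, so treat it as the shape of the workload rather than an exact reproduction script:

```erlang
%% Rough sketch of the workload: three hanoidb stores, random writes, and
%% the smallest key of each store read back with a limit=1 fold_range
%% after every write. All names and constants are illustrative.
-module(repro_sketch).
-include_lib("hanoidb/include/hanoidb.hrl").
-export([run/0]).

run() ->
    Trees = [ begin
                  {ok, T} = hanoidb:open("store_" ++ integer_to_list(I)),
                  T
              end || I <- lists:seq(1, 3) ],
    loop(Trees, 0).

loop(Trees, N) when N >= 1000000 ->
    [ hanoidb:close(T) || T <- Trees ],
    ok;
loop(Trees, N) ->
    %% insert a random key into a randomly chosen store ...
    Tree = lists:nth(rand:uniform(length(Trees)), Trees),
    ok = hanoidb:put(Tree, crypto:strong_rand_bytes(16), <<N:64>>),
    %% ... then take the smallest element of every store with a fold of size 1
    _ = [ smallest(T) || T <- Trees ],
    loop(Trees, N + 1).

smallest(Tree) ->
    Range = #key_range{ from_key = <<>>, from_inclusive = true,
                        to_key = undefined, to_inclusive = false,
                        limit = 1 },
    hanoidb:fold_range(Tree, fun(K, _V, _Acc) -> K end, end_of_table, Range).
```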