equal to the number of cores reported by Erlang (info.scheduler_threads), which
the BEAM runtime either determines automatically or takes from the +S flag. The
minimum num_queues is 2, so the minimum number of workers is 4. The maximum
number of workers is ASYNC_NIF_MAX_WORKER_QUEUE_SIZE (currently set to 128),
which would only be reached with 64 cores (or by setting +S 64:64 at startup).
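A minimal sketch of that sizing arithmetic, assuming the two-workers-per-queue
ratio the numbers above imply (function names are illustrative, not the actual
async_nif code):

    #define ASYNC_NIF_MAX_WORKER_QUEUE_SIZE 128

    /* one queue per scheduler thread/core, but never fewer than 2 */
    static unsigned int num_queues(unsigned int scheduler_threads)
    {
        return scheduler_threads < 2 ? 2 : scheduler_threads;
    }

    /* two workers per queue, capped at the compile-time maximum */
    static unsigned int num_workers(unsigned int scheduler_threads)
    {
        unsigned int n = 2 * num_queues(scheduler_threads);
        return n > ASYNC_NIF_MAX_WORKER_QUEUE_SIZE
            ? ASYNC_NIF_MAX_WORKER_QUEUE_SIZE   /* reached at 64 cores */
            : n;
    }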
In the worst case the request queue remains full and we loop between the NIF
and Erlang forever, trying over and over to enqueue the request. If that
happens we shouldn't take schedulers offline, as the NIF calls are fast, and
we shouldn't run out of memory, as that is bounded. CPU will show a lot of
activity, but progress will continue in Erlang.
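A hedged sketch of that bounce-and-retry shape on the NIF side
(async_nif_enqueue_req stands in for the real enqueue function; returning 0
when the queue is full or shutting down matches the behavior described later):

    #include "erl_nif.h"

    struct async_nif_req;                         /* opaque request block */
    extern int async_nif_enqueue_req(struct async_nif_req *req);

    static ERL_NIF_TERM
    enqueue_or_bounce(ErlNifEnv *env, struct async_nif_req *req)
    {
        if (!async_nif_enqueue_req(req))          /* queue full, try again */
            return enif_make_atom(env, "eagain"); /* Erlang caller re-invokes */
        return enif_make_atom(env, "ok");         /* worker replies when done */
    }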
you pass into it; that would result in a non-zero (i.e. true) test when in
fact it's a problem if the argument isn't passed completely (however unlikely
that is).
enif_alloc_env() requires a matching enif_free_env() later on, which I wasn't
doing; with that added, memory stays steady in test runs.
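The shape of the fix, as a minimal sketch (the worker hand-off is elided; the
enif_* calls are the real ErlNifEnv API):

    #include "erl_nif.h"

    static void example(ErlNifEnv *caller_env, ERL_NIF_TERM arg)
    {
        ErlNifEnv *msg_env = enif_alloc_env();
        ERL_NIF_TERM copy = enif_make_copy(msg_env, arg); /* owned by msg_env */
        /* ... hand (msg_env, copy) to a worker, send the reply, then ... */
        (void)copy; (void)caller_env;
        enif_free_env(msg_env); /* the call that was missing */
    }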
on the nif_unload path. Free up resources owned by wterl.c when unloading.
Continue to evolve the build script. Add to khash the ability to create a
hash that maps from a pointer to a value. There is still a SEGV due to a race
in wterl.c:do_unload() which needs to be addressed.
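For the pointer-keyed hash, one way to get that effect with stock khash is to
store the pointer as a 64-bit integer key; this sketch is an assumption about
the approach, not the actual patch to khash.h:

    #include <stdint.h>
    #include "khash.h"

    KHASH_MAP_INIT_INT64(ptr2val, void *)   /* pointer-sized integer keys */

    static void example(void *key, void *value)
    {
        khash_t(ptr2val) *h = kh_init(ptr2val);
        int ret;
        khiter_t it = kh_put(ptr2val, h, (khint64_t)(uintptr_t)key, &ret);
        kh_value(h, it) = value;
        kh_destroy(ptr2val, h);
    }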
memory allocated, zero'ed and free'd was consistent. Skip free'ing the async
environment, as Erlang will free that for us when we no longer reference it.
Fix the memory leak (or at least one of them) by no longer copying the URI
into the hash table.
least 2 queues regardless. Fix a few race conditions (h/t Sue from WiredTiger
for some nice work) and cherry-pick (for now) a commit that fixes a bug I
triggered and Keith fixed (in < 10 min of the report) related to WiredTiger
stats. Ensure that my guesstimate for session_max is no larger than WiredTiger
can manage. Continue to fiddle with the build script.
for unused API calls. Add a fifo_q_full call to hide the details of that.
Alloc the work queues along with the async_nif state at the end of the same
memory block (see the sketch below). Fix a few places where things should be
free'd and were not. Change enqueue to return 0 when shutting down. Fix a
race related to shutdown. When running eunit under gdb, ?cmd() calls seem to
fail, so I've created rmdir:path() to replace the ?cmd("rm -rf path") calls.
than those in the various queues. Re-enable the (still failing) truncate
tests (because they don't SEGV anymore). There may still be a memory leak;
valgrind is next.
the request queue over to a simple fifo queue which could (if needed) be
made lock-free. Cursor searches can now optionally specify that they are
mid-scan so as not to have their cursor handles reset on every call.
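The mid-scan flag presumably gates the reset, roughly like this (a sketch
against the real WT_CURSOR methods; the flag plumbing is assumed):

    #include "wiredtiger.h"

    /* assumes the caller has already set the search key on the cursor */
    static int cursor_search_step(WT_CURSOR *cursor, int mid_scan)
    {
        int rc = cursor->search(cursor);
        if (!mid_scan)
            (void)cursor->reset(cursor); /* release position between calls */
        return rc;                       /* mid-scan: keep position for reuse */
    }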
stop sending it over. Allow the user to set a "type" of storage in their
config: either 'table' for a btree or 'lsm' for a log-structured merge tree.
Various other cleanup.
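On the WiredTiger side that choice lands in the create-time config; a hedged
sketch (the exact config string wterl builds may differ, and the
key_format=u/value_format=u settings are an assumption):

    #include "wiredtiger.h"

    static int create_object(WT_SESSION *session, const char *uri, int use_lsm)
    {
        /* 'file' is WiredTiger's btree-backed default; 'lsm' selects the
           log-structured merge tree data source */
        return session->create(session, uri,
            use_lsm ? "type=lsm,key_format=u,value_format=u"
                    : "type=file,key_format=u,value_format=u");
    }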
dtor commented out, otherwise it would SEGV on GC. Some of the truncate
tests fail (a race?) but don't SEGV, so that's not so bad. Fixed numerous
issues and also removed a mutex and queue of idle worker threads because
they weren't being used, so why bother with them?
* No longer expose WT_SESSION into Erlang at all, as WT's model is to
  maintain one WT_SESSION per thread, and we don't know anything about
  threads in Erlang.
* async_nif worker threads no longer pull both mutexes on every loop
  when processing requests, only one.
* async_nif provides a worker_id (int, 0 - MAX_WORKERS) within the
  work block scope, which we use to find our per-worker WT_SESSIONs
  (see the sketch after this list).
* async_nif maintained a number of globals which I'm moving into
the NIF's priv_data so that on upgrade/reload we have a fighting
chance to "Do the Right Thing(TM)".
* NIF upgrades/reloads: started to plumb this in.
* Use a khash to manage the cache of URI->WT_CURSORs per WT_SESSION
  (see the sketch after this list).
* Added start/stop positions to the truncate call to allow for truncating
  sub-ranges of data.
* Lots of other details I'm sure I've forgotten, and more left undone.
  Search for "TODO:" or try to compile to see what's left, and then there's
  a need for a lot more tests given all this new complexity.
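A sketch tying the worker_id, per-worker WT_SESSION, and khash cursor-cache
points together. Everything here is illustrative (names, the state layout in
priv_data, the error handling); the WT_SESSION/WT_CURSOR calls and khash
macros are the real APIs:

    #include "wiredtiger.h"
    #include "khash.h"

    KHASH_MAP_INIT_STR(uri2cur, WT_CURSOR *)  /* URI -> cached cursor */

    #define MAX_WORKERS 128

    struct worker_session {
        WT_SESSION *session;                  /* one WT_SESSION per worker */
        khash_t(uri2cur) *cursors;            /* per-session cursor cache */
    };

    struct wterl_state {                      /* lives in the NIF's priv_data */
        struct worker_session workers[MAX_WORKERS];
    };

    /* Assumes workers[worker_id] was initialized when the worker started.
       Note the URI key is stored by pointer, not copied, echoing the
       earlier leak fix -- so it must outlive the cache entry. */
    static WT_CURSOR *
    get_cursor(struct wterl_state *st, unsigned int worker_id, const char *uri)
    {
        struct worker_session *w = &st->workers[worker_id];
        khiter_t it = kh_get(uri2cur, w->cursors, uri);
        if (it != kh_end(w->cursors))
            return kh_value(w->cursors, it);  /* cache hit */

        WT_CURSOR *c;
        if (w->session->open_cursor(w->session, uri, NULL, NULL, &c) != 0)
            return NULL;
        int ret;
        it = kh_put(uri2cur, w->cursors, uri, &ret);
        kh_value(w->cursors, it) = c;
        return c;
    }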