Previously, if a lookup of an offset and size tuple
from the process dictionary returned a checksum that
was different from the one we were expecting, it
would trigger a case clause exception. We should
handle that case a little more elegantly.
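A minimal sketch of the more tolerant handling, with an invented
helper name and error tuple (not the real Machi code):

    %% Hypothetical helper: look up the cached checksum for an
    %% {Offset, Size} key in the process dictionary and report a
    %% mismatch instead of crashing with a case_clause error.
    check_cached_csum(OffSize, ExpectedCSum) ->
        case erlang:get(OffSize) of
            undefined    -> not_cached;
            ExpectedCSum -> ok;
            OtherCSum    -> {error, {bad_checksum, OtherCSum}}
        end.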
How can even computer?
So, there's a flavor of the flapping infinite loop problem that
can happen without flapping being detected (by the existing
flapping detector, that is). That detector relies on a series of
accepted projections to converge to a single projection repeated
X times. However, it's possible to have a race with a simulated
repair "finishing" that leaves the system in a state where no more
projections are ever accepted. Oops.
See also: new comments in do_react_to_env().
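For context, here is a minimal sketch (invented names, plain map for
state) of the convergence rule the existing detector depends on: the
same accepted projection has to repeat N times in a row.

    %% Count consecutive acceptances of the same projection; if the
    %% run reaches N, report convergence, otherwise keep waiting.
    %% If nothing is ever accepted again, this counter never advances,
    %% which is exactly the blind spot described above.
    converge_check(Proj, #{last := Proj, count := C}, N) when C + 1 >= N ->
        {converged, #{last => Proj, count => C + 1}};
    converge_check(Proj, #{last := Proj, count := C}, _N) ->
        {waiting, #{last => Proj, count => C + 1}};
    converge_check(Proj, _State, _N) ->
        {waiting, #{last => Proj, count => 1}}.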
PULSE managed to create a situation where machi_proxy_flu_client1
would appear to fail a remote attempt to write_projection. The
client would retry, but the 1st attempt really did get through to
the server. So, if we hit this case, we try to read the projection,
and if it's exactly equal to what we tried to write, we consider the
op a success.
Ditto for write_chunk.
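A minimal sketch of that write-then-verify fallback, using
caller-supplied WriteFun/ReadFun instead of the real proxy API
(which is not reproduced here):

    %% If the write reports an error, read the value back; an exact
    %% match means the "failed" first attempt actually reached the
    %% server, so the operation is treated as a success.
    write_with_readback(WriteFun, ReadFun, Val) ->
        case WriteFun(Val) of
            ok ->
                ok;
            {error, _} = Err ->
                case ReadFun() of
                    {ok, Val} -> ok;
                    _         -> Err
                end
        end.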
Fix up eunit test to accommodate the change of semantics.
{sigh} This is a correction to a think-o error in the
"WIP: bugfix for rare flapping infinite loop (better fix I hope)"
bugfix that I thought I had finished in the slf/chain-manager/cp-mode
branch.
Silly me, the test for myself as the author of the not_sane transition was
wrong: we don't do that kind of insanity; other nodes might, though. ^_^
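A tiny sketch of the corrected test, with invented names; the point is
only that the special handling should trigger for other authors, never
for ourselves:

    %% We never author a not_sane transition ourselves, so only react
    %% when the author is some other node.
    not_sane_author_is_other(MyName, Author) ->
        Author =/= MyName.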
Due to changes by slf/chain-manager/cp-mode branch, there are
no longer extraneous epoch changes by "larger" authors that
re-suggest the same UPI+Repairing just because their author rank
is very slightly higher than the current epoch. Thus the
partial_stop_restart2() test only needs to deal with one epoch
change instead of the original two.
%% So, I'd tried this kind of "if everyone is doing it, then we
%% 'agree' and we can do something different" strategy before,
%% and it didn't work then. Silly me. Distributed systems
%% lesson #823: do not forget the past. In a situation created
%% by PULSE, of all=[a,b,c,d,e], b & d & e were scheduled
%% completely unfairly. So a & c were the only authors ever to
%% successfully write a suggested projection to a public store.
%% Oops.
%%
%% So, we're going to keep track in #ch_mgr state for the number
%% of times that this insane judgement has happened.
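A minimal sketch of that bookkeeping, with an invented record and field
name standing in for the real #ch_mgr state:

    %% Bump a per-manager counter every time the "everyone is making
    %% the same insane transition" judgement fires, so later logic can
    %% react once the count crosses some threshold.
    -record(ch_mgr_sketch, {not_sane_count = 0 :: non_neg_integer()}).

    bump_not_sane_count(#ch_mgr_sketch{not_sane_count = N} = S) ->
        S#ch_mgr_sketch{not_sane_count = N + 1}.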
I'll run a set of PULSE tests (Cmd_e of the 'regression' style)
to try to confirm a fix for this pernicious little thing.
Final (?) part of the fix: add myself to SeenFlappers in
react_to_env_A30().
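A one-liner sketch of that last piece, assuming SeenFlappers is a
simple list of server names (helper name invented):

    %% Ensure my own name is present in the flapper list exactly once.
    add_self_to_seen_flappers(MyName, SeenFlappers) ->
        lists:usort([MyName | SeenFlappers]).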
Run via:
env PULSE_NOSHRINK=yes PULSE_SKIP_NEW=yes PULSE_TIME=900 make pulse
So, this one hangs here:
tick-<0.991.0>,dump_state(){prop,machi_chain_manager1_pulse,358,<0.891.0>}
At machi_chain_manager1_pulse.erl line 358, that's after the return
of run_commands(). The next verbose message should come from line
362, after the return of pulse:run(), but that message never appears.
My laptop CPU is really busy (fans running, case is hot), but neither
the console nor disterl is available right now, so no idea why, alas.
Ah, when I run with a console available and then run Redbug, there is
zero call activity for both machi_chain_manager1_pulse:'_' and
machi_chain_manager1:'_'.
This may be related to a bad/ugly shutdown? In both hang cases,
I see at least one SASL error message such as the one below ...
BUT! There should be erlang:display() messages from the shutdown_hard()
function, which does some exit(Pid, kill) calls, but there is no output
from them! So, the killing is coming from some kind of PULSE-initiated
process shutdown/cleanup/??
=SUPERVISOR REPORT==== 16-Jul-2015::20:24:31 ===
     Supervisor: {local,machi_sup}
     Context:    shutdown_error
     Reason:     killed
     Offender:   [{pid,<0.200.0>},
                  {name,machi_flu_sup},
                  {mfargs,{machi_flu_sup,start_link,[]}},
                  {restart_type,permanent},
                  {shutdown,5000},
                  {child_type,supervisor}]