Commit graph

660 commits

Author SHA1 Message Date
Scott Lystig Fritchie
4e7d1f2310 WIP: egadz, a refactoring mess, but finally AP mode not sucky 2015-08-20 17:32:46 +09:00
Scott Lystig Fritchie
a71e9543fe WIP: refactoring inner handling, but ...
There are a couple of weird things in the snippet below (AP mode):

    22:32:58.209 b uses inner: [{epoch,136},{author,c},{mode,ap_mode},{witnesses,[]},{upi,[b,c]},{repair,[]},{down,[a]},{flap,undefined},{d,[d_foo1,{ps,[{a,b}]},{nodes_up,[b,c]}]},{d2,[]}] (outer flap epoch 136: {flap_i,{{{epk,115},{1439,904777,11627}},28},[a,{a,problem_with,b},{b,problem_with,a}],[{a,{{{epk,126},{1439,904777,149865}},16}},{b,{{{epk,115},{1439,904777,11627}},28}},{c,{{{epk,121},{1439,904777,134392}},15}}]}) (my flap {{epk,115},{1439,904777,11627}} 29 [{a,{{{epk,126},{1439,904777,149865}},28}},{b,{{{epk,115},{1439,904777,11627}},29}},{c,{{{epk,121},{1439,904777,134392}},26}}])

    22:32:58.224 c uses inner: [{epoch,136},{author,c},{mode,ap_mode},{witnesses,[]},{upi,[b,c]},{repair,[]},{down,[a]},{flap,undefined},{d,[d_foo1,{ps,[{a,b}]},{nodes_up,[b,c]}]},{d2,[]}] (outer flap epoch 136: {flap_i,{{{epk,115},{1439,904777,11627}},28},[a,{a,problem_with,b},{b,problem_with,a}],[{a,{{{epk,126},{1439,904777,149865}},16}},{b,{{{epk,115},{1439,904777,11627}},28}},{c,{{{epk,121},{1439,904777,134392}},15}}]}) (my flap {{epk,121},{1439,904777,134392}} 28 [{a,{{{epk,126},{1439,904777,149865}},28}},{b,{{{epk,115},{1439,904777,11627}},28}},{c,{{{epk,121},{1439,904777,134392}},28}}])

    CONFIRM by epoch inner 136 <<103,64,252,...>> at [b,c] []

    Priv1 [{a,{{132,<<"Cï|ÿzKX:Á"...>>},[a],[c],[b],[],false}},
           {b,{{127,<<185,139,3,2,96,189,...>>},[b,c],[],[a],[],false}},
           {c,{{133,<<145,71,223,6,177,...>>},[b,c],[a],[],[],false}}] agree false
    Pubs: [{a,136},{b,136},{c,136}]
    DoIt,

1. Both the "uses inner" messages and the "CONFIRM by epoch inner 136"
   line show that b & c are using the same inner projection.

   However, the 'Priv1' output shows b & c on different private epochs,
   127 & 133.  Weird.

2. I've added an infinite loop, probably in this commit.  :-(
2015-08-18 22:35:57 +09:00
Scott Lystig Fritchie
9bf0eedb64 WIP: add the flapping manifesto, much is much much better now 2015-08-18 20:49:36 +09:00
Scott Lystig Fritchie
e9268080af Finish/catchup commit from end of last week, silly me 2015-08-17 20:14:29 +09:00
Scott Lystig Fritchie
48e82ac1a4 WIP: use digraph to calculate better AllHosed 2015-08-14 22:29:20 +09:00
Scott Lystig Fritchie
20f2bf4b92 WIP: more ?REACT() tracing 2015-08-14 22:28:50 +09:00
Scott Lystig Fritchie
d2ce8f8447 Fix repair bug that has survived witness additions, oops 2015-08-14 19:30:36 +09:00
Scott Lystig Fritchie
9e02a1ea73 Add more ?REACT() tracing 2015-08-14 19:30:05 +09:00
Scott Lystig Fritchie
5aff775383 WIP: it's ugly, but CP+witnesses is mostly working? 2015-08-14 17:05:16 +09:00
Scott Lystig Fritchie
4e66d7bd91 WIP: keep CMode propagation consistent, but still violating CP transition safety 2015-08-14 00:12:13 +09:00
Scott Lystig Fritchie
14fad2d704 End-to-end chain state checking is still broken
If we use verbose output from:

    machi_chain_manager1_converge_demo:t(3, [{private_write_verbose,true}, {consistency_mode, cp_mode}, {witnesses, [a]}]).

And use:

    tail -f typescript_file | egrep --line-buffered 'SET|attempted|CONFIRM'

... then we can clearly see a chain safety violation when moving from
epoch 81 -> 83.  I need to add more smarts to the safety checking,
both at the individual transition sanity check and at the converge_demo
overall rolling sanity check; a sketch of such a check follows the log below.

Key to output: CONFIRM by epoch {num} {csum} at {UPI} {Repairing}

    SET # of FLUs = 3 members [a,b,c]).
    CONFIRM by epoch 1 <<96,161,96,...>> at [a,b] [c]
    CONFIRM by epoch 5 <<134,243,175,...>> at [b,c] []
    CONFIRM by epoch 7 <<207,93,225,...>> at [b,c] []
    CONFIRM by epoch 47 <<60,142,248,...>> at [b,c] []
    SET partitions = [{c,b},{c,a}] (1 of 2) at {22,3,34}
    CONFIRM by epoch 81 <<223,58,184,...>> at [a,b] []
    SET partitions = [{b,c},{b,a}] (2 of 2) at {22,3,38}
    CONFIRM by epoch 83 <<33,208,224,...>> at [a,c] []
    SET partitions = []
    CONFIRM by epoch 85 <<173,179,149,...>> at [a,c] [b]
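
The log above shows the violation concretely: epoch 81's UPI is [a,b] and
epoch 83's is [a,c], so the only server the two have in common is the
witness a.  Below is a minimal sketch of the kind of rolling check meant
above, under the assumption that consecutive CONFIRMed UPIs must share at
least one non-witness server; the module and function names are
hypothetical, not converge_demo's API.

    %% Hypothetical sketch, not converge_demo code: a rolling check that any
    %% two consecutively CONFIRMed epochs share at least one non-witness
    %% ("full") server in their UPI lists.
    -module(upi_overlap_sketch).
    -export([check_confirmed_sequence/2]).

    %% True if the two UPI lists share at least one non-witness member.
    upis_share_full_server(UPI1, UPI2, Witnesses) ->
        Common = [X || X <- UPI1, lists:member(X, UPI2)],
        lists:any(fun(X) -> not lists:member(X, Witnesses) end, Common).

    %% Walk {Epoch, UPI} pairs in epoch order; return the first unsafe
    %% transition, or ok if every consecutive pair overlaps safely.
    check_confirmed_sequence([{E1, U1}, {E2, U2} | Rest], Witnesses) ->
        case upis_share_full_server(U1, U2, Witnesses) of
            true  -> check_confirmed_sequence([{E2, U2} | Rest], Witnesses);
            false -> {unsafe_transition, E1, E2}
        end;
    check_confirmed_sequence(_, _Witnesses) ->
        ok.

With the epochs above, check_confirmed_sequence([{47,[b,c]},{81,[a,b]},{83,[a,c]}], [a])
would return {unsafe_transition,81,83}.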
2015-08-13 22:16:28 +09:00
Scott Lystig Fritchie
e956c0b534 Fix (yet again) converge demo stable criteria 2015-08-13 21:26:07 +09:00
Scott Lystig Fritchie
f7121f8845 Witness + flapping seems to mostly work, yay! 2015-08-13 21:24:56 +09:00
Scott Lystig Fritchie
425b9c8f60 Merge slf/projection-conditional-write branch 2015-08-13 19:10:48 +09:00
Scott Lystig Fritchie
dcbc3b45ff C110: handle proj store private write failure when conditional fails 2015-08-13 18:45:15 +09:00
Scott Lystig Fritchie
9768f3c035 Projection store private write returns bad_arg if max_public_epochid is greater 2015-08-13 18:44:25 +09:00
Scott Lystig Fritchie
58d840ef7e Minor react changes, minor fix for return val of A50 2015-08-13 18:43:41 +09:00
Scott Lystig Fritchie
eecf5479ed Tweak stability criteria for converge demo 2015-08-13 16:18:33 +09:00
Scott Lystig Fritchie
d4275e5460 WIP: zerf_find_last_common() fix, eunit passes & very basic len=3 converge demo works 2015-08-13 15:41:18 +09:00
Scott Lystig Fritchie
0b8de235a9 WIP: zerf_find_last_common(), but is confused/broken by partial write @ private 2015-08-13 14:21:31 +09:00
Scott Lystig Fritchie
054397d187 WIP: find last common majority epoch 2015-08-12 17:53:39 +09:00
Scott Lystig Fritchie
d340b6a706 WIP: Duh, fix think-o in a40_latest_author_down() 2015-08-12 17:37:45 +09:00
Scott Lystig Fritchie
8e2a688526 WIP: cp_mode code from last Friday 2015-08-11 15:24:26 +09:00
Scott Lystig Fritchie
30a5652299 WIP: refining stable success for machi_chain_manager1_converge_demo, even better 2015-08-07 15:06:23 +09:00
Scott Lystig Fritchie
512251ac55 Adjust flap_limit constant 2015-08-07 12:29:10 +09:00
Scott Lystig Fritchie
c8ddce103e WIP: refining stable success for machi_chain_manager1_converge_demo 2015-08-07 12:28:51 +09:00
Scott Lystig Fritchie
3ca0f4491d WIP: always start chain manager with none projection 2015-08-06 19:24:14 +09:00
Scott Lystig Fritchie
0d7f6c8d7e WIP: chain transitions are now fully (?) aware of witness servers 2015-08-06 17:48:31 +09:00
Scott Lystig Fritchie
e9c4e2f98d WIP: rearrange CP mode projection calc 2015-08-06 15:22:04 +09:00
Scott Lystig Fritchie
82b6726261 Revert UPI [] -> [FirstRepairing] to commit 91496c6 2015-08-06 15:21:44 +09:00
Scott Lystig Fritchie
01da7a7046 TODO WTF was I thinking here??.... 2015-08-06 14:13:19 +09:00
Scott Lystig Fritchie
dcf532bafd WIP: Witness test expansion 2015-08-05 18:23:44 +09:00
Scott Lystig Fritchie
0f18ab8d20 Add better (?) timeout handling to machi_cr_client.erl gen_server calls 2015-08-05 17:48:06 +09:00
Scott Lystig Fritchie
e3d9ba2b83 WIP: Witness test expansion 2015-08-05 17:17:25 +09:00
Scott Lystig Fritchie
b21803a6c6 Fix witness calculation projections, part II 2015-08-05 16:05:03 +09:00
Scott Lystig Fritchie
f43a5ca96d Fix witness calculation projections, part I 2015-08-05 15:50:32 +09:00
Scott Lystig Fritchie
91496c656b Oops, fix PB stuff to add witnesses 2015-08-05 12:53:20 +09:00
Scott Lystig Fritchie
3f51357577 WIP: pre-travel code, not sure if good, check in for history 2015-07-30 13:12:08 -07:00
Scott Lystig Fritchie
aa1a31982a Add 'witnesses' to machi_projection:make_summary() 2015-07-30 13:11:43 -07:00
Scott Lystig Fritchie
6e521700bd WIP: Adding witness_smoke_test_ but it's broken
So, the problem is that the chain manager isn't finishing repair
because UPI=[a], a is a witness, and a witness can't do the list-files
and other repair work that repairer FLUs need to do.

The best (?) way forward is probably to add some advance smarts to the
chain manager so that it never proposes a UPI that is 100% witnesses;
see the sketch below.
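
A minimal sketch of that guard, using hypothetical names rather than the
chain manager's real API: a candidate UPI is acceptable only if it
contains at least one non-witness FLU.

    %% Hypothetical sketch of a "no all-witness UPI" guard.
    -module(witness_upi_guard_sketch).
    -export([maybe_accept_upi/2]).

    %% True if the candidate UPI contains at least one non-witness FLU.
    upi_has_full_member(UPI, Witnesses) ->
        lists:any(fun(FLU) -> not lists:member(FLU, Witnesses) end, UPI).

    %% Reject any candidate UPI made up entirely of witnesses: a witness
    %% can't perform the repair work that a repairer FLU needs.
    maybe_accept_upi(UPI, Witnesses) ->
        case upi_has_full_member(UPI, Witnesses) of
            true  -> ok;
            false -> {reject, all_witness_upi}
        end.

For the situation above, maybe_accept_upi([a], [a]) returns
{reject, all_witness_upi}.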
2015-07-21 19:05:04 +09:00
Scott Lystig Fritchie
432190435e Add witness_mode to FLU 2015-07-21 17:29:33 +09:00
Scott Lystig Fritchie
6ed5767e06 Merge branch 'slf/chain-manager/cp-mode2' 2015-07-21 14:24:08 +09:00
Scott Lystig Fritchie
52dc40e1fe converge demo: converged iff all private projs are stable and all inner/outer 2015-07-21 14:19:08 +09:00
Scott Lystig Fritchie
88d3228a4c Fix various problems with repair not being aware of inner projections 2015-07-20 16:25:42 +09:00
Scott Lystig Fritchie
319397ecd2 machi_chain_manager1_pulse.erl tweaks 2015-07-20 15:08:03 +09:00
Scott Lystig Fritchie
9ae4afa58e Reduce chmgr verbosity a bit 2015-07-20 14:58:21 +09:00
Scott Lystig Fritchie
e14493373b Bugfix: add missing reset of not_sanes dictionary, fix comments 2015-07-20 14:04:25 +09:00
Scott Lystig Fritchie
f7ef8c54f5 Reduce # of assumptions made by ch_mgr + simulator for 'repair_airquote_done' 2015-07-19 13:32:55 +09:00
Scott Lystig Fritchie
b8c642aaa7 WIP: bugfix for rare flapping infinite loop (done^2 fix I hope)
How can even computer?

So, there's a flavor of the flapping infinite loop problem that
can happen without flapping being detected (by the existing
flapping detector, that is).  That detector relies on a series of
accepted projections to converge to a single projection repeated
X times.  However, a race with a simulated repair "finishing" can get
things stuck so that no more projections are ever accepted, which means
the detector never fires.  Oops.

See also: new comments in do_react_to_env().
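
A minimal sketch of that style of detector, with hypothetical names (not
the ch_mgr's actual state or API): it only trips after the same accepted
projection has been seen flap_limit times in a row, which is exactly why
it stays silent once projections stop being accepted at all.

    %% Hypothetical sketch of a repetition-based flapping detector.  Key is
    %% whatever identifies "the same" projection, e.g. {UPI, Repairing, Down}.
    -module(flap_detect_sketch).
    -export([new/1, note_accepted/2]).

    -record(fd, {limit, last, count = 0}).

    new(FlapLimit) -> #fd{limit = FlapLimit}.

    %% Same projection seen again and the repetition limit is reached: flapping.
    note_accepted(Key, #fd{last = Key, count = C, limit = L} = FD) when C + 1 >= L ->
        {flapping, FD#fd{count = C + 1}};
    %% Same projection seen again, but not yet at the limit.
    note_accepted(Key, #fd{last = Key, count = C} = FD) ->
        {ok, FD#fd{count = C + 1}};
    %% A different projection was accepted: start counting over.
    note_accepted(Key, FD) ->
        {ok, FD#fd{last = Key, count = 1}}.

The failure mode described above is that note_accepted/2 simply stops
being called, so no tuning of the limit can help.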
2015-07-19 00:43:10 +09:00
Scott Lystig Fritchie
57b7122035 Fix bug found by PULSE that's not directly chain manager-related
PULSE managed to create a situation where machi_proxy_flu_client1
would appear to fail a remote attempt to write_projection.  The
client would retry, but the 1st attempt really did get through to
the server.  So, if we hit this case, we try to read the projection,
and if it's exactly equal to what we tried to write, we consider the
op a success.

Ditto for write_chunk.

Fix up eunit test to accommodate the change of semantics.
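
A minimal sketch of that retry/read-back pattern, with hypothetical fun
arguments rather than machi_proxy_flu_client1's real calls: on a write
error, read the value back and treat an exact match as success, because
the first attempt may have reached the server even though its reply was
lost.

    %% Hypothetical sketch of "retry, then read back and compare".
    -module(readback_retry_sketch).
    -export([write_with_readback/3]).

    %% WriteFun(Value) -> ok | {error, Reason}
    %% ReadFun() -> {ok, StoredValue} | {error, Reason}
    write_with_readback(WriteFun, ReadFun, Value) ->
        case WriteFun(Value) of
            ok ->
                ok;
            {error, _} = Err ->
                case ReadFun() of
                    {ok, Value} -> ok;   %% our earlier write actually landed
                    _Other      -> Err   %% genuinely failed, or different data
                end
        end.

As the message above notes, the same check applies to write_chunk.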
2015-07-18 23:22:14 +09:00