From 03b118b52ccd3cf67df616db7c47aef4662a284c Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Mon, 21 Dec 2015 14:46:17 +0900 Subject: [PATCH 01/53] Clustering API changes in various docs * name-game-sketch.org * flu-and-chain-lifecycle.org * FAQ.md I've left out changes to the two design docs for now; most of their respective texts omit multiple chain scenarios entirely, so there isn't a huge amount to change. --- FAQ.md | 166 +++--- doc/README.md | 6 +- doc/cluster-of-clusters/migration-3to4.png | Bin 7910 -> 0 bytes doc/cluster-of-clusters/migration-4.png | Bin 7851 -> 0 bytes doc/cluster-of-clusters/name-game-sketch.org | 479 ------------------ .../migration-3to4.fig | 20 +- doc/cluster/migration-3to4.png | Bin 0 -> 7756 bytes doc/cluster/migration-4.png | Bin 0 -> 7607 bytes doc/cluster/name-game-sketch.org | 469 +++++++++++++++++ doc/flu-and-chain-lifecycle.org | 43 +- src/machi_lifecycle_mgr.erl | 2 +- 11 files changed, 593 insertions(+), 592 deletions(-) delete mode 100644 doc/cluster-of-clusters/migration-3to4.png delete mode 100644 doc/cluster-of-clusters/migration-4.png delete mode 100644 doc/cluster-of-clusters/name-game-sketch.org rename doc/{cluster-of-clusters => cluster}/migration-3to4.fig (85%) create mode 100644 doc/cluster/migration-3to4.png create mode 100644 doc/cluster/migration-4.png create mode 100644 doc/cluster/name-game-sketch.org diff --git a/FAQ.md b/FAQ.md index f2e37c1..6d43e8f 100644 --- a/FAQ.md +++ b/FAQ.md @@ -11,14 +11,14 @@ + [1 Questions about Machi in general](#n1) + [1.1 What is Machi?](#n1.1) - + [1.2 What is a Machi "cluster of clusters"?](#n1.2) - + [1.2.1 This "cluster of clusters" idea needs a better name, don't you agree?](#n1.2.1) - + [1.3 What is Machi like when operating in "eventually consistent" mode?](#n1.3) - + [1.4 What is Machi like when operating in "strongly consistent" mode?](#n1.4) - + [1.5 What does Machi's API look like?](#n1.5) - + [1.6 What licensing terms are used by Machi?](#n1.6) - + [1.7 
Where can I find the Machi source code and documentation? Can I contribute?](#n1.7) - + [1.8 What is Machi's expected release schedule, packaging, and operating system/OS distribution support?](#n1.8) + + [1.2 What is a Machi chain?](#n1.2) + + [1.3 What is a Machi cluster?](#n1.3) + + [1.4 What is Machi like when operating in "eventually consistent" mode?](#n1.4) + + [1.5 What is Machi like when operating in "strongly consistent" mode?](#n1.5) + + [1.6 What does Machi's API look like?](#n1.6) + + [1.7 What licensing terms are used by Machi?](#n1.7) + + [1.8 Where can I find the Machi source code and documentation? Can I contribute?](#n1.8) + + [1.9 What is Machi's expected release schedule, packaging, and operating system/OS distribution support?](#n1.9) + [2 Questions about Machi relative to {{something else}}](#n2) + [2.1 How is Machi better than Hadoop?](#n2.1) + [2.2 How does Machi differ from HadoopFS/HDFS?](#n2.2) @@ -28,13 +28,15 @@ + [3 Machi's specifics](#n3) + [3.1 What technique is used to replicate Machi's files? Can other techniques be used?](#n3.1) + [3.2 Does Machi have a reliance on a coordination service such as ZooKeeper or etcd?](#n3.2) - + [3.3 Is it true that there's an allegory written to describe humming consensus?](#n3.3) - + [3.4 How is Machi tested?](#n3.4) - + [3.5 Does Machi require shared disk storage? e.g. 
iSCSI, NBD (Network Block Device), Fibre Channel disks](#n3.5)
- + [3.6 Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device?](#n3.6)
- + [3.7 What language(s) is Machi written in?](#n3.7)
- + [3.8 Does Machi use the Erlang/OTP network distribution system (aka "disterl")?](#n3.8)
- + [3.9 Can I use HTTP to write/read stuff into/from Machi?](#n3.9)
+ + [3.3 Are there any presentations available about Humming Consensus?](#n3.3)
+ + [3.4 Is it true that there's an allegory written to describe Humming Consensus?](#n3.4)
+ + [3.5 How is Machi tested?](#n3.5)
+ + [3.6 Does Machi require shared disk storage? e.g. iSCSI, NBD (Network Block Device), Fibre Channel disks](#n3.6)
+ + [3.7 Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device?](#n3.7)
+ + [3.8 What language(s) is Machi written in?](#n3.8)
+ + [3.9 Can Machi run on Windows? Can Machi run on 32-bit platforms?](#n3.9)
+ + [3.10 Does Machi use the Erlang/OTP network distribution system (aka "disterl")?](#n3.10)
+ + [3.11 Can I use HTTP to write/read stuff into/from Machi?](#n3.11)
@@ -48,7 +50,7 @@
 Very briefly, Machi is a very simple append-only file store.
 Machi is "dumber" than many other file stores (i.e., lacking many features
-found in other file stores) such as HadoopFS or simple NFS or CIFS file
+found in other file stores) such as HadoopFS or a simple NFS or CIFS file
 server.
 However, Machi is a distributed file store, which makes it different
(and, in some ways, more complicated) than a simple NFS or CIFS file
@@ -82,45 +84,39 @@
 For a much longer answer, please see the
[Machi high level design
doc](https://github.com/basho/machi/tree/master/doc/high-level-machi.pdf).
-### 1.2. What is a Machi "cluster of clusters"?
+### 1.2. What is a Machi chain?
-Machi's design is based on using small, well-understood and provable -(mathematically) techniques to maintain multiple file copies without -data loss or data corruption. At its lowest level, Machi contains no -support for distribution/partitioning/sharding of files across many -servers. A typical, fully-functional Machi cluster will likely be two -or three machines. +A Machi chain is a small number of machines that maintain a common set +of replicated files. A typical chain is of length 2 or 3. For +critical data that must be available despite several simultaneous +server failures, a chain length of 6 or 7 might be used. -However, Machi is designed to be an excellent building block for -building larger systems. A deployment of Machi "cluster of clusters" -will use the "random slicing" technique for partitioning files across -multiple Machi clusters that, as individuals, are unaware of the -larger cluster-of-clusters scheme. + +### 1.3. What is a Machi cluster? -The cluster-of-clusters management service will be fully decentralized +A Machi cluster is a collection of Machi chains that +partitions/shards/distributes files (based on file name) across the +collection of chains. Machi uses the "random slicing" algorithm (a +variation of consistent hashing) to define the mapping of file name to +chain name. + +The cluster management service will be fully decentralized and run as a separate software service installed on each Machi cluster. This manager will appear to the local Machi server as simply -another Machi file client. The cluster-of-clusters managers will take +another Machi file client. The cluster managers will take care of file migration as the cluster grows and shrinks in capacity and in response to day-to-day changes in workload. 
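As a concrete illustration of the file-name-to-chain mapping that the new section 1.3 describes, here is a small sketch in Python (illustrative only: Machi itself is written in Erlang, and the slice-map layout, the SHA-1 hash choice, and the chain names below are invented for the example, not Machi's actual format):

```python
import hashlib

# Hypothetical slice map: [start, end) intervals covering the unit
# interval, each assigned to a chain name.  The real Machi map format
# and chain names are assumptions made for this illustration.
SLICE_MAP = [
    (0.00, 0.50, "chain1"),
    (0.50, 0.75, "chain2"),
    (0.75, 1.00, "chain3"),
]

def file_name_to_chain(file_name):
    """Hash a file name onto the unit interval, then find its slice."""
    digest = hashlib.sha1(file_name.encode("utf-8")).digest()
    point = int.from_bytes(digest[:8], "big") / float(1 << 64)  # in [0.0, 1.0)
    for start, end, chain in SLICE_MAP:
        if start <= point < end:
            return chain
    return SLICE_MAP[-1][2]  # unreachable for point < 1.0; defensive
```

Because the map is consulted rather than a modulus taken, growing the cluster only re-slices the affected intervals, which is the property random slicing shares with consistent hashing.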
-Though the cluster-of-clusters manager has not yet been implemented,
+Though the cluster manager has not yet been implemented,
 its design is fully decentralized and capable of operating despite
-multiple partial failure of its member clusters. We expect this
+multiple partial failures of its member chains. We expect this
 design to scale easily to at least one thousand servers.
 Please see the
[Machi source repository's 'doc' directory for more details](https://github.com/basho/machi/tree/master/doc/).
-
-#### 1.2.1. This "cluster of clusters" idea needs a better name, don't you agree?
-
-Yes. Please help us: we are bad at naming things.
-For proof that naming things is hard, see
-[http://martinfowler.com/bliki/TwoHardThings.html](http://martinfowler.com/bliki/TwoHardThings.html)
-
-
-### 1.3. What is Machi like when operating in "eventually consistent" mode?
+
+### 1.4. What is Machi like when operating in "eventually consistent" mode?
 Machi's operating mode dictates how a Machi cluster will react to
network partitions. A network partition may be caused by:
@@ -143,13 +139,13 @@ consistency mode during and after network partitions are:
 together from "all sides" of the partition(s).
 * Unique files are copied in their entirety.
 * Byte ranges within the same file are merged. This is possible
-    due to Machi's restrictions on file naming (files names are
-    alwoys assigned by Machi servers) and file offset assignments
-    (byte offsets are also always chosen by Machi servers according
-    to rules which guarantee safe mergeability.).
+    due to Machi's restrictions on file naming and file offset
+    assignment. Both file names and file offsets are always chosen
+    by Machi servers according to rules which guarantee safe
+    mergeability.
-
-### 1.4. What is Machi like when operating in "strongly consistent" mode?
+
+### 1.5. What is Machi like when operating in "strongly consistent" mode?
The consistency semantics of file operations while in strong
consistency mode during and after network partitions are:
@@ -167,13 +163,13 @@ consistency mode during and after network partitions are:
 Machi's design can provide the illusion of quorum minority write
availability if the cluster is configured to operate with "witness
-servers". (This feaure is not implemented yet, as of June 2015.)
+servers". (This feature is partially implemented, as of December 2015.)
 See Section 11 of
[Machi chain manager high level design
doc](https://github.com/basho/machi/tree/master/doc/high-level-chain-mgr.pdf)
for more details.
-
-### 1.5. What does Machi's API look like?
+
+### 1.6. What does Machi's API look like?
 The Machi API only contains a handful of API operations. The function
arguments shown below use Erlang-style type annotations.
@@ -204,15 +200,15 @@ level" internal protocol are in a
[Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview)
definition at
[./src/machi.proto](./src/machi.proto).
-
-### 1.6. What licensing terms are used by Machi?
+
+### 1.7. What licensing terms are used by Machi?
 All Machi source code and documentation is licensed by
[Basho Technologies, Inc.](http://www.basho.com/)
under the
[Apache Public License version 2](https://github.com/basho/machi/tree/master/LICENSE).
-
-### 1.7. Where can I find the Machi source code and documentation? Can I contribute?
+
+### 1.8. Where can I find the Machi source code and documentation? Can I contribute?
 All Machi source code and documentation can be found at GitHub:
[https://github.com/basho/machi](https://github.com/basho/machi).
@@ -226,8 +222,8 @@ ideas for improvement, please see our contributing & collaboration
guidelines at
[https://github.com/basho/machi/blob/master/CONTRIBUTING.md](https://github.com/basho/machi/blob/master/CONTRIBUTING.md).
-
-### 1.8. What is Machi's expected release schedule, packaging, and operating system/OS distribution support?
+
+### 1.9.
What is Machi's expected release schedule, packaging, and operating system/OS distribution support? Basho expects that Machi's first major product release will take place during the 2nd quarter of 2016. @@ -305,15 +301,15 @@ file's writable phase). Does not have any file distribution/partitioning/sharding across -Machi clusters: in a single Machi cluster, all files are replicated by -all servers in the cluster. The "cluster of clusters" concept is used +Machi chains: in a single Machi chain, all files are replicated by +all servers in the chain. The "random slicing" technique is used to distribute/partition/shard files across multiple Machi clusters. File distribution/partitioning/sharding is performed automatically by the HDFS "name node". - Machi requires no central "name node" for single cluster use. -Machi requires no central "name node" for "cluster of clusters" use + Machi requires no central "name node" for single chain use or +for multi-chain cluster use. Requires a single "namenode" server to maintain file system contents and file content mapping. (May be deployed with a "secondary namenode" to reduce unavailability when the primary namenode fails.) @@ -479,8 +475,8 @@ difficult to adapt to Machi's design goals: * Both protocols use quorum majority consensus, which requires a minimum of *2F + 1* working servers to tolerate *F* failures. For example, to tolerate 2 server failures, quorum majority protocols - require a minium of 5 servers. To tolerate the same number of - failures, Chain replication requires only 3 servers. + require a minimum of 5 servers. To tolerate the same number of + failures, Chain Replication requires a minimum of only 3 servers. * Machi's use of "humming consensus" to manage internal server metadata state would also (probably) require conversion to Paxos or Raft. (Or "outsourced" to a service such as ZooKeeper.) 
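The server-count arithmetic in the quorum-versus-chain comparison above can be made concrete with a trivial sketch (Python is used here purely for illustration; these helpers are not part of Machi):

```python
def quorum_majority_min_servers(f):
    # Quorum majority consensus (Paxos, Raft): a minimum of 2F + 1
    # working servers is required to tolerate F server failures.
    return 2 * f + 1

def chain_replication_min_servers(f):
    # Chain Replication: a minimum of F + 1 servers tolerates F failures.
    return f + 1
```

For example, to tolerate F=2 failures, quorum majority protocols need 5 servers while Chain Replication needs only 3, matching the comparison above.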
@@ -497,7 +493,17 @@ Humming consensus is described in the [Machi chain manager high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-chain-mgr.pdf). -### 3.3. Is it true that there's an allegory written to describe humming consensus? +### 3.3. Are there any presentations available about Humming Consensus + +Scott recently (November 2015) gave a presentation at the +[RICON 2015 conference](http://ricon.io) about one of the techniques +used by Machi; "Managing Chain Replication Metadata with +Humming Consensus" is available online now. +* [slides (PDF format)](http://ricon.io/speakers/slides/Scott_Fritchie_Ricon_2015.pdf) +* [video](https://www.youtube.com/watch?v=yR5kHL1bu1Q) + + +### 3.4. Is it true that there's an allegory written to describe Humming Consensus? Yes. In homage to Leslie Lamport's original paper about the Paxos protocol, "The Part-time Parliamant", there is an allegorical story @@ -508,8 +514,8 @@ The full story, full of wonder and mystery, is called There is also a [short followup blog posting](http://www.snookles.com/slf-blog/2015/03/20/on-humming-consensus-an-allegory-part-2/). - -### 3.4. How is Machi tested? + +### 3.5. How is Machi tested? While not formally proven yet, Machi's implementation of Chain Replication and of humming consensus have been extensively tested with @@ -538,16 +544,16 @@ All test code is available in the [./test](./test) subdirectory. Modules that use QuickCheck will use a file suffix of `_eqc`, for example, [./test/machi_ap_repair_eqc.erl](./test/machi_ap_repair_eqc.erl). - -### 3.5. Does Machi require shared disk storage? e.g. iSCSI, NBD (Network Block Device), Fibre Channel disks + +### 3.6. Does Machi require shared disk storage? e.g. iSCSI, NBD (Network Block Device), Fibre Channel disks No, Machi's design assumes that each Machi server is a fully independent hardware and assumes only standard local disks (Winchester and/or SSD style) with local-only interfaces (e.g. 
SATA, SCSI, PCI) in each machine.
-
-### 3.6. Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device?
+
+### 3.7. Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device?
 No. When used with servers with multiple disks, the intent is to
deploy multiple Machi servers per machine: one Machi server per disk.
@@ -565,10 +571,10 @@ deploy multiple Machi servers per machine: one Machi server per disk.
    placement relative to 12 servers is smaller than a placement
    problem of managing 264 separate disks (if each of 12 servers has
    22 disks).
-
-### 3.7. What language(s) is Machi written in?
+
+### 3.8. What language(s) is Machi written in?
-So far, Machi is written in 100% Erlang. Machi uses at least one
+So far, Machi is written in Erlang, mostly. Machi uses at least one
library, [ELevelDB](https://github.com/basho/eleveldb), that is
implemented both in C++ and in Erlang, using Erlang NIFs (Native
Interface Functions) to allow Erlang code to call C++ functions.
in C, Java, or other "gotta go fast fast FAST!!" programming language.
 We expect that the Chain Replication manager and other critical
"control plane" software will remain in Erlang.
-
-### 3.8. Does Machi use the Erlang/OTP network distribution system (aka "disterl")?
+
+### 3.9. Can Machi run on Windows? Can Machi run on 32-bit platforms?
+
+The ELevelDB NIF does not compile or run correctly on Erlang/OTP
+Windows platforms, nor does it compile correctly on 32-bit platforms.
+Machi should support all 64-bit UNIX-like platforms that are supported
+by Erlang/OTP and ELevelDB.
+
+
+### 3.10. Does Machi use the Erlang/OTP network distribution system (aka "disterl")?
 No, Machi doesn't use Erlang/OTP's built-in distributed message
passing system.
The code would be *much* simpler if we did use @@ -596,8 +610,8 @@ All wire protocols used by Machi are defined & implemented using [Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview). The definition file can be found at [./src/machi.proto](./src/machi.proto). - -### 3.9. Can I use HTTP to write/read stuff into/from Machi? + +### 3.11. Can I use HTTP to write/read stuff into/from Machi? Short answer: No, not yet. diff --git a/doc/README.md b/doc/README.md index 3ad424c..b8e1949 100644 --- a/doc/README.md +++ b/doc/README.md @@ -66,9 +66,9 @@ an introduction to the self-management algorithm proposed for Machi. Most material has been moved to the [high-level-chain-mgr.pdf](high-level-chain-mgr.pdf) document. -### cluster-of-clusters (directory) +### cluster (directory) -This directory contains the sketch of the "cluster of clusters" design +This directory contains the sketch of the cluster design strawman for partitioning/distributing/sharding files across a large -number of independent Machi clusters. +number of independent Machi chains. 
diff --git a/doc/cluster-of-clusters/migration-3to4.png b/doc/cluster-of-clusters/migration-3to4.png
deleted file mode 100644
index e7ec4177b7ab6f7802ea39b60f20d872854d8ba9..0000000000000000000000000000000000000000
GIT binary patch
[binary PNG patch data omitted]

diff --git a/doc/cluster-of-clusters/migration-4.png b/doc/cluster-of-clusters/migration-4.png
deleted file mode 100644
index 3e1414d4296292a582f8b9ee91bd2ac3016ab8e7..0000000000000000000000000000000000000000
GIT binary patch
[binary PNG patch data omitted]
[binary PNG patch data and the beginning of the doc/cluster-of-clusters/name-game-sketch.org diff omitted]
-   bin, where ~Map~ partitions
-   the unit interval into bins.
-
-Our adaptation is in step 1: we do not hash any strings. Instead, we
-store & use the unit interval point as-is, without using a hash
-function in this step. This number is called the "CoC locator".
-
-As described later in this doc, Machi file names are structured into
-several components. One component of the file name contains the "CoC
-locator"; we use the number as-is for step 2 above.
-
-* 3. A simple illustration
-
-We use a variation of the Random Slicing hash that we will call
-~rs_hash_with_float()~. The Erlang-style function type is shown
-below.
-
-#+BEGIN_SRC erlang
-%% type specs, Erlang-style
--spec rs_hash_with_float(float(), rs_hash:map()) -> rs_hash:cluster_id().
-#+END_SRC
-
-I'm borrowing an illustration from the HibariDB documentation here,
-but it fits my purposes quite well. (I am the original creator of that
-image, and also the use license is compatible.)
-
-#+CAPTION: Illustration of 'Map', using four Machi clusters
-
-[[./migration-4.png]]
-
-Assume that we have a random slicing map called ~Map~. This particular
-~Map~ maps the unit interval onto 4 Machi clusters:
-
-| Hash range  | Cluster ID |
-|-------------+------------|
-| 0.00 - 0.25 | Cluster1   |
-| 0.25 - 0.33 | Cluster4   |
-| 0.33 - 0.58 | Cluster2   |
-| 0.58 - 0.66 | Cluster4   |
-| 0.66 - 0.91 | Cluster3   |
-| 0.91 - 1.00 | Cluster4   |
-
-Assume that the system chooses a CoC locator of 0.05.
-According to ~Map~, the value of
-~rs_hash_with_float(0.05,Map) = Cluster1~.
-Similarly, ~rs_hash_with_float(0.26,Map) = Cluster4~.
-
-* 4. An additional assumption: clients will want some control over file location
-
-We will continue to use the 4-cluster diagram from the previous
-section.
-
-** Our new assumption: client control over initial file location
-
-The CoC management scheme may decide that files need to migrate to
-other clusters. The reason could be for storage load or I/O load
-balancing reasons.
It could be because a cluster is being -decommissioned by its owners. There are many legitimate reasons why a -file that is initially created on cluster ID X has been moved to -cluster ID Y. - -However, there are also legitimate reasons for why the client would want -control over the choice of Machi cluster when the data is first -written. The single biggest reason is load balancing. Assuming that -the client (or the CoC management layer acting on behalf of the CoC -client) knows the current utilization across the participating Machi -clusters, then it may be very helpful to send new append() requests to -under-utilized clusters. - -* 5. Use of the CoC namespace: name separation plus chain type - -Let us assume that the CoC framework provides several different types -of chains: - -| Chain length | CoC namespace | Mode | Comment | -|--------------+---------------+------+----------------------------------| -| 3 | normal | AP | Normal storage redundancy & cost | -| 2 | reduced | AP | Reduced cost storage | -| 1 | risky | AP | Really, really cheap storage | -| 9 | paranoid | AP | Safety-critical storage | -| 3 | sequential | CP | Strong consistency | -|--------------+---------------+------+----------------------------------| - -The client may want to choose the amount of redundancy that its -application requires: normal, reduced cost, or perhaps even a single -copy. The CoC namespace is used by the client to signal this -intention. - -Further, the CoC administrators may wish to use the namespace to -provide separate storage for different applications. Jane's -application may use the namespace "jane-normal" and Bob's app uses -"bob-reduced". The CoC administrators may definite separate groups of -chains on separate servers to serve these two applications. - -* 6. Floating point is not required ... it is merely convenient for explanation - -NOTE: Use of floating point terms is not required. 
For example, -integer arithmetic could be used, if using a sufficiently large -interval to create an even & smooth distribution of hashes across the -expected maximum number of clusters. - -For example, if the maximum CoC cluster size would be 4,000 individual -Machi clusters, then a minimum of 12 bits of integer space is required -to assign one integer per Machi cluster. However, for load balancing -purposes, a finer grain of (for example) 100 integers per Machi -cluster would permit file migration to move increments of -approximately 1% of single Machi cluster's storage capacity. A -minimum of 12+7=19 bits of hash space would be necessary to accommodate -these constraints. - -It is likely that Machi's final implementation will choose a 24 bit -integer to represent the CoC locator. - -* 7. Proposal: Break the opacity of Machi file names - -Machi assigns file names based on: - -~ClientSuppliedPrefix ++ "^" ++ SomeOpaqueFileNameSuffix~ - -What if the CoC client could peek inside of the opaque file name -suffix in order to look at the CoC location information that we might -code in the filename suffix? - -** The notation we use - -- ~T~ = the target CoC member/Cluster ID chosen by the CoC client at the time of ~append()~ -- ~p~ = file prefix, chosen by the CoC client. -- ~L~ = the CoC locator -- ~N~ = the CoC namespace -- ~u~ = the Machi file server unique opaque file name suffix, e.g. a GUID string -- ~F~ = a Machi file name, i.e., ~p^L^N^u~ - -** The details: CoC file write - -1. CoC client chooses ~p~, ~T~, and ~N~ (i.e., the file prefix, target - cluster, and target cluster namespace) -2. CoC client knows the CoC ~Map~ for namespace ~N~. -3. CoC client choose some CoC locator value ~L~ such that - ~rs_hash_with_float(L,Map) = T~ (see below). -4. CoC client sends its request to cluster - ~T~: ~append_chunk(p,L,N,...) -> {ok,p^L^N^u,ByteOffset}~ -5. CoC stores/uses the file name ~F = p^L^N^u~. - -** The details: CoC file read - -1. 
CoC client knows the file name ~F~ and parses it to find - the values of ~L~ and ~N~ (recall, ~F = p^L^N^u~). -2. CoC client knows the CoC ~Map~ for type ~N~. -3. CoC calculates ~rs_hash_with_float(L,Map) = T~ -4. CoC client sends request to cluster ~T~: ~read_chunk(F,...) ->~ ... success! - -** The details: calculating 'L' (the CoC locator) to match a desired target cluster - -1. We know ~Map~, the current CoC mapping for a CoC namespace ~N~. -2. We look inside of ~Map~, and we find all of the unit interval ranges - that map to our desired target cluster ~T~. Let's call this list - ~MapList = [Range1=(start,end],Range2=(start,end],...]~. -3. In our example, ~T=Cluster2~. The example ~Map~ contains a single - unit interval range for ~Cluster2~, ~[(0.33,0.58]]~. -4. Choose a uniformly random number ~r~ on the unit interval. -5. Calculate locator ~L~ by mapping ~r~ onto the concatenation - of the CoC hash space range intervals in ~MapList~. For example, - if ~r=0.5~, then ~L = 0.33 + 0.5*(0.58-0.33) = 0.455~, which is - exactly in the middle of the ~(0.33,0.58]~ interval. - -** A bit more about the CoC locator's meaning and use - -- If two files were written using exactly the same CoC locator and the - same CoC namespace, then the client is indicating that it wishes - that the two files be stored in the same chain. -- If two files have a different CoC locator, then the client has - absolutely no expectation of where the two files will be stored - relative to each other. - -Given the items above, then some consequences are: - -- If the client doesn't care about CoC placement, then picking a - random number is fine. Always choosing a different locator ~L~ for - each append will scatter data across the CoC as widely as possible. -- If the client believes that some physical locality is good, then the - client should reuse the same locator ~L~ for a batch of appends to - the same prefix ~p~ and namespace ~N~. 
We have no recommendations - for the batch size, yet; perhaps 10-1,000 might be a good start for - experiments? - -When the client choose CoC namespace ~N~ and CoC locator ~L~ (using -random number or target cluster technique), the client uses ~N~'s CoC -map to find the CoC target cluster, ~T~. The client has also chosen -the file prefix ~p~. The append op sent to cluster ~T~ would look -like: - -~append_chunk(N="reduced",L=0.25,p="myprefix",<<900-data-bytes>>,<>,...)~ - -A successful result would yield a chunk position: - -~{offset=883293,size=900,file="myprefix^reduced^0.25^OpaqueSuffix"}~ - -** A bit more about the CoC namespaces's meaning and use - -- The CoC framework will provide means of creating and managing - chains of different types, e.g., chain length, consistency mode. -- The CoC framework will manage the mapping of CoC namespace names to - the chains in the system. -- The CoC framework will provide a query service to map a CoC - namespace name to a Coc map, - e.g. ~coc_latest_map("reduced") -> Map{generation=7,...}~. - -For use by Riak CS, for example, we'd likely start with the following -namespaces ... working our way down the list as we add new features -and/or re-implement existing CS features. - -- "standard" = Chain length = 3, eventually consistency mode -- "reduced" = Chain length = 2, eventually consistency mode. -- "stanchion7" = Chain length = 7, strong consistency mode. Perhaps - use this namespace for the metadata required to re-implement the - operations that are performed by today's Stanchion application. - -* 8. File migration (a.k.a. rebalancing/reparitioning/resharding/redistribution) - -** What is "migration"? - -This section describes Machi's file migration. Other storage systems -call this process as "rebalancing", "repartitioning", "resharding" or -"redistribution". -For Riak Core applications, it is called "handoff" and "ring resizing" -(depending on the context). 
-See also the [[http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer][Hadoop file balancer]] for another example of a data -migration process. - -As discussed in section 5, the client can have good reason for wanting -to have some control of the initial location of the file within the -cluster. However, the cluster manager has an ongoing interest in -balancing resources throughout the lifetime of the file. Disks will -get full, hardware will change, read workload will fluctuate, -etc etc. - -This document uses the word "migration" to describe moving data from -one Machi chain to another within a CoC system. - -A simple variation of the Random Slicing hash algorithm can easily -accommodate Machi's need to migrate files without interfering with -availability. Machi's migration task is much simpler due to the -immutable nature of Machi file data. - -** Change to Random Slicing - -The map used by the Random Slicing hash algorithm needs a few simple -changes to make file migration straightforward. - -- Add a "generation number", a strictly increasing number (similar to - a Machi cluster's "epoch number") that reflects the history of - changes made to the Random Slicing map -- Use a list of Random Slicing maps instead of a single map, one map - per chance that files may not have been migrated yet out of - that map. 
- -As an example: - -#+CAPTION: Illustration of 'Map', using four Machi clusters - -[[./migration-3to4.png]] - -And the new Random Slicing map for some CoC namespace ~N~ might look -like this: - -| Generation number / Namespace | 7 / reduced | -|-------------------------------+-------------| -| SubMap | 1 | -|-------------------------------+-------------| -| Hash range | Cluster ID | -|-------------------------------+-------------| -| 0.00 - 0.33 | Cluster1 | -| 0.33 - 0.66 | Cluster2 | -| 0.66 - 1.00 | Cluster3 | -|-------------------------------+-------------| -| SubMap | 2 | -|-------------------------------+-------------| -| Hash range | Cluster ID | -|-------------------------------+-------------| -| 0.00 - 0.25 | Cluster1 | -| 0.25 - 0.33 | Cluster4 | -| 0.33 - 0.58 | Cluster2 | -| 0.58 - 0.66 | Cluster4 | -| 0.66 - 0.91 | Cluster3 | -| 0.91 - 1.00 | Cluster4 | - -When a new Random Slicing map contains a single submap, then its use -is identical to the original Random Slicing algorithm. If the map -contains multiple submaps, then the access rules change a bit: - -- Write operations always go to the newest/largest submap. -- Read operations attempt to read from all unique submaps. - - Skip searching submaps that refer to the same cluster ID. - - In this example, unit interval value 0.10 is mapped to Cluster1 - by both submaps. - - Read from newest/largest submap to oldest/smallest submap. - - If not found in any submap, search a second time (to handle races - with file copying between submaps). - - If the requested data is found, optionally copy it directly to the - newest submap. (This is a variation of read repair (RR). RR here - accelerates the migration process and can reduce the number of - operations required to query servers in multiple submaps). - -The cluster-of-clusters manager is responsible for: - -- Managing the various generations of the CoC Random Slicing maps for - all namespaces. -- Distributing namespace maps to CoC clients. 
-- Managing the processes that are responsible for copying "cold" data, - i.e., files data that is not regularly accessed, to its new submap - location. -- When migration of a file to its new cluster is confirmed successful, - delete it from the old cluster. - -In example map #7, the CoC manager will copy files with unit interval -assignments in ~(0.25,0.33]~, ~(0.58,0.66]~, and ~(0.91,1.00]~ from their -old locations in cluster IDs Cluster1/2/3 to their new cluster, -Cluster4. When the CoC manager is satisfied that all such files have -been copied to Cluster4, then the CoC manager can create and -distribute a new map, such as: - -| Generation number / Namespace | 8 / reduced | -|-------------------------------+-------------| -| SubMap | 1 | -|-------------------------------+-------------| -| Hash range | Cluster ID | -|-------------------------------+-------------| -| 0.00 - 0.25 | Cluster1 | -| 0.25 - 0.33 | Cluster4 | -| 0.33 - 0.58 | Cluster2 | -| 0.58 - 0.66 | Cluster4 | -| 0.66 - 0.91 | Cluster3 | -| 0.91 - 1.00 | Cluster4 | - -The HibariDB system performs data migrations in almost exactly this -manner. However, one important -limitation of HibariDB is not being able to -perform more than one migration at a time. HibariDB's data is -mutable, and mutation causes many problems already when migrating data -across two submaps; three or more submaps was too complex to implement -quickly. - -Fortunately for Machi, its file data is immutable and therefore can -easily manage many migrations in parallel, i.e., its submap list may -be several maps long, each one for an in-progress file migration. - -* 9. Other considerations for FLU/sequencer implementations - -** Append to existing file when possible - -In the earliest Machi FLU implementation, it was impossible to append -to the same file after ~30 seconds. For example: - -- Client: ~append(prefix="foo",...) -> {ok,"foo^suffix1",Offset1}~ -- Client: ~append(prefix="foo",...) 
-> {ok,"foo^suffix1",Offset2}~ -- Client: ~append(prefix="foo",...) -> {ok,"foo^suffix1",Offset3}~ -- Client: sleep 40 seconds -- Server: after 30 seconds idle time, stop Erlang server process for - the ~"foo^suffix1"~ file -- Client: ...wakes up... -- Client: ~append(prefix="foo",...) -> {ok,"foo^suffix2",Offset4}~ - -Our ideal append behavior is to always append to the same file. Why? -It would be nice if Machi didn't create zillions of tiny files if the -client appends to some prefix very infrequently. In general, it is -better to create fewer & bigger files by re-using a Machi file name -when possible. - -The sequencer should always assign new offsets to the latest/newest -file for any prefix, as long as all prerequisites are also true, - -- The epoch has not changed. (In AP mode, epoch change -> mandatory file name suffix change.) -- The latest file for prefix ~p~ is smaller than maximum file size for a FLU's configuration. - -* 10. Acknowledgments - -The source for the "migration-4.png" and "migration-3to4.png" images -come from the [[http://hibari.github.io/hibari-doc/images/migration-3to4.png][HibariDB documentation]]. 
- diff --git a/doc/cluster-of-clusters/migration-3to4.fig b/doc/cluster/migration-3to4.fig similarity index 85% rename from doc/cluster-of-clusters/migration-3to4.fig rename to doc/cluster/migration-3to4.fig index eadf105..0faad27 100644 --- a/doc/cluster-of-clusters/migration-3to4.fig +++ b/doc/cluster/migration-3to4.fig @@ -88,16 +88,16 @@ Single 4 0 0 50 -1 2 14 0.0000 4 180 495 4425 3525 ~8%\001 4 0 0 50 -1 2 14 0.0000 4 240 1710 5025 3525 ~25% total keys\001 4 0 0 50 -1 2 14 0.0000 4 180 495 6825 3525 ~8%\001 -4 0 0 50 -1 2 24 0.0000 4 270 1485 600 600 Cluster1\001 -4 0 0 50 -1 2 24 0.0000 4 270 1485 3000 600 Cluster2\001 -4 0 0 50 -1 2 24 0.0000 4 270 1485 5400 600 Cluster3\001 -4 0 0 50 -1 2 24 0.0000 4 270 1485 300 2850 Cluster1\001 -4 0 0 50 -1 2 24 0.0000 4 270 1485 2700 2850 Cluster2\001 -4 0 0 50 -1 2 24 0.0000 4 270 1485 5175 2850 Cluster3\001 -4 0 0 50 -1 2 24 0.0000 4 270 405 2100 2625 Cl\001 -4 0 0 50 -1 2 24 0.0000 4 270 405 6900 2625 Cl\001 4 0 0 50 -1 2 24 0.0000 4 270 195 2175 3075 4\001 4 0 0 50 -1 2 24 0.0000 4 270 195 4575 3075 4\001 4 0 0 50 -1 2 24 0.0000 4 270 195 6975 3075 4\001 -4 0 0 50 -1 2 24 0.0000 4 270 405 4500 2625 Cl\001 -4 0 0 50 -1 2 18 0.0000 4 240 3990 1200 4875 CoC locator, on the unit interval\001 +4 0 0 50 -1 2 24 0.0000 4 270 1245 600 600 Chain1\001 +4 0 0 50 -1 2 24 0.0000 4 270 1245 3000 600 Chain2\001 +4 0 0 50 -1 2 24 0.0000 4 270 1245 5400 600 Chain3\001 +4 0 0 50 -1 2 24 0.0000 4 270 285 2100 2625 C\001 +4 0 0 50 -1 2 24 0.0000 4 270 285 4500 2625 C\001 +4 0 0 50 -1 2 24 0.0000 4 270 285 6900 2625 C\001 +4 0 0 50 -1 2 24 0.0000 4 270 1245 525 2850 Chain1\001 +4 0 0 50 -1 2 24 0.0000 4 270 1245 2925 2850 Chain2\001 +4 0 0 50 -1 2 24 0.0000 4 270 1245 5325 2850 Chain3\001 +4 0 0 50 -1 2 18 0.0000 4 240 4350 1350 4875 Cluster locator, on the unit interval\001 diff --git a/doc/cluster/migration-3to4.png b/doc/cluster/migration-3to4.png new file mode 100644 index 
0000000000000000000000000000000000000000..cbef7e922eb6c75ee158a843cf57bf8b5218a604 GIT binary patch literal 7756 zcmeI1cTiK?x5p1i6Qm2GA{}WWHoAZ)5DWs+L^??CE%bUrSBfCLO78|jl@>ve8tEmp z&>;u`sZtVNy!XAC_nUX;H?Pe5?>Cc7W_I>DXPv#*THnuFJL;txh??>SB>(`_&y^K4 z0f6`x01!!15EDk4CrPOZKbKvU_1ppAirVilQIJa; za3rW>xic@{AlaOqHHl@5lDd=Q1nhTpNi{i4q4tl&?4s)T**qsPI*iBR{GL zk=)Lu>S;_8)IG!-re}%fu%dD|BSGrlqj@E%Q;ohOn-f=h%OR?gd!sL+r<|+Vj}8$O z$_RV=(XwVATyr7aqC0|Uh@TRO|%}LoM zCUw%p=h1Qy;I=gnAfe@F?N{C8sNY0R-uTXnb= zJgArokGH9Bu?Y(PbW9H+aTS$rHcNDtAgwhocD;x0Y3r3a+|2kw!N|4Y8=|}swzU;* zOLPZqXINGaA`OH<>xR^DOi$+4vFr_4r+i~u?XeMM=Po5jk77L72uCckWx&)f>^Ww1 z=kl8=*Xmiz=O+o`?^-8`9F2o4)vDxz9LDNBU5!aZ9Qf8rf3BBFOz#wCjheX5UWX&n z=m*#i?+!fLoWO=#hl-SaJva&-31Gwo%$Ql78X44ZT7p* z)_KVy?$_ciFI(o``RHrN9XEjjc%X;YK3JezX{r?|#SINqkY2({+`zJR-ZPGm>9`gc zcIxb#e!t9QwCqZHjEZp0nGjMeYyQ?nqMRKBCf^u!uyb>Zt$k)Gsm_^BrFq`-QtBct zWThdrb#``0HFU}Jg|xlUxOr=rwR75_mCHaT=mC_@NmR~? zsLkPFIR^~*a;1aY+q3?(wIi-xbaCX?vgtnwnI&0sfaTLcPkWyOdbV8awl_lKR0z0d zc+@HoKN82wH-sCPAy_ccFS{AkYhHOmB61P_PqI`iN<8h7b?a? 
z!KI-h@BEwUUBKP@{3?Pw(tJDxNZTsGh7XU-$6i}cB%$C^c??13&|`Uh5JBQ6;^fAeiK-tE;-lm0fag!CtDkF;j{YauWea|zE&4j zatR;>j^4LcJXnl{7dN&M1I-UowOMgYz?359G(wb+jHbeo3JhkfSN?O#N{0j}?y042 zaiQ4hV(wb(Prl&zDD8Wr?u64_VVmONsSsir>O4uHVi`$_T zEG6wCG_Wv*ZIQE5mp726M72BgV(8*x<-#-^HYFL^fOlbkIorHe(wP`MeWj$&>DO(G z?=Qw=KV#n^OR^O9PrTS11SlYwXJh1yI!UnU$g2aCq)Pfq--Xh-A!>uy{ zqcoRVm{FN_ zpNJ}Qfv71&ndE0ri)+=vxiXZ@k(~bV6A)b#M|OIOZU;!q!eDbf&IN24_+ldZqY6}+ zq|&G9CWAaAHDIo7tSIOL7l=Ubh~L_kv0d>x-qDm4ER^SgiuHOu(pH#tugc{4G#-l!m?K2wXJ=iSbAFI9Ciq!XftjB_xwgRWr173kbP z1Um1OA$l3U2iNfsf!8^qMUXARxVkGLlK(4JSS<@~U?Ud@ADdZe3IcHi{l~0FJ6`b& zDE1lk8Ek?odpHG~>2tcMh39HmSPy&n{ThLNhdfZ|xFQLX3wka6)W50-O% zpyi_X?2l66%Nl9A8L(SZ5P7iD_b6q2kHCz*m(~AWRxto(~SBk#9BE*QAEYi ze*F7|ca>{kGve%~=MY{7BqLXPT_%_$=dKEg$Msoig1HWJv=e+%5yAq8sA|@~Hg%zN zYo7KMCIi&b)?$%mugCwDm7qc(GFKm>*lWkSTVst?G2riUaHFk|vpFSb2~#@*mqh!} z-0lK}@pq@7E}Z10)2N~|L`?&2n7h(3jpbyr5%W$G*M`1$u%dhTK3#k7ZNiH0-Yd$H zPRpLF5WIX!*M5mk;j!gGQ-D-&;F4Hm-<6yK@@=Cr`8{T{C0FLyjT2W5o(xc zaHj?l)gSZ8j7ws=c5YYy&p7%UcI@47-*ZlG{lMez%-}_N9YjL=gbvLCn@VU?yq`|9 z{M*PSmFSS`ns!K(z`aNcwqTAnka2b1NcxXaGZOm8fV{T6@qPIzM3HKS;_r*=-v33Z zKV+I%dOjz}6PA@kgZ^)6c4GO)Z>J-o$m9BpWi1~*&b9fC10>Z>^X`1SgXA%5!<%Vh zImlgz&Zg1YZnG|1Qd5g4j4}^QLSAnf`8k$FP~pMb}(wfe$uUO`>8%5Fi+Ib$%igNX*WE7#E1Uz+SMwybahkx_}WxF z?yYmhV&WDW1I6!HEx+@;yy45Pz}1+XMWIBFx<%XgG-r0_y4Y!3(f(W5L~b_~jnXq! 
ziZ|%=sOI-s6h$?Tkdl~0thnE!;R@q(|Jk6(Q&fA-sLqNfnXyP zhonVg7mvXr5@A{$GrWeP?E}`Qr{K~1Ug5J2`gAoQnWoP>m2=b$Gshqc^w#b|hqPYB zN>c1xM!}nyRgMI0o;#r3$L>jEUCLy3*n_Q`?IVHWCz@#!$9uY4 z`TZK|lkD^7j=L!a53%G_-T0Bs^4ed!pQ3Eu)H2B3ml6>YHb7vwropS8dzTjt1C>wp zm^#-Uf4U)0RcD=h#c773h;fuSu82*(t7E2x$#Op6TY^YVR&nh|i#73nv0`=8mG>*c zmzOd~Q+C`Gc5$yJ$TaRl4_P?|=nmrtR;lsbeI9;^EVld`N+e#z4Dx8Lis+IQ{LR>` zP;S$}Hxgy9Fn82ye(rF^S+8$h3LqrntK7ffLG>!f*a-Yv4b}ut5Q!Ivr1{^=KW5jw z+COFP7nx5N>y3lIGPlZ~B_Wo)rVyWRsimIqgqeQ!7QKHQfl$9HMuGPd>v!_5m9(Jw zeY%nzI8thb4>+OCH?OvDg%p7ed}?xKn^F>gqume7xGi`1yDAZ05J3gD^hde_K5G!7 zb6{V%1tptwi@E$jxTC$vW}crGxB>Il8b3v&pTKbfO%pNok#wWYgr#?!YE^&t?P zS`Ajw(bQi>atz-%UqY`Wxgsvy2m(}8hL|b!b6mXI)evamTzS{SDez*%geW+P4A&B5 zCXz-lxhCmb1({>H-?IRf`vim*>Xl|57G+q3FIuJ+XQj5O(f)?dUKs2l-?M{U0{Buv zOr8eSI^cA=57UG;N&ljt z%f$-XnBR?K>a|s_7gl;0$``$(E(I~q%=Roq{%Gh{+8)5Qf^;T!Cbg21HCg zR{=?m7_y(rX;N;6!3h`}b;hz3iiuV^%4h;8ytbAjv|h}v?;QHRXh6qEMotq_i;_2% zhtQ3}C3Q;}P19@4+@hbQAOzpvdi~!tsfx~j7Xl(iXVJi){GYZSVw7Rd*LaRW>3c7S z-AylS>L(9;fJ^GPC;2ukOSFvd6WXcRpX;K0e5%qE@HutcxW{?N4P%U{f=b-a>; zbvo&TLX)q^qw_6^k8pCZs%~On-kjBe8ve^~jPpvKp4497qgpYRtD-9UrA^Z1wW+$; z3YynS86@?)CdOuQq(F%&tHX7-6T<8lx$=06Z0iAYXn0;YI-h1-qp;~NO#OdSs{GSd zoDnVo0D`6d=6)FTu3XOG`Y~w#O`xUP3HbYvhAk@Xt9qEwV;z&EqIa_sSDz*9WKS43 z_3$G1_HEJao$~6~3tW=PxIdxJI5x0zK}xO)Nq!v#GvX(WW}h^dkFC-~9zW`OKjDSQ z)eDqmf6nAOAHo3l8$iZ{-4E)#zUR#-{z*Eab>p!E3VV{s^dgy zZaZrxNy6+>)Q$q`);@=VrTuHO_m!1A@_gXroi#spHE2UZvJJPJT3fyexz8uJW}q9! 
zNWySGsm2Xbz(O*{$C1oPcxZ?*>2XNqwfd)?=A43#p(9Lwb}Vt{%Nr>W489F7h>lEh z-d=6^bj1g~Y+1uvyI1Kt^kdiAkJ;v30vK1XW}#ubX_Gv(7q{#dkkYm@)1r#6x4YsI zblgYuQ)~|D)@e8RMILr(QgLJ$Lb zsM46_(CVQ*?WlTR+s+SjWgdx42$@&8SY^t23f@176Lc5Tbz|v0#szF%P|w&mdP6!} z>WwJNYxQg!M7!R%=oJNl7)6|uSr3|+Kvo@$nKoeC?`t5CV1UWViQ1!e>ZDhb6(a}h z4k&PQr9e232aB5Rk#20o7YJX>9tnA?>||C50E90!$@~xdxyQb%IUyD&i^`9qOL~== zEZH9xN?bfuvSSf5nlt$sXOAQ6h-#V{(I2m!^M)1IV%-X~VB@@IaKo>!vGXmOQsS)D zU$;Dh@u>0vF_p7}!rQkT2iLB3udU;Ihz9`u)(mfXRYE% zUyW#v(X&!~ew_|}=8p9$U8bFi;Bo`Z8he)9t+{~I8)~@&pzHf%GI+VoCXM6!|4KUKs zk;TblQU{TOM4Aq*jm$olk*CVaV;@Qe1kkU#i{rkJ4mL`QO&=&BuElTQjSCCPM?QQF zb7ChSowJ7Pa>HxF_7RUu#;?J0(uyu^1Q=kdc3*xnxbSSbFveI)`cM(IsO*E?`7k}= zi_i01_%eNNPrYyp5wkmSm~=zUd+)cVi(ds)Q`VStTMbb%owKt!LJv#Wmw9BEdMD5( z-vbWDjoG@I%3a-J)5JdU&%V^(-eV?Ka6%~QBHAC}EwGOA|18l zoe{*jF{iO`+e1p{qry#&{ zMKy&_Pfejw|KH~*Br*SfjxyeU8Q`G%a?j%;Ymk=`%ENdtDf5sD;9Bc{rO}-!8W9&V$;mZd&Xr9h6jCS|ri8ik70KtrDc>5~LDS5<|7SwQH=QNsVc@ zqD81g5Cm`Sn1ZxgVy1|cn1hHIzDMl${mysJ`M&G?aa({I#BqW)@bN-H>fxkN=%02)el4vWFs}hB6N5?nY9$mLZOGxao*!-0Y z4~kI%2CzKu-1on4epAl{>7(i7igb3<4EA{hjFyl92kQZkK5qU_%E3O~zG%H*$Z_!q zJ>Yq>8FXA(Jj5RXIey>#ma-ud<)*Byc~SG+ap*2(Wo0nR)m_i(n(^P$0R%bz(BJ=& z9tac^6r>q+K@*Ac0BPy!>VnRl2c18!0gTW^8(bD@9#{OzaDqkk{ti~c(*AU;sA(<6|U<~h*+ZIZvI`+vIkUuJFg zY)unKwY3-+NR6H$%FW3iiMoSCdP7Za-BdPth(!Buxv8wHsigx3{qLjx(_8SSSv^aX zryCIOW>iouFzEki`+GbXv>EdMl7g-Lid%sMLw5ly{i|5eT}UIAnuLTx#EonJxDzZn zl@X2Ve}B*@3AVlgvgqjZcMq)U|9R(vv}Z=1^_3di*qHc{xz$y!fM&{w)9=7EtKnBEGwRL`Go|5bK}9z-C`T8-)h~2>5k<8m%iFzWL;)*(nr|? 
zvx8Dh&Ij24cr$$+lNtAB3HVps)@TI0^mO>t_x8v!g#R?(aU|TmJYlQK7A4 zTP-|!IEuoEcY^!Bu`8_z{BhiPe7A#{xA%bAWAx&{0FlnmZv&=xXQZDBW4`?wSbomv ziC?S)*DLnG3Q4*U4710>Di_Wf82FQ5BcHrg7ajH6I&iq~rM4m{(B~19ynNuXgtYQG zh@>WS99ACw6Rg_M`X1CFO4>!OmezdjzLL>U9EIVAxg+@FCvo)gszCHzon)=8%tQg1 zxqafr|Il|bt`0PQv#%<`jBCHQFTKgEa>U!u)*|fTDJ7sr64GraYuzx=5fhItzItE$ z&vj#gGDG%LpCb8E5hM@9dxLk?B|A5|js+HLc31XG6}6sWVeV;iE~|%m@>6n;5WH2b zGwkBQnjI7A)R{zj?=iX!E686lo>URv_O=`o1I{!UL(ux64GKQ=JNYx#qD?bjMEnpF z?p#-%=*Q70n7j&seSlA0^@T^#PAH6#7tE7&Cb2%(164D;-Q{NnOjzHe?HJ+y^=;HU z6dqNs`;|XA#i^h}2{ko08G22z$mO&kae7h@HzLY3J?l`^MOGhoy|7Y#2lu0-L4C=O z(=9(t_FvZi{A=@}*zz-xGaGs7RPb<7>P2^4x0~=GqNwH`-K8t>u*7oUzR@RDh{FtTnZd&PFhA6p+4-3?Y0TD9jK^YBQL~~eAE_v z2P>LXrtZ|)ks!K|&VG?~zhjasCAJOaWAHXelrP9zMFf;~9>SewRG{N;<38kcLSmh{ zd)P~|*s>bz{^pu{9tekQboNn_RvTI0E1tUIyS9y}5&6SgS^Jiz(%YYlB85Rx7cw>4 zIt0)-HH?Uc<^M`wOXAq>f2uS={khWTn>CrG)2+!nRL4oopn42D8otTR^-W{lETBMp zUFmHvnX0?tbkpu3u-iJ2-*tu$u|XXUZ0$pk;J)PQwS z8|!)_zsE;AC@qu!UAtuESv(g(x=n49XAV4^N}7|G zQpq4?N4N8;p6{_;Ixrg+yQ+;C?J1on&XX5@q6Ca6j=YHNZ{|#eg=v6cXj%!!a>m zU+Nx1UoADd9okz}G}l~Zm-+N?8X|9yXauqAv34&p;8pL)!@!f341DQlvzGS+e%SeT zw`h_g_XOqDr+~~4ZzHl{BZ}jfoXx3g95fl&ZJGY1_$9lMfZUKA>>#!f6>rLG8Ed<6 zh=z{EO$ddo8Jm4?q-~J)7&OF=i}hM^Dti|ltV$pD=$fdf!JqLd!`F?a~-O9(JQ>S=He~j<(3|m^S9LNn>(u4l)5@33F+J z)376rHEc`aQcSZ$bOIu;JF-7fbC|FY1#f@lWk)IMC?1oexmq_@8s$KIV%5Vr0|!mC zJ_WsJTBSLIGap|HHS-r%B4@r^$2c0WnnJ)*xRo>T!MiiUTmPk@1J8eZ`8!S&?-Y8W zr9tf@=VL5gB~Kv&IQf6q1Q$KeVjUuI_-&2f^`mAAvC4 zRdZtfy-#rpM0&WgB-pcnk(fD{Op>`eE$#1kv7(LJjxJ2!#g~`Nkc&kG!dQohXA!Om zhMV*)A-%3rd(tfK(|gCR0D&Ig+$9e6bsDf- z-&z({jLIJaXxqyhl48z8%MOv#E5q;X7bCd?L*y!n^QXl*e&7q_7%hmO_)VN~DFDe` zQQIcH+1X0iOk`WXq8l!%`58?>H)ylnT8`B^?Sq)3szZUsTC3i!^UpobG0EO{x`(_Q8C-csi^A@|=0>PiDX4dK3?u zNR3e(baAB>srYu8su_@8*`P>*7#Y~y#j#-DqyU>D#-I1{C%UaD&L%OFGkTmr+i1EM z;uxeFm8Zja+2>;@hiVGRTq}WM{T1`HxR25bDzQ@qcodRkJEv0G@Q8*+(R!J1?TUC; zjjRU25!-3r9}1<3O_m7GD|=p6Lu2kydGx-rrV8rfxWRJ=^(!k~*Pbg>RE-a`R@96& z%kqK|8T=;bXtR2X==bVH)$rKwx%U%K6P!&I5ky3W+kEeNY=M2`FwMNtE0A1+im@Bc 
zfaAOZ;Y4-I#^EUhhqXlBOP_S>)jN}89Iu+K`At{V<9MQ3x)ME9*?K(ZML{koRYZ{;F8aUY4)3EIwha$w&rwLfLxE(V%ZX=FmRP z-*tY=?{2ET7`BmTMYemn)AjY}6H{N`ELy`O;%Bp5Osfe>Fq49eV9UZwb>n9&b>yks zNm;D$2D>6$XcTi;iLqZ4{6USV9@l_}*5e7HGmfnw9(C!+v;ubgVv+Fr^g4v!NV!yR zSDjubxHsRCi3m%aA;AK!?L`zPTQBAr3o8ajOhGl&FQII&=3=)QE%2bNHf~>QMx?7p zjBfmr=HX0h5ew?1VlZyHu88m~yzlh&po#7w>wxCLMMW4^$ng`neUMK0Cd)O)M9_A0 zpLyOHl`-|oB+$;Ivc00AbFFL?wecFn!CPQVs|%xPAr~v4B9UG7{O*hp!!iD6T8ocM zA;H-VJ`StTuJ~lYPH@3ad#h)NCO_C#D)Tq)fulG;bh&`6O5;* z=L|n9a*fh(h)HbH5S$=5BMDw;G=IcRL8se7ugpeqyd*HKr>ddT#k+~K zt_5$_$e2$Kagaj`ANq$b!C~r|&d=z>&sw-|(J8(4_8>-cbvt)6rfc%(C-=FZ_&uRs ze)y2SjRM-!j;c#GlNuoi+=nST-ed`~*l3c~(B}6^jY$qY9oJjZcF4qKiPJl*oAu5`x1SY_EbLCOlw=0h7FW9Q}9zo#HX`M8$FZbt}!ufwMgBguy z%lBrl&XcCHO9gf}yYov^`%~@xE>?ub9n-PJPnr4k<>(l!4W7Ll9D_B{N;QM4KQdQb zhh3^UtiO;)VU#kAsa}J2><|Q27zuC9IZRPAIMen8^2O2R8w~f^PA5?i_kFNpL$UY0 zq4rYe9g0mx52aIE^pfB`3jYdX@V>l436`0qgvwRCvDP2AXFjs+SF(?oRm>WAZDfP< zDDQfYTa$XqI;r=y>8o``?$*)WDh6i)&4_{FWc!}iKcV$~Zee2+D}7oJ3Lly%HJ!dn z{9xYQ=8UPj&QSZJ%Mh0(K^HBvs*SoZ>Ofi4`&5drG^xf@a#++x3z|!Etns*OzSe@O zgSn=GQvW^Lt4oupkTOC>{l}ot0vx=1(1mMvJP=O{MNT`4($(1q=jOwbv(yM-%-6iz zdpcC<#!YUy#rI(5nl<2x(3TOB&FP@h{V;(aeRv^eb2fHfkKLXW6is3uUZ-NQqqI2U z#%h?`+k*rt4_GHxwlOl$tX>AIcXcnU3&P<3fxhGH7X+VagFDZwbz8IUE_`Nna?3Q4 z6VuYTyGeiEsrF2Hkz@gA3*XLr^yaBML zMf<7d-^EA-pn-s4MALu_RCf?*Fd{*hB_gzT`~^P?ICO;mW=k0yd3|LOSyw%W+}PWf+n+tYaqtQ}D68u1m4%ovhv%Z% zYhZMw3LX3gd*nqv@ofK(`CT`5L=a?$ggBw(15usKa-i+}%YAH>T~EY*rtJ)=Jc}%h zTsSE^l=e1BQt4=B*s(pyj}AOM@sbG9WEwxZkjBYjRmpip*6xvBP2mgoP_0LTq`uTK zn2BoPk2JYL)qbAq>us-@Wo# zdfgEZerG6nF}x)ujitgtOa+AU;$G?Hu^&JG(6?$YIyU?0*_G`PJ(PQEHF2|-PV4`$ zaF&mJTy4=_FzD7^gu&j;Pgd+~Z23*FKG%eb%e<|+RmAH+5sjDMZ52@$C|lc?lX+W3 zQ~+erzz*WKts-s*%BH(=?bTKhRRE;$JxP8`)wWac&0=n*K=%1!rDlY**v9mX}zFR^Ax-LD-^3G1N%DZ$BPQnVi#X-`J#O8s3g>OAN%0n75)N*6xS&;5h1SkcL4qO zeIcV!P$<->CTy*FVxodk#K=jNV8oMF`#|S9S5l{buW|=LT=XS9`szB?z_7UQfwtVR zmfjr;^Zp{M=Y2`3p{42!Z2sBTi?e?9+BJvMK*Y101Ck@8MLTQPvQf4(1o1hOnhppHhuV(SQ*)4Wh+@xHfpKNlGNZ0bKEGo+cCCxdRl(q$ 
zU%A32{^G)=Oy(wO67kJ9_V?ntnCan^kbjP!aY#n&N6q{LD_QSUdL8-PlkxW1XSSiA zqGG`~`Ht+!?#UiG+oD@xnoR!Tw8^@aq{n>q#gncYIz*Wi2lEFX3?M8R$u}NO`Nx1o zEe4;d8$VjAY4B%};>;AE3U!52tIHiqg*aQ&FYVXLJOLNyj;DmHer8&(&1sFe6&)K=y~ZMT!dRQG_m3J6v4Yc9+OflZ^_n2G&zj@`Yy<}wbN%n4>Y&yu=5zU>96L1X_e3z z4V}Tw;vN;B2Ed6W>it+NzGwk^`OVFtnHpPs$93`M1#Xi= z%jN%{9pURC?Y}mLBhIz|Kpxf{w>ZQ7_7P*y!~Jk$=lZkS7|3jDS$Yb69WJr2Viscj z0(N_roolu`^&V22LBCU+?CA<` z!-SjsRlq1eZ$o;Xy6PE9nnM4qOW;H}%tsdbH=2=u`Wn8QvlwCxy)}zpgSru^+_wSY z+8dc!{i*pU)k;|>YU7G+5$^MVhOOC;~wa9ITPIWRvWbk{{Fv&tltr<36 zHv5^+jT|yUkATSJn-+qzU+ad){YK3G!XLGtE^{RW=*IE1Sya=)^unP26x+bw~>t@hfLW z`}){=;ww~T;XA=)ErEUeE9dKKMDP57nfK+6kxZ~-BW!6Dnu-f)x~R$6o3YS*jMrnP zQwVChO5`-Fz&TBU?V=+(+Lei5rWXwoIhLuuI-37FPk&N1n#OcF24ahV%jkS8)zt}v zX~D!A0&*nX(L3)irL*Bf*e6XamnH*)#>BFisNCgg6^F))gbd>{fiQKz$0c*5(ov6K zv^sEf-li(C#-AK}J5b)C%ws^-vNU^T>Dz(oD3TwtN->=Ct!so=IuDxLnX#ZseEDj% z?;hXk==hPDleabk=hu7Qze%_Pj`1*f=+VG}(e>bMN<0~_Qym>D(U6j`;^`OBQ#Mi> z9-Jud0^)8#+$I&CXjO0--#y%8s9t%hgNq2G!}wih)f$0$B!&*RXE@2Fq_C)^!_K;t zFCVH|-@WdG@EGQ6gnVjnam7c`T$4iY51Kqys>`fh&#P0~pQd=($3>&8mpSMbMG5kb z?RJ*KwO8LkBSJOiU@oL?hZF<}pE`i}#f;FgI&eJbu*v)vVL%+NjoRQ8wijUSr4aj`B#EbvuwznlKvTt?e5-=Qf?s62A<|*%d{Kbq1lQd`Z()7b zwNlkVon*I%U{a8-?@DgcYpTmx$`iU`mI=ce-EI!cl{e>1&v)?jnnvY#1wjs-9vQ=) zEyY)-p|(mDi8trwf0V*TuzMJS`-#H)^M3SIii2nPbS|a|e!lx*i|$p&X6vL^!@-4i ztFuf!fwjF-UA3!q)W~^Z`PpI9Y2DXEOF$BZ)Vl^Bi0F~4-tLn?0?wyxzz!&t*d`-w zDqVic?9R)73>^cc%-|P~Kw6T79LM5ux#0uU<}R84*$OIFR_T_HE%Dq7YCmmBXN#c? Zx8K bin, where ~Map~ partitions + the unit interval into bins. + +Our adaptation is in step 1: we do not hash any strings. Instead, we +simply choose a number on the unit interval. This number is called +the "cluster locator". + +As described later in this doc, Machi file names are structured into +several components. One component of the file name contains the "cluster +locator"; we use the number as-is for step 2 above. + +* 3. A simple illustration + +We use a variation of the Random Slicing hash that we will call +~rs_hash_with_float()~. 
The Erlang-style function type is shown
+below.
+
+#+BEGIN_SRC erlang
+%% type specs, Erlang-style
+-spec rs_hash_with_float(float(), rs_hash:map()) -> rs_hash:chain_id().
+#+END_SRC
+
+I'm borrowing an illustration from the HibariDB documentation here,
+but it fits my purposes quite well. (I am the original creator of that
+image, and also the use license is compatible.)
+
+#+CAPTION: Illustration of 'Map', using four Machi chains
+
+[[./migration-4.png]]
+
+Assume that we have a random slicing map called ~Map~. This particular
+~Map~ maps the unit interval onto 4 Machi chains:
+
+| Hash range  | Chain ID |
+|-------------+----------|
+| 0.00 - 0.25 | Chain1   |
+| 0.25 - 0.33 | Chain4   |
+| 0.33 - 0.58 | Chain2   |
+| 0.58 - 0.66 | Chain4   |
+| 0.66 - 0.91 | Chain3   |
+| 0.91 - 1.00 | Chain4   |
+
+Assume that the system chooses a cluster locator of 0.05.
+According to ~Map~, the value of
+~rs_hash_with_float(0.05,Map) = Chain1~.
+Similarly, ~rs_hash_with_float(0.26,Map) = Chain4~.
+
+* 4. Use of the cluster namespace: name separation plus chain type
+
+Let us assume that the cluster framework provides several different types
+of chains:
+
+|              |            | Consistency |                                  |
+| Chain length | Namespace  | Mode        | Comment                          |
+|--------------+------------+-------------+----------------------------------|
+| 3            | normal     | eventual    | Normal storage redundancy & cost |
+| 2            | reduced    | eventual    | Reduced cost storage             |
+| 1            | risky      | eventual    | Really, really cheap storage     |
+| 7            | paranoid   | eventual    | Safety-critical storage          |
+| 3            | sequential | strong      | Strong consistency               |
+|--------------+------------+-------------+----------------------------------|
+
+The client may want to choose the amount of redundancy that its
+application requires: normal, reduced cost, or perhaps even a single
+copy. The cluster namespace is used by the client to signal this
+intention.
+
+Further, the cluster administrators may wish to use the namespace to
+provide separate storage for different applications.
Jane's
+application may use the namespace "jane-normal" and Bob's app uses
+"bob-reduced". Administrators may define separate groups of
+chains on separate servers to serve these two applications.
+
+* 5. In its lifetime, a file may be moved to different chains
+
+The cluster management scheme may decide that files need to migrate to
+other chains, for example, for storage load or I/O load balancing, or
+because a chain is being decommissioned by its owners. There are many
+legitimate reasons why a file that was initially created on chain ID X
+would later be moved to chain ID Y.
+
+* 6. Floating point is not required ... it is merely convenient for explanation
+
+NOTE: Use of floating point terms is not required. For example,
+integer arithmetic could be used, if using a sufficiently large
+interval to create an even & smooth distribution of hashes across the
+expected maximum number of chains.
+
+For example, if the maximum cluster size were 4,000 individual
+Machi chains, then a minimum of 12 bits of integer space is required
+to assign one integer per Machi chain. However, for load balancing
+purposes, a finer grain of (for example) 100 integers per Machi
+chain would permit file migration to move increments of
+approximately 1% of a single Machi chain's storage capacity. A
+minimum of 12+7=19 bits of hash space would be necessary to accommodate
+these constraints.
+
+It is likely that Machi's final implementation will choose a 24 bit
+integer (or perhaps 32 bits) to represent the cluster locator.
+
+* 7. Proposal: Break the opacity of Machi file names, slightly.
+
+Machi assigns file names based on:
+
+~ClientSuppliedPrefix ++ "^" ++ SomeOpaqueFileNameSuffix~
+
+What if some parts of the system could peek inside the opaque file name
+suffix in order to look at the cluster location information that we might
+encode in the filename suffix?
+
+We break the system into parts that speak two levels of protocols,
+"high" and "low".
+ ++ The high level protocol is used outside of the Machi cluster ++ The low level protocol is used inside of the Machi cluster + +Both protocols are based on a Protocol Buffers specification and +implementation. Other protocols, such as HTTP, will be added later. + +#+BEGIN_SRC + +-----------------------+ + | Machi external client | + | e.g. Riak CS | + +-----------------------+ + ^ + | Machi "high" API + | ProtoBuffs protocol Machi cluster boundary: outside +......................................................................... + | Machi cluster boundary: inside + v + +--------------------------+ +------------------------+ + | Machi "high" API service | | Machi HTTP API service | + +--------------------------+ +------------------------+ + ^ | + | +------------------------+ + v v + +------------------------+ + | Cluster bridge service | + +------------------------+ + ^ + | Machi "low" API + | ProtoBuffs protocol + +----------------------------------------+----+----+ + | | | | + v v v v + +-------------------------+ ... other chains... + | Chain C1 (logical view) | + | +--------------+ | + | | FLU server 1 | | + | | +--------------+ | + | +--| FLU server 2 | | + | +--------------+ | In reality, API bridge talks directly + +-------------------------+ to each FLU server in a chain. +#+END_SRC + +** The notation we use + +- ~N~ = the cluster namespace, chosen by the client. +- ~p~ = file prefix, chosen by the client. +- ~L~ = the cluster locator (a number, type is implementation-dependent) +- ~Map~ = a mapping of cluster locators to chains +- ~T~ = the target chain ID/name +- ~u~ = a unique opaque file name suffix, e.g. a GUID string +- ~F~ = a Machi file name, i.e., a concatenation of ~p^L^N^u~ + +** The details: cluster file append + +0. Cluster client chooses ~N~ and ~p~ (i.e., cluster namespace and + file prefix) and sends the append request to a Machi cluster member + via the Protocol Buffers "high" API. +1. 
Cluster bridge chooses ~T~ (i.e., target chain), based on criteria
+   such as disk utilization percentage.
+2. Cluster bridge knows the cluster ~Map~ for namespace ~N~.
+3. Cluster bridge chooses some cluster locator value ~L~ such that
+   ~rs_hash_with_float(L,Map) = T~ (see below).
+4. Cluster bridge sends its request to chain
+   ~T~: ~append_chunk(p,L,N,...) -> {ok,p^L^N^u,ByteOffset}~
+5. Cluster bridge forwards the reply tuple to the client.
+6. Client stores/uses the file name ~F = p^L^N^u~.
+
+** The details: cluster file read
+
+0. Cluster client sends the read request to a Machi cluster member via
+   the Protocol Buffers "high" API.
+1. Cluster bridge parses the file name ~F~ to find
+   the values of ~L~ and ~N~ (recall, ~F = p^L^N^u~).
+2. Cluster bridge knows the cluster ~Map~ for namespace ~N~.
+3. Cluster bridge calculates ~rs_hash_with_float(L,Map) = T~.
+4. Cluster bridge sends its request to chain ~T~:
+   ~read_chunk(F,...) ->~ ... reply
+5. Cluster bridge forwards the reply to the client.
+
+** The details: calculating 'L' (the cluster locator) to match a desired target chain
+
+1. We know ~Map~, the current cluster mapping for a cluster namespace ~N~.
+2. We look inside of ~Map~, and we find all of the unit interval ranges
+   that map to our desired target chain ~T~. Let's call this list
+   ~MapList = [Range1=(start,end],Range2=(start,end],...]~.
+3. In our example, ~T=Chain2~. The example ~Map~ contains a single
+   unit interval range for ~Chain2~, ~[(0.33,0.58]]~.
+4. Choose a uniformly random number ~r~ on the unit interval.
+5. Calculate locator ~L~ by mapping ~r~ onto the concatenation
+   of the cluster hash space range intervals in ~MapList~. For example,
+   if ~r=0.5~, then ~L = 0.33 + 0.5*(0.58-0.33) = 0.455~, which is
+   exactly in the middle of the ~(0.33,0.58]~ interval.
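+
+The locator calculation in steps 1-5 can be sketched in Python. This
+is only an illustrative model, not Machi's Erlang implementation; the
+~MAP~ value encodes the example map from section 3, and the function
+names are hypothetical.

```python
import random

# The example ~Map~ from section 3, as (start, end, chain_id) rows that
# partition the unit interval.  For simplicity this sketch treats each
# range as half-open [start, end); the doc's (start, end] convention
# differs only at the boundary points.
MAP = [(0.00, 0.25, "Chain1"), (0.25, 0.33, "Chain4"),
       (0.33, 0.58, "Chain2"), (0.58, 0.66, "Chain4"),
       (0.66, 0.91, "Chain3"), (0.91, 1.00, "Chain4")]

def rs_hash_with_float(locator, rs_map):
    """Step 2 of the two-step mapping: unit-interval point -> chain ID."""
    for start, end, chain_id in rs_map:
        if start <= locator < end:
            return chain_id
    raise ValueError("locator outside the unit interval")

def locator_for_target(target, rs_map, r=None):
    """Steps 1-5 above: map a uniform random r onto the concatenation
    of the ranges in MapList (the intervals owned by the target chain)."""
    map_list = [(s, e) for s, e, c in rs_map if c == target]
    r = random.random() if r is None else r
    offset = r * sum(e - s for s, e in map_list)
    for s, e in map_list:
        if offset < e - s:
            return s + offset
        offset -= e - s
    return map_list[-1][0]          # guard against float round-off

L = locator_for_target("Chain2", MAP, r=0.5)   # ~0.455, as in step 5
assert rs_hash_with_float(L, MAP) == "Chain2"
```

+With ~r=0.5~ the sketch reproduces the worked example: the locator
+lands in the middle of the single range owned by Chain2, and feeding
+it back through ~rs_hash_with_float()~ recovers the target chain.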
+
+** A bit more about the cluster namespace's meaning and use
+
+- The cluster framework will provide means of creating and managing
+  chains of different types, e.g., chain length, consistency mode.
+- The cluster framework will manage the mapping of cluster namespace
+  names to the chains in the system.
+- The cluster framework will provide query functions to map a cluster
+  namespace name to a cluster map,
+  e.g. ~get_cluster_latest_map("reduced") -> Map{generation=7,...}~.
+
+For use by Riak CS, for example, we'd likely start with the following
+namespaces, working our way down the list as we add new features
+and/or re-implement existing CS features.
+
+- "standard" = Chain length = 3, eventual consistency mode.
+- "reduced" = Chain length = 2, eventual consistency mode.
+- "stanchion7" = Chain length = 7, strong consistency mode.  Perhaps
+  use this namespace for the metadata required to re-implement the
+  operations that are performed by today's Stanchion application.
+
+* 8. File migration (a.k.a. rebalancing/repartitioning/resharding/redistribution)
+
+** What is "migration"?
+
+This section describes Machi's file migration.  Other storage systems
+call this process "rebalancing", "repartitioning", "resharding" or
+"redistribution".
+For Riak Core applications, it is called "handoff" or "ring resizing",
+depending on the context.
+See also the [[http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer][Hadoop file balancer]] for another example of a data
+migration process.
+
+As discussed in section 5, the client can have good reason to want
+some control over the initial location of a file within the
+chain.  However, the chain manager has an ongoing interest in
+balancing resources throughout the lifetime of the file.  Disks will
+get full, hardware will change, read workloads will fluctuate, and
+so on.
+
+This document uses the word "migration" to describe moving data from
+one Machi chain to another within a cluster.
+
+A simple variation of the Random Slicing hash algorithm can easily
+accommodate Machi's need to migrate files without interfering with
+availability.  Machi's migration task is much simpler due to the
+immutable nature of Machi file data.
+
+** Change to Random Slicing
+
+The map used by the Random Slicing hash algorithm needs a few simple
+changes to make file migration straightforward.
+
+- Add a "generation number", a strictly increasing number (similar to
+  a Machi chain's "epoch number") that reflects the history of
+  changes made to the Random Slicing map.
+- Use a list of Random Slicing maps instead of a single map: one
+  submap for each earlier map that may still contain files that have
+  not yet been migrated out of it.
+
+As an example:
+
+#+CAPTION: Illustration of 'Map', using four Machi chains
+
+[[./migration-3to4.png]]
+
+And the new Random Slicing map for some cluster namespace ~N~ might look
+like this:
+
+| Generation number / Namespace | 7 / reduced |
+|-------------------------------+-------------|
+| SubMap                        | 1           |
+|-------------------------------+-------------|
+| Hash range                    | Chain ID    |
+|-------------------------------+-------------|
+| 0.00 - 0.33                   | Chain1      |
+| 0.33 - 0.66                   | Chain2      |
+| 0.66 - 1.00                   | Chain3      |
+|-------------------------------+-------------|
+| SubMap                        | 2           |
+|-------------------------------+-------------|
+| Hash range                    | Chain ID    |
+|-------------------------------+-------------|
+| 0.00 - 0.25                   | Chain1      |
+| 0.25 - 0.33                   | Chain4      |
+| 0.33 - 0.58                   | Chain2      |
+| 0.58 - 0.66                   | Chain4      |
+| 0.66 - 0.91                   | Chain3      |
+| 0.91 - 1.00                   | Chain4      |
+
+When a new Random Slicing map contains a single submap, its use
+is identical to the original Random Slicing algorithm.  If the map
+contains multiple submaps, then the access rules change a bit:
+
+- Write operations always go to the newest/largest submap.
+- Read operations attempt to read from all unique submaps.
+  - Skip searching submaps that refer to the same chain ID.
+    - In this example, unit interval value 0.10 is mapped to Chain1
+      by both submaps.
+  - Read from newest/largest submap to oldest/smallest submap.
+  - If not found in any submap, search a second time (to handle races
+    with file copying between submaps).
+  - If the requested data is found, optionally copy it directly to the
+    newest submap.  (This is a variation of read repair (RR).  RR here
+    accelerates the migration process and can reduce the number of
+    operations required to query servers in multiple submaps.)
+
+The cluster manager is responsible for:
+
+- Managing the various generations of the cluster Random Slicing maps
+  for all namespaces.
+- Distributing namespace maps to cluster bridges.
+- Managing the processes that are responsible for copying "cold" data,
+  i.e., file data that is not regularly accessed, to its new submap
+  location.
+- Deleting each file from its old chain once the file's migration to
+  its new chain is confirmed successful.
+
+In example map #7, the cluster manager will copy files with unit interval
+assignments in ~(0.25,0.33]~, ~(0.58,0.66]~, and ~(0.91,1.00]~ from their
+old locations in chain IDs Chain1/2/3 to their new chain,
+Chain4.  When the cluster manager is satisfied that all such files have
+been copied to Chain4, then it can create and
+distribute a new map, such as:
+
+| Generation number / Namespace | 8 / reduced |
+|-------------------------------+-------------|
+| SubMap                        | 1           |
+|-------------------------------+-------------|
+| Hash range                    | Chain ID    |
+|-------------------------------+-------------|
+| 0.00 - 0.25                   | Chain1      |
+| 0.25 - 0.33                   | Chain4      |
+| 0.33 - 0.58                   | Chain2      |
+| 0.58 - 0.66                   | Chain4      |
+| 0.66 - 0.91                   | Chain3      |
+| 0.91 - 1.00                   | Chain4      |
+
+The HibariDB system performs data migrations in almost exactly this
manner. 
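+The multi-submap access rules can be sketched as follows.  This is an
+illustrative Python model of the generation-7 example map above (the
+tuple-list encoding is an assumption, not Machi's data structure):
+writes are pinned to the newest submap, and reads are deduplicated by
+chain ID, exactly as in the 0.10/Chain1 example.

```python
# Submaps ordered oldest -> newest, each a list of half-open
# (start, end] unit-interval ranges.
SUBMAP_1 = [(0.00, 0.33, "Chain1"), (0.33, 0.66, "Chain2"),
            (0.66, 1.00, "Chain3")]
SUBMAP_2 = [(0.00, 0.25, "Chain1"), (0.25, 0.33, "Chain4"),
            (0.33, 0.58, "Chain2"), (0.58, 0.66, "Chain4"),
            (0.66, 0.91, "Chain3"), (0.91, 1.00, "Chain4")]
MAP_GEN_7 = [SUBMAP_1, SUBMAP_2]

def chain_for(locator, submap):
    for start, end, chain in submap:
        if start < locator <= end:
            return chain
    raise ValueError("locator outside the unit interval")

def write_chain(locator, submaps):
    # Writes always go to the newest/largest submap.
    return chain_for(locator, submaps[-1])

def read_chains(locator, submaps):
    # Reads go newest -> oldest, skipping submaps that resolve to a
    # chain ID that has already been queried.
    seen, order = set(), []
    for submap in reversed(submaps):
        chain = chain_for(locator, submap)
        if chain not in seen:
            seen.add(chain)
            order.append(chain)
    return order
```

+For locator 0.10, both submaps resolve to Chain1, so only one chain is
+queried; for locator 0.30, re-sliced to Chain4 in submap 2, the read
+tries Chain4 first and falls back to Chain1.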
However, one important
+limitation of HibariDB is that it cannot
+perform more than one migration at a time.  HibariDB's data is
+mutable, and mutation already causes many problems when migrating data
+across two submaps; supporting three or more submaps was too complex
+to implement quickly.
+
+Fortunately for Machi, its file data is immutable, so Machi can
+easily manage many migrations in parallel, i.e., its submap list may
+be several maps long, each one for an in-progress file migration.
+
+* 9. Other considerations for FLU/sequencer implementations
+
+** Append to existing file when possible
+
+The sequencer should always assign new offsets to the latest/newest
+file for any prefix, as long as all of the following prerequisites are
+true:
+
+- The epoch has not changed.  (In AP mode, epoch change -> mandatory
+  file name suffix change.)
+- The locator number is stable.
+- The latest file for prefix ~p~ is smaller than the maximum file size
+  in the FLU's configuration.
+
+The stability of the locator number is an implementation detail that
+must be managed by the cluster bridge.
+
+Reuse of the same file is not possible if the bridge always chooses a
+different locator number ~L~ or if the client always uses a unique
+file prefix ~p~.  The latter is a sign of a misbehaving client; the
+former is a sign of a poorly implemented bridge.
+
+* 10. Acknowledgments
+
+The "migration-4.png" and "migration-3to4.png" images
+come from the [[http://hibari.github.io/hibari-doc/images/migration-3to4.png][HibariDB documentation]].
+
diff --git a/doc/flu-and-chain-lifecycle.org b/doc/flu-and-chain-lifecycle.org
index 4672080..d81b326 100644
--- a/doc/flu-and-chain-lifecycle.org
+++ b/doc/flu-and-chain-lifecycle.org
@@ -14,10 +14,10 @@ complete yet, so we are working one small step at a time.
+ FLU and Chain Life Cycle Management + Terminology review + Terminology: Machi run-time components/services/thingies - + Terminology: Machi data structures - + Terminology: Cluster-of-cluster (CoC) data structures + + Terminology: Machi chain data structures + + Terminology: Machi cluster data structures + Overview of administrative life cycles - + Cluster-of-clusters (CoC) administrative life cycle + + Cluster administrative life cycle + Chain administrative life cycle + FLU server administrative life cycle + Quick admin: declarative management of Machi FLU and chain life cycles @@ -57,10 +57,8 @@ complete yet, so we are working one small step at a time. quorum replication technique requires ~2F+1~ members in the general case.) -+ Cluster: this word can be used interchangeably with "chain". - -+ Cluster-of-clusters: A collection of Machi clusters where files are - horizontally partitioned/sharded/distributed across ++ Cluster: A collection of Machi chains that are used to store files + in a horizontally partitioned/sharded/distributed manner. ** Terminology: Machi data structures @@ -75,13 +73,13 @@ complete yet, so we are working one small step at a time. to another, e.g., when the chain is temporarily shortened by the failure of a member FLU server. -** Terminology: Cluster-of-cluster (CoC) data structures +** Terminology: Machi cluster data structures + Namespace: A collection of human-friendly names that are mapped to groups of Machi chains that provide the same type of storage service: consistency mode, replication policy, etc. + A single namespace name, e.g. ~normal-ec~, is paired with a single - CoC chart (see below). + cluster map (see below). + Example: ~normal-ec~ might be a collection of Machi chains in eventually-consistent mode that are of length=3. + Example: ~risky-ec~ might be a collection of Machi chains in @@ -89,32 +87,31 @@ complete yet, so we are working one small step at a time. 
+ Example: ~mgmt-critical~ might be a collection of Machi chains in strongly-consistent mode that are of length=7. -+ CoC chart: Encodes the rules which partition/shard/distribute a - particular namespace across a group of chains that collectively - store the namespace's files. - + "chart: noun, a geographical map or plan, especially on used for - navigation by sea or air." ++ Cluster map: Encodes the rules which partition/shard/distribute + the files stored in a particular namespace across a group of chains + that collectively store the namespace's files. -+ Chain weight: A value assigned to each chain within a CoC chart ++ Chain weight: A value assigned to each chain within a cluster map structure that defines the relative storage capacity of a chain within the namespace. For example, a chain weight=150 has 50% more capacity than a chain weight=100. -+ CoC chart epoch: The version number assigned to a CoC chart. ++ Cluster map epoch: The version number assigned to a cluster map. * Overview of administrative life cycles -** Cluster-of-clusters (CoC) administrative life cycle +** Cluster administrative life cycle -+ CoC is first created -+ CoC adds namespaces (e.g. consistency policy + chain length policy) -+ CoC adds/removes chains to a namespace to increase/decrease the ++ Cluster is first created ++ Adds namespaces (e.g. consistency policy + chain length policy) to + the cluster ++ Chains are added to/removed from a namespace to increase/decrease the namespace's storage capacity. -+ CoC adjusts chain weights within a namespace, e.g., to shift files ++ Adjust chain weights within a namespace, e.g., to shift files within the namespace to chains with greater storage capacity resources and/or runtime I/O resources. -A CoC "file migration" is the process of moving files from one +A cluster "file migration" is the process of moving files from one namespace member chain to another for purposes of shifting & re-balancing storage capacity and/or runtime I/O capacity. 
@@ -155,7 +152,7 @@ described in this section. As described at the top of http://basho.github.io/machi/edoc/machi_lifecycle_mgr.html, the "rc.d" config files do not manage "policy". "Policy" is doing the right -thing with a Machi cluster-of-clusters from a systems administrator's +thing with a Machi cluster from a systems administrator's point of view. The "rc.d" config files can only implement decisions made according to policy. diff --git a/src/machi_lifecycle_mgr.erl b/src/machi_lifecycle_mgr.erl index 385c607..80ea8b4 100644 --- a/src/machi_lifecycle_mgr.erl +++ b/src/machi_lifecycle_mgr.erl @@ -950,7 +950,7 @@ make_pending_config(Term) -> %% The largest numbered file is assumed to be all of the AST changes that we %% want to apply in a single batch. The AST tuples of all files with smaller %% numbers will be concatenated together to create the prior history of -%% cluster-of-clusters. We assume that all transitions inside these earlier +%% the cluster. We assume that all transitions inside these earlier %% files were actually safe & sane, therefore any sanity problem can only %% be caused by the contents of the largest numbered file. 
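The batching rule described in the comment above — concatenate all
smaller-numbered AST config files into an assumed-safe prior history,
and sanity-check only the largest-numbered file as the new batch — can
be sketched like this (the file-naming scheme shown is hypothetical;
the real lifecycle manager's format may differ):

```python
import re

def split_pending_batch(filenames):
    """Return (prior_history, pending_batch) for numbered AST config files.

    All smaller-numbered files, ordered by number, form the prior
    history assumed to be safe & sane; the largest-numbered file is
    the single new batch whose transitions must be sanity-checked."""
    def number(name):
        m = re.match(r"(\d+)", name)
        if m is None:
            raise ValueError("unnumbered config file: " + name)
        return int(m.group(1))
    ordered = sorted(filenames, key=number)
    return ordered[:-1], ordered[-1]

history, pending = split_pending_batch(["3.ast", "1.ast", "2.ast"])
```

This keeps blame precise: any sanity failure can only have been
introduced by the contents of the pending (largest-numbered) file.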
-- 2.45.2 From 2932a17ea66fa1868d6b5dd18a1b875f4c29929d Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 23 Dec 2015 15:47:38 +0900 Subject: [PATCH 02/53] append_chunk API refactoring; all tests pass; todo tasks remain --- include/machi.hrl | 12 ++ src/machi.proto | 26 ++-- src/machi_cr_client.erl | 170 ++++++++------------ src/machi_dt.erl | 16 +- src/machi_flu1_append_server.erl | 41 +++-- src/machi_flu1_client.erl | 213 ++++++++------------------ src/machi_flu1_net_server.erl | 49 +++--- src/machi_flu_filename_mgr.erl | 65 ++++---- src/machi_pb_high_client.erl | 44 ++++-- src/machi_pb_translate.erl | 69 ++++++--- src/machi_proxy_flu1_client.erl | 68 ++------ src/machi_util.erl | 61 +++++--- src/machi_yessir_client.erl | 4 + test/machi_admin_util_test.erl | 6 +- test/machi_ap_repair_eqc.erl | 7 +- test/machi_cr_client_test.erl | 35 +++-- test/machi_flu1_test.erl | 42 ++--- test/machi_flu_psup_test.erl | 7 +- test/machi_pb_high_client_test.erl | 12 +- test/machi_proxy_flu1_client_test.erl | 63 ++++---- test/machi_test_util.erl | 2 +- 21 files changed, 489 insertions(+), 523 deletions(-) diff --git a/include/machi.hrl b/include/machi.hrl index f825556..6b35205 100644 --- a/include/machi.hrl +++ b/include/machi.hrl @@ -43,3 +43,15 @@ -define(DEFAULT_COC_NAMESPACE, ""). -define(DEFAULT_COC_LOCATOR, 0). + +-record(ns_info, { + version = 0 :: machi_dt:namespace_version(), + name = "" :: machi_dt:namespace(), + locator = 0 :: machi_dt:locator() + }). + +-record(append_opts, { + chunk_extra = 0 :: machi_dt:chunk_size(), + preferred_file_name :: 'undefined' | machi_dt:file_name_s(), + flag_fail_preferred = false :: boolean() + }). 
diff --git a/src/machi.proto b/src/machi.proto index 2645bde..462be0a 100644 --- a/src/machi.proto +++ b/src/machi.proto @@ -170,12 +170,15 @@ message Mpb_AuthResp { // High level API: append_chunk() request & response message Mpb_AppendChunkReq { - required string coc_namespace = 1; - required uint32 coc_locator = 2; + /* In single chain/non-clustered environment, use namespace="" */ + required string namespace = 1; required string prefix = 3; required bytes chunk = 4; required Mpb_ChunkCSum csum = 5; optional uint32 chunk_extra = 6; + optional string preferred_file_name = 7; + /* Fail the operation if our preferred file name is not available */ + optional uint32 flag_fail_preferred = 8; } message Mpb_AppendChunkResp { @@ -377,14 +380,17 @@ message Mpb_ProjectionV1 { // Low level API: append_chunk() message Mpb_LL_AppendChunkReq { - required Mpb_EpochID epoch_id = 1; - /* To avoid CoC use, use coc_namespace="" and coc_locator=0 */ - required string coc_namespace = 2; - required uint32 coc_locator = 3; - required string prefix = 4; - required bytes chunk = 5; - required Mpb_ChunkCSum csum = 6; - optional uint32 chunk_extra = 7; + required uint32 namespace_version = 1; + required string namespace = 2; + required uint32 locator = 3; + required Mpb_EpochID epoch_id = 4; + required string prefix = 5; + required bytes chunk = 6; + required Mpb_ChunkCSum csum = 7; + optional uint32 chunk_extra = 8; + optional string preferred_file_name = 9; + /* Fail the operation if our preferred file name is not available */ + optional uint32 flag_fail_preferred = 10; } message Mpb_LL_AppendChunkResp { diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index cec7c6a..c45823a 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -118,10 +118,8 @@ %% FLU1 API -export([ %% File API - append_chunk/3, append_chunk/4, - append_chunk/5, append_chunk/6, - append_chunk_extra/4, append_chunk_extra/5, - append_chunk_extra/6, append_chunk_extra/7, + append_chunk/5, + 
append_chunk/6, append_chunk/7, write_chunk/4, write_chunk/5, read_chunk/5, read_chunk/6, trim_chunk/4, trim_chunk/5, @@ -165,67 +163,27 @@ start_link(P_srvr_list, Opts) -> %% @doc Append a chunk (binary- or iolist-style) of data to a file %% with `Prefix'. -append_chunk(PidSpec, Prefix, Chunk) -> - append_chunk_extra(PidSpec, ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, 0, ?DEFAULT_TIMEOUT). +append_chunk(PidSpec, NSInfo, Prefix, Chunk, CSum) -> + append_chunk(PidSpec, NSInfo, Prefix, Chunk, CSum, #append_opts{}, ?DEFAULT_TIMEOUT). %% @doc Append a chunk (binary- or iolist-style) of data to a file %% with `Prefix'. -append_chunk(PidSpec, Prefix, Chunk, Timeout) -> - append_chunk_extra(PidSpec, ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, 0, Timeout). +append_chunk(PidSpec, NSInfo, Prefix, Chunk, CSum, #append_opts{}=Opts) -> + append_chunk(PidSpec, NSInfo, Prefix, Chunk, CSum, Opts, ?DEFAULT_TIMEOUT). %% @doc Append a chunk (binary- or iolist-style) of data to a file %% with `Prefix'. -append_chunk(PidSpec, CoC_Namespace, CoC_Locator, Prefix, Chunk) -> - append_chunk_extra(PidSpec, CoC_Namespace, CoC_Locator, - Prefix, Chunk, 0, ?DEFAULT_TIMEOUT). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - -append_chunk(PidSpec, CoC_Namespace, CoC_Locator, Prefix, Chunk, Timeout) -> - append_chunk_extra(PidSpec, CoC_Namespace, CoC_Locator, - Prefix, Chunk, 0, Timeout). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - -append_chunk_extra(PidSpec, Prefix, Chunk, ChunkExtra) - when is_integer(ChunkExtra), ChunkExtra >= 0 -> - append_chunk_extra(PidSpec, Prefix, Chunk, ChunkExtra, ?DEFAULT_TIMEOUT). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. 
- -append_chunk_extra(PidSpec, Prefix, Chunk, ChunkExtra, Timeout0) -> +append_chunk(PidSpec, NSInfo, Prefix, Chunk, CSum, #append_opts{}=Opts, Timeout0) -> + NSInfo2 = machi_util:ns_info_default(NSInfo), {TO, Timeout} = timeout(Timeout0), - gen_server:call(PidSpec, {req, {append_chunk_extra, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, - Chunk, ChunkExtra, TO}}, - Timeout). - -append_chunk_extra(PidSpec, CoC_Namespace, CoC_Locator, Prefix, Chunk, ChunkExtra) - when is_integer(ChunkExtra), ChunkExtra >= 0 -> - append_chunk_extra(PidSpec, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra, ?DEFAULT_TIMEOUT). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - -append_chunk_extra(PidSpec, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra, Timeout0) -> - {TO, Timeout} = timeout(Timeout0), - gen_server:call(PidSpec, {req, {append_chunk_extra, - CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, TO}}, + gen_server:call(PidSpec, {req, {append_chunk, + NSInfo2, Prefix, Chunk, CSum, Opts, TO}}, Timeout). %% @doc Write a chunk of data (that has already been -%% allocated/sequenced by an earlier append_chunk_extra() call) to +%% allocated/sequenced by an earlier append_chunk() call) to %% `File' at `Offset'. 
write_chunk(PidSpec, File, Offset, Chunk) -> @@ -324,10 +282,10 @@ code_change(_OldVsn, S, _Extra) -> %%%%%%%%%%%%%%%%%%%%%%%%%%% -handle_call2({append_chunk_extra, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra, TO}, _From, S) -> - do_append_head(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, 0, os:timestamp(), TO, S); +handle_call2({append_chunk, NSInfo, + Prefix, Chunk, CSum, Opts, TO}, _From, S) -> + do_append_head(NSInfo, Prefix, + Chunk, CSum, Opts, 0, os:timestamp(), TO, S); handle_call2({write_chunk, File, Offset, Chunk, TO}, _From, S) -> do_write_head(File, Offset, Chunk, 0, os:timestamp(), TO, S); handle_call2({read_chunk, File, Offset, Size, Opts, TO}, _From, S) -> @@ -339,12 +297,12 @@ handle_call2({checksum_list, File, TO}, _From, S) -> handle_call2({list_files, TO}, _From, S) -> do_list_files(0, os:timestamp(), TO, S). -do_append_head(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, 0=Depth, STime, TO, S) -> - do_append_head2(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth + 1, STime, TO, S); -do_append_head(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, STime, TO, #state{proj=P}=S) -> +do_append_head(NSInfo, Prefix, + Chunk, CSum, Opts, 0=Depth, STime, TO, S) -> + do_append_head2(NSInfo, Prefix, + Chunk, CSum, Opts, Depth + 1, STime, TO, S); +do_append_head(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, TO, #state{proj=P}=S) -> %% io:format(user, "head sleep1,", []), sleep_a_while(Depth), DiffMs = timer:now_diff(os:timestamp(), STime) div 1000, @@ -359,62 +317,61 @@ do_append_head(CoC_Namespace, CoC_Locator, Prefix, case S2#state.proj of P2 when P2 == undefined orelse P2#projection_v1.upi == [] -> - do_append_head(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth + 1, + do_append_head(NSInfo, Prefix, + Chunk, CSum, Opts, Depth + 1, STime, TO, S2); _ -> - do_append_head2(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth + 1, + do_append_head2(NSInfo, Prefix, + 
Chunk, CSum, Opts, Depth + 1, STime, TO, S2) end end. -do_append_head2(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, STime, TO, +do_append_head2(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, TO, #state{proj=P}=S) -> [HeadFLU|_RestFLUs] = mutation_flus(P), case is_witness_flu(HeadFLU, P) of true -> case witnesses_use_our_epoch(S) of true -> - do_append_head3(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, + do_append_head3(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, TO, S); false -> %% Bummer, go back to the beginning and retry. - do_append_head(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, + do_append_head(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, TO, S) end; false -> - do_append_head3(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, STime, TO, S) + do_append_head3(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, TO, S) end. -do_append_head3(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, STime, TO, +do_append_head3(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, TO, #state{epoch_id=EpochID, proj=P, proxies_dict=PD}=S) -> [HeadFLU|RestFLUs] = non_witness_flus(mutation_flus(P), P), Proxy = orddict:fetch(HeadFLU, PD), - case ?FLU_PC:append_chunk_extra(Proxy, EpochID, - CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, ?TIMEOUT) of + case ?FLU_PC:append_chunk(Proxy, NSInfo, EpochID, + Prefix, Chunk, CSum, Opts, ?TIMEOUT) of {ok, {Offset, _Size, File}=_X} -> - do_append_midtail(RestFLUs, CoC_Namespace, CoC_Locator, Prefix, - File, Offset, Chunk, ChunkExtra, + do_append_midtail(RestFLUs, NSInfo, Prefix, + File, Offset, Chunk, CSum, Opts, [HeadFLU], 0, STime, TO, S); {error, bad_checksum}=BadCS -> {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_append_head(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, STime, TO, S); + do_append_head(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, 
TO, S); {error, written} -> %% Implicit sequencing + this error = we don't know where this %% written block is. But we lost a race. Repeat, with a new %% sequencer assignment. - do_append_head(CoC_Namespace, CoC_Locator, Prefix, - Chunk, ChunkExtra, Depth, STime, TO, S); + do_append_head(NSInfo, Prefix, + Chunk, CSum, Opts, Depth, STime, TO, S); {error, trimmed} = Err -> %% TODO: behaviour {reply, Err, S}; @@ -423,15 +380,15 @@ do_append_head3(CoC_Namespace, CoC_Locator, Prefix, Prefix,iolist_size(Chunk)}) end. -do_append_midtail(RestFLUs, CoC_Namespace, CoC_Locator, Prefix, - File, Offset, Chunk, ChunkExtra, +do_append_midtail(RestFLUs, NSInfo, Prefix, + File, Offset, Chunk, CSum, Opts, Ws, Depth, STime, TO, S) when RestFLUs == [] orelse Depth == 0 -> - do_append_midtail2(RestFLUs, CoC_Namespace, CoC_Locator, Prefix, - File, Offset, Chunk, ChunkExtra, + do_append_midtail2(RestFLUs, NSInfo, Prefix, + File, Offset, Chunk, CSum, Opts, Ws, Depth + 1, STime, TO, S); -do_append_midtail(_RestFLUs, CoC_Namespace, CoC_Locator, Prefix, File, - Offset, Chunk, ChunkExtra, +do_append_midtail(_RestFLUs, NSInfo, Prefix, File, + Offset, Chunk, CSum, Opts, Ws, Depth, STime, TO, #state{proj=P}=S) -> %% io:format(user, "midtail sleep2,", []), sleep_a_while(Depth), @@ -458,44 +415,44 @@ do_append_midtail(_RestFLUs, CoC_Namespace, CoC_Locator, Prefix, File, if Prefix == undefined -> % atom! not binary()!! {error, partition}; true -> - do_append_head2(CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra, + do_append_head2(NSInfo, + Prefix, Chunk, CSum, Opts, Depth, STime, TO, S2) end; RestFLUs3 -> do_append_midtail2(RestFLUs3, - CoC_Namespace, CoC_Locator, + NSInfo, Prefix, File, Offset, - Chunk, ChunkExtra, + Chunk, CSum, Opts, Ws, Depth + 1, STime, TO, S2) end end end. 
-do_append_midtail2([], _CoC_Namespace, _CoC_Locator, +do_append_midtail2([], _NSInfo, _Prefix, File, Offset, Chunk, - _ChunkExtra, _Ws, _Depth, _STime, _TO, S) -> + _CSum, _Opts, _Ws, _Depth, _STime, _TO, S) -> %% io:format(user, "ok!\n", []), {reply, {ok, {Offset, chunk_wrapper_size(Chunk), File}}, S}; -do_append_midtail2([FLU|RestFLUs]=FLUs, CoC_Namespace, CoC_Locator, +do_append_midtail2([FLU|RestFLUs]=FLUs, NSInfo, Prefix, File, Offset, Chunk, - ChunkExtra, Ws, Depth, STime, TO, + CSum, Opts, Ws, Depth, STime, TO, #state{epoch_id=EpochID, proxies_dict=PD}=S) -> Proxy = orddict:fetch(FLU, PD), case ?FLU_PC:write_chunk(Proxy, EpochID, File, Offset, Chunk, ?TIMEOUT) of ok -> %% io:format(user, "write ~w,", [FLU]), - do_append_midtail2(RestFLUs, CoC_Namespace, CoC_Locator, Prefix, + do_append_midtail2(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, - ChunkExtra, [FLU|Ws], Depth, STime, TO, S); + CSum, Opts, [FLU|Ws], Depth, STime, TO, S); {error, bad_checksum}=BadCS -> %% TODO: alternate strategy? {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_append_midtail(FLUs, CoC_Namespace, CoC_Locator, Prefix, + do_append_midtail(FLUs, NSInfo, Prefix, File, Offset, Chunk, - ChunkExtra, Ws, Depth, STime, TO, S); + CSum, Opts, Ws, Depth, STime, TO, S); {error, written} -> %% We know what the chunk ought to be, so jump to the %% middle of read-repair. @@ -559,9 +516,10 @@ do_write_head2(File, Offset, Chunk, Depth, STime, TO, ok -> %% From this point onward, we use the same code & logic path as %% append does. 
- do_append_midtail(RestFLUs, undefined, undefined, undefined, +NSInfo=todo,Prefix=todo,CSum=todo,Opts=todo, + do_append_midtail(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, - undefined, [HeadFLU], 0, STime, TO, S); + CSum, Opts, [HeadFLU], 0, STime, TO, S); {error, bad_checksum}=BadCS -> {reply, BadCS, S}; {error, Retry} diff --git a/src/machi_dt.erl b/src/machi_dt.erl index daf26dd..13e7836 100644 --- a/src/machi_dt.erl +++ b/src/machi_dt.erl @@ -20,8 +20,10 @@ -module(machi_dt). +-include("machi.hrl"). -include("machi_projection.hrl"). +-type append_opts() :: #append_opts{}. -type chunk() :: chunk_bin() | {chunk_csum(), chunk_bin()}. -type chunk_bin() :: binary() | iolist(). % client can use either -type chunk_csum() :: binary(). % 1 byte tag, N-1 bytes checksum @@ -29,9 +31,6 @@ -type chunk_s() :: 'trimmed' | binary(). -type chunk_pos() :: {file_offset(), chunk_size(), file_name_s()}. -type chunk_size() :: non_neg_integer(). --type coc_namespace() :: string(). --type coc_nl() :: {coc, coc_namespace(), coc_locator()}. --type coc_locator() :: non_neg_integer(). -type error_general() :: 'bad_arg' | 'wedged' | 'bad_checksum'. -type epoch_csum() :: binary(). -type epoch_num() :: -1 | non_neg_integer(). @@ -44,6 +43,10 @@ -type file_prefix() :: binary() | list(). -type inet_host() :: inet:ip_address() | inet:hostname(). -type inet_port() :: inet:port_number(). +-type locator() :: number(). +-type namespace() :: string(). +-type namespace_version() :: non_neg_integer(). +-type ns_info() :: #ns_info{}. -type projection() :: #projection_v1{}. -type projection_type() :: 'public' | 'private'. @@ -53,6 +56,7 @@ -type csum_tag() :: none | client_sha | server_sha | server_regen_sha. 
-export_type([ + append_opts/0, chunk/0, chunk_bin/0, chunk_csum/0, @@ -61,9 +65,6 @@ chunk_s/0, chunk_pos/0, chunk_size/0, - coc_namespace/0, - coc_nl/0, - coc_locator/0, error_general/0, epoch_csum/0, epoch_num/0, @@ -76,6 +77,9 @@ file_prefix/0, inet_host/0, inet_port/0, + namespace/0, + namespace_version/0, + ns_info/0, projection/0, projection_type/0 ]). diff --git a/src/machi_flu1_append_server.erl b/src/machi_flu1_append_server.erl index a7b029c..9a41776 100644 --- a/src/machi_flu1_append_server.erl +++ b/src/machi_flu1_append_server.erl @@ -82,25 +82,34 @@ init([Fluname, Witness_p, Wedged_p, EpochId]) -> {ok, #state{flu_name=Fluname, witness=Witness_p, wedged=Wedged_p, etstab=TID, epoch_id=EpochId}}. -handle_call({seq_append, _From2, _N, _L, _Prefix, _Chunk, _CSum, _Extra, _EpochID}, +handle_call({seq_append, _From2, _NSInfo, _EpochID, _Prefix, _Chunk, _TCSum, _Opts}, _From, #state{witness=true}=S) -> %% The FLU's machi_flu1_net_server process ought to filter all %% witness states, but we'll keep this clause for extra %% paranoia. {reply, witness, S}; -handle_call({seq_append, _From2, _N, _L, _Prefix, _Chunk, _CSum, _Extra, _EpochID}, +handle_call({seq_append, _From2, _NSInfo, _EpochID, _Prefix, _Chunk, _TCSum, _Opts}, _From, #state{wedged=true}=S) -> {reply, wedged, S}; -handle_call({seq_append, _From2, CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum, Extra, EpochID}, +handle_call({seq_append, _From2, NSInfo, EpochID, + Prefix, Chunk, TCSum, Opts}, From, #state{flu_name=FluName, epoch_id=OldEpochId}=S) -> + %% io:format(user, " + %% HANDLE_CALL append_chunk + %% NSInfo=~p + %% epoch_id=~p + %% prefix=~p + %% chunk=~p + %% tcsum=~p + %% opts=~p\n", + %% [NSInfo, EpochID, Prefix, Chunk, TCSum, Opts]), %% Old is the one from our state, plain old 'EpochID' comes %% from the client. 
_ = case OldEpochId of EpochID -> spawn(fun() -> - append_server_dispatch(From, CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum, Extra, + append_server_dispatch(From, NSInfo, + Prefix, Chunk, TCSum, Opts, FluName, EpochID) end), {noreply, S}; @@ -161,10 +170,10 @@ terminate(Reason, _S) -> code_change(_OldVsn, S, _Extra) -> {ok, S}. -append_server_dispatch(From, CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum, Extra, FluName, EpochId) -> - Result = case handle_append(CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum, Extra, FluName, EpochId) of +append_server_dispatch(From, NSInfo, + Prefix, Chunk, TCSum, Opts, FluName, EpochId) -> + Result = case handle_append(NSInfo, + Prefix, Chunk, TCSum, Opts, FluName, EpochId) of {ok, File, Offset} -> {assignment, Offset, File}; Other -> @@ -173,19 +182,17 @@ append_server_dispatch(From, CoC_Namespace, CoC_Locator, _ = gen_server:reply(From, Result), ok. -handle_append(_N, _L, _Prefix, <<>>, _Csum, _Extra, _FluName, _EpochId) -> - {error, bad_arg}; -handle_append(CoC_Namespace, CoC_Locator, - Prefix, Chunk, Csum, Extra, FluName, EpochId) -> - CoC = {coc, CoC_Namespace, CoC_Locator}, +handle_append(NSInfo, + Prefix, Chunk, TCSum, Opts, FluName, EpochId) -> Res = machi_flu_filename_mgr:find_or_make_filename_from_prefix( - FluName, EpochId, {prefix, Prefix}, CoC), + FluName, EpochId, {prefix, Prefix}, NSInfo), case Res of {file, F} -> case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, F}) of {ok, Pid} -> - {Tag, CS} = machi_util:unmake_tagged_csum(Csum), + {Tag, CS} = machi_util:unmake_tagged_csum(TCSum), Meta = [{client_csum_tag, Tag}, {client_csum, CS}], + Extra = Opts#append_opts.chunk_extra, machi_file_proxy:append(Pid, Meta, Extra, Chunk); {error, trimmed} = E -> E diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index e5b65fc..10c833e 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -50,14 +50,13 @@ -include_lib("pulse_otp/include/pulse_otp.hrl"). -endif. 
--define(HARD_TIMEOUT, 2500). +-define(SHORT_TIMEOUT, 2500). +-define(LONG_TIMEOUT, (60*1000)). -export([ %% File API - append_chunk/4, append_chunk/5, append_chunk/6, append_chunk/7, - append_chunk_extra/5, append_chunk_extra/6, - append_chunk_extra/7, append_chunk_extra/8, + append_chunk/8, append_chunk/9, read_chunk/6, read_chunk/7, checksum_list/3, checksum_list/4, list_files/2, list_files/3, @@ -89,142 +88,61 @@ -type port_wrap() :: {w,atom(),term()}. -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - --spec append_chunk(port_wrap(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk()) -> +-spec append_chunk(port_wrap(), + machi_dt:ns_info(), machi_dt:epoch_id(), + machi_dt:file_prefix(), machi_dt:chunk(), + machi_dt:chunk_csum()) -> {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. -append_chunk(Sock, EpochID, Prefix, Chunk) -> - append_chunk2(Sock, EpochID, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, 0). +append_chunk(Sock, NSInfo, EpochID, Prefix, Chunk, CSum) -> + append_chunk(Sock, NSInfo, EpochID, Prefix, Chunk, CSum, + #append_opts{}, ?LONG_TIMEOUT). %% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. +%% with `Prefix' and also request an additional `Extra' bytes. +%% +%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then +%% the file offsets that follow `Chunk''s position for the following 4K will +%% be reserved by the file sequencer for later write(s) by the +%% `write_chunk()' API. -spec append_chunk(machi_dt:inet_host(), machi_dt:inet_port(), - machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk()) -> + machi_dt:ns_info(), machi_dt:epoch_id(), + machi_dt:file_prefix(), machi_dt:chunk(), + machi_dt:chunk_csum()) -> {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. 
-append_chunk(Host, TcpPort, EpochID, Prefix, Chunk) -> - Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), - try - append_chunk2(Sock, EpochID, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, 0) - after - disconnect(Sock) - end. +append_chunk(Host, TcpPort, NSInfo, EpochID, Prefix, Chunk, CSum) -> + append_chunk(Host, TcpPort, NSInfo, EpochID, Prefix, Chunk, CSum, + #append_opts{}, ?LONG_TIMEOUT). + +-spec append_chunk(port_wrap(), + machi_dt:ns_info(), machi_dt:epoch_id(), + machi_dt:file_prefix(), machi_dt:chunk(), + machi_dt:chunk_csum(), machi_dt:append_opts(), timeout()) -> + {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. +append_chunk(Sock, NSInfo0, EpochID, Prefix, Chunk, CSum, Opts, Timeout) -> + NSInfo = machi_util:ns_info_default(NSInfo0), + append_chunk2(Sock, NSInfo, EpochID, Prefix, Chunk, CSum, Opts, Timeout). %% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - --spec append_chunk(port_wrap(), machi_dt:epoch_id(), - machi_dt:coc_namespace(), machi_dt:coc_locator(), - machi_dt:file_prefix(), machi_dt:chunk()) -> - {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. -append_chunk(Sock, EpochID, CoC_Namespace, CoC_Locator, Prefix, Chunk) -> - append_chunk2(Sock, EpochID, - CoC_Namespace, CoC_Locator, - Prefix, Chunk, 0). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. +%% with `Prefix' and also request an additional `Extra' bytes. +%% +%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then +%% the file offsets that follow `Chunk''s position for the following 4K will +%% be reserved by the file sequencer for later write(s) by the +%% `write_chunk()' API. 
-spec append_chunk(machi_dt:inet_host(), machi_dt:inet_port(), - machi_dt:epoch_id(), - machi_dt:coc_namespace(), machi_dt:coc_locator(), - machi_dt:file_prefix(), machi_dt:chunk()) -> + machi_dt:ns_info(), machi_dt:epoch_id(), + machi_dt:file_prefix(), machi_dt:chunk(), + machi_dt:chunk_csum(), machi_dt:append_opts(), timeout()) -> {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. -append_chunk(Host, TcpPort, EpochID, CoC_Namespace, CoC_Locator, Prefix, Chunk) -> +append_chunk(Host, TcpPort, NSInfo0, EpochID, + Prefix, Chunk, CSum, Opts, Timeout) -> Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), try - append_chunk2(Sock, EpochID, - CoC_Namespace, CoC_Locator, - Prefix, Chunk, 0) - after - disconnect(Sock) - end. - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix' and also request an additional `Extra' bytes. -%% -%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then -%% the file offsets that follow `Chunk''s position for the following 4K will -%% be reserved by the file sequencer for later write(s) by the -%% `write_chunk()' API. - --spec append_chunk_extra(port_wrap(), machi_dt:epoch_id(), - machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_size()) -> - {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. -append_chunk_extra(Sock, EpochID, Prefix, Chunk, ChunkExtra) - when is_integer(ChunkExtra), ChunkExtra >= 0 -> - append_chunk2(Sock, EpochID, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, ChunkExtra). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix' and also request an additional `Extra' bytes. -%% -%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then -%% the file offsets that follow `Chunk''s position for the following 4K will -%% be reserved by the file sequencer for later write(s) by the -%% `write_chunk()' API. 
- --spec append_chunk_extra(machi_dt:inet_host(), machi_dt:inet_port(), - machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_size()) -> - {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. -append_chunk_extra(Host, TcpPort, EpochID, Prefix, Chunk, ChunkExtra) - when is_integer(ChunkExtra), ChunkExtra >= 0 -> - Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), - try - append_chunk2(Sock, EpochID, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, ChunkExtra) - after - disconnect(Sock) - end. - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix' and also request an additional `Extra' bytes. -%% -%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then -%% the file offsets that follow `Chunk''s position for the following 4K will -%% be reserved by the file sequencer for later write(s) by the -%% `write_chunk()' API. - --spec append_chunk_extra(port_wrap(), machi_dt:epoch_id(), - machi_dt:coc_namespace(), machi_dt:coc_locator(), - machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_size()) -> - {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. -append_chunk_extra(Sock, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra) - when is_integer(ChunkExtra), ChunkExtra >= 0 -> - append_chunk2(Sock, EpochID, - CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix' and also request an additional `Extra' bytes. -%% -%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then -%% the file offsets that follow `Chunk''s position for the following 4K will -%% be reserved by the file sequencer for later write(s) by the -%% `write_chunk()' API. 
- --spec append_chunk_extra(machi_dt:inet_host(), machi_dt:inet_port(), - machi_dt:epoch_id(), - machi_dt:coc_namespace(), machi_dt:coc_locator(), - machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_size()) -> - {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. -append_chunk_extra(Host, TcpPort, EpochID, - CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra) - when is_integer(ChunkExtra), ChunkExtra >= 0 -> - Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), - try - append_chunk2(Sock, EpochID, - CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra) + NSInfo = machi_util:ns_info_default(NSInfo0), + append_chunk2(Sock, NSInfo, EpochID, + Prefix, Chunk, CSum, Opts, Timeout) after disconnect(Sock) end. @@ -628,23 +546,24 @@ read_chunk2(Sock, EpochID, File0, Offset, Size, Opts) -> {low_read_chunk, EpochID, File, Offset, Size, Opts}), do_pb_request_common(Sock, ReqID, Req). -append_chunk2(Sock, EpochID, CoC_Namespace, CoC_Locator, - Prefix0, Chunk0, ChunkExtra) -> +append_chunk2(Sock, NSInfo, EpochID, + Prefix0, Chunk, CSum0, Opts, Timeout) -> ReqID = <<"id">>, - {Chunk, CSum_tag, CSum} = - case Chunk0 of - X when is_binary(X) -> - {Chunk0, ?CSUM_TAG_NONE, <<>>}; - {ChunkCSum, Chk} -> - {Tag, CS} = machi_util:unmake_tagged_csum(ChunkCSum), - {Chk, Tag, CS} - end, Prefix = machi_util:make_binary(Prefix0), + {CSum_tag, CSum} = case CSum0 of + <<>> -> + {?CSUM_TAG_NONE, <<>>}; + {_Tag, _CS} -> + CSum0; + B when is_binary(B) -> + machi_util:unmake_tagged_csum(CSum0) + end, + #ns_info{version=NSVersion, name=NS, locator=NSLocator} = NSInfo, Req = machi_pb_translate:to_pb_request( ReqID, - {low_append_chunk, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum_tag, CSum, ChunkExtra}), - do_pb_request_common(Sock, ReqID, Req). + {low_append_chunk, NSVersion, NS, NSLocator, EpochID, + Prefix, Chunk, CSum_tag, CSum, Opts}), + do_pb_request_common(Sock, ReqID, Req, true, Timeout). 
write_chunk2(Sock, EpochID, File0, Offset, Chunk0) -> ReqID = <<"id">>, @@ -739,18 +658,18 @@ kick_projection_reaction2(Sock, _Options) -> ReqID = <<42>>, Req = machi_pb_translate:to_pb_request( ReqID, {low_proj, {kick_projection_reaction}}), - do_pb_request_common(Sock, ReqID, Req, false). + do_pb_request_common(Sock, ReqID, Req, false, ?LONG_TIMEOUT). do_pb_request_common(Sock, ReqID, Req) -> - do_pb_request_common(Sock, ReqID, Req, true). + do_pb_request_common(Sock, ReqID, Req, true, ?LONG_TIMEOUT). -do_pb_request_common(Sock, ReqID, Req, GetReply_p) -> +do_pb_request_common(Sock, ReqID, Req, GetReply_p, Timeout) -> erase(bad_sock), try ReqBin = list_to_binary(machi_pb:encode_mpb_ll_request(Req)), ok = w_send(Sock, ReqBin), if GetReply_p -> - case w_recv(Sock, 0) of + case w_recv(Sock, 0, Timeout) of {ok, RespBin} -> Resp = machi_pb:decode_mpb_ll_response(RespBin), {ReqID2, Reply} = machi_pb_translate:from_pb_response(Resp), @@ -796,7 +715,7 @@ w_connect(#p_srvr{proto_mod=?MODULE, address=Host, port=Port, props=Props}=_P)-> case proplists:get_value(session_proto, Props, tcp) of tcp -> put(xxx, goofus), - Sock = machi_util:connect(Host, Port, ?HARD_TIMEOUT), + Sock = machi_util:connect(Host, Port, ?SHORT_TIMEOUT), put(xxx, Sock), ok = inet:setopts(Sock, ?PB_PACKET_OPTS), {w,tcp,Sock}; @@ -820,8 +739,8 @@ w_close({w,tcp,Sock}) -> catch gen_tcp:close(Sock), ok. -w_recv({w,tcp,Sock}, Amt) -> - gen_tcp:recv(Sock, Amt, ?HARD_TIMEOUT). +w_recv({w,tcp,Sock}, Amt, Timeout) -> + gen_tcp:recv(Sock, Amt, Timeout). w_send({w,tcp,Sock}, IoData) -> gen_tcp:send(Sock, IoData). 
diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 6610230..588b8d8 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -264,13 +264,24 @@ do_pb_ll_request3({low_proj, PCMD}, S) -> {do_server_proj_request(PCMD, S), S}; %% Witness status *matters* below -do_pb_ll_request3({low_append_chunk, _EpochID, CoC_Namespace, CoC_Locator, +do_pb_ll_request3({low_append_chunk, NSVersion, NS, NSLocator, EpochID, Prefix, Chunk, CSum_tag, - CSum, ChunkExtra}, + CSum, Opts}, #state{witness=false}=S) -> - {do_server_append_chunk(CoC_Namespace, CoC_Locator, + %% io:format(user, " + %% append_chunk namespace_version=~p + %% namespace=~p + %% locator=~p + %% epoch_id=~p + %% prefix=~p + %% chunk=~p + %% csum={~p,~p} + %% opts=~p\n", + %% [NSVersion, NS, NSLocator, EpochID, Prefix, Chunk, CSum_tag, CSum, Opts]), + NSInfo = #ns_info{version=NSVersion, name=NS, locator=NSLocator}, + {do_server_append_chunk(NSInfo, EpochID, Prefix, Chunk, CSum_tag, CSum, - ChunkExtra, S), S}; + Opts, S), S}; do_pb_ll_request3({low_write_chunk, _EpochID, File, Offset, Chunk, CSum_tag, CSum}, #state{witness=false}=S) -> @@ -334,27 +345,27 @@ do_server_proj_request({kick_projection_reaction}, end), async_no_response. -do_server_append_chunk(CoC_Namespace, CoC_Locator, +do_server_append_chunk(NSInfo, EpochID, Prefix, Chunk, CSum_tag, CSum, - ChunkExtra, S) -> + Opts, S) -> case sanitize_prefix(Prefix) of ok -> - do_server_append_chunk2(CoC_Namespace, CoC_Locator, + do_server_append_chunk2(NSInfo, EpochID, Prefix, Chunk, CSum_tag, CSum, - ChunkExtra, S); + Opts, S); _ -> {error, bad_arg} end. -do_server_append_chunk2(CoC_Namespace, CoC_Locator, +do_server_append_chunk2(NSInfo, EpochID, Prefix, Chunk, CSum_tag, Client_CSum, - ChunkExtra, #state{flu_name=FluName, - epoch_id=EpochID}=_S) -> + Opts, #state{flu_name=FluName, + epoch_id=EpochID}=_S) -> %% TODO: Do anything with PKey? 
try TaggedCSum = check_or_make_tagged_checksum(CSum_tag, Client_CSum,Chunk), - R = {seq_append, self(), CoC_Namespace, CoC_Locator, - Prefix, Chunk, TaggedCSum, ChunkExtra, EpochID}, + R = {seq_append, self(), NSInfo, EpochID, + Prefix, Chunk, TaggedCSum, Opts}, case gen_server:call(FluName, R, 10*1000) of {assignment, Offset, File} -> Size = iolist_size(Chunk), @@ -563,13 +574,11 @@ do_pb_hl_request2({high_echo, Msg}, S) -> {Msg, S}; do_pb_hl_request2({high_auth, _User, _Pass}, S) -> {-77, S}; -do_pb_hl_request2({high_append_chunk, CoC_Namespace, CoC_Locator, - Prefix, ChunkBin, TaggedCSum, - ChunkExtra}, #state{high_clnt=Clnt}=S) -> - Chunk = {TaggedCSum, ChunkBin}, - Res = machi_cr_client:append_chunk_extra(Clnt, CoC_Namespace, CoC_Locator, - Prefix, Chunk, - ChunkExtra), +do_pb_hl_request2({high_append_chunk, NS, Prefix, Chunk, TaggedCSum, Opts}, + #state{high_clnt=Clnt}=S) -> + NSInfo = #ns_info{name=NS}, % TODO populate other fields + Res = machi_cr_client:append_chunk(Clnt, NSInfo, + Prefix, Chunk, TaggedCSum, Opts), {Res, S}; do_pb_hl_request2({high_write_chunk, File, Offset, ChunkBin, TaggedCSum}, #state{high_clnt=Clnt}=S) -> diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl index 293fdc3..7140266 100644 --- a/src/machi_flu_filename_mgr.erl +++ b/src/machi_flu_filename_mgr.erl @@ -67,6 +67,7 @@ ]). -define(TIMEOUT, 10 * 1000). +-include("machi.hrl"). %% included for #ns_info record -include("machi_projection.hrl"). %% included for pv1_epoch type -record(state, {fluname :: atom(), @@ -90,28 +91,28 @@ start_link(FluName, DataDir) when is_atom(FluName) andalso is_list(DataDir) -> -spec find_or_make_filename_from_prefix( FluName :: atom(), EpochId :: pv1_epoch(), Prefix :: {prefix, string()}, - machi_dt:coc_nl()) -> + machi_dt:ns_info()) -> {file, Filename :: string()} | {error, Reason :: term() } | timeout. % @doc Find the latest available or make a filename from a prefix. 
A prefix % should be in the form of a tagged tuple `{prefix, P}'. Returns a tagged % tuple in the form of `{file, F}' or an `{error, Reason}' find_or_make_filename_from_prefix(FluName, EpochId, {prefix, Prefix}, - {coc, _CoC_Ns, _CoC_Loc}=CoC_NL) + #ns_info{}=NSInfo) when is_atom(FluName) -> N = make_filename_mgr_name(FluName), - gen_server:call(N, {find_filename, EpochId, CoC_NL, Prefix}, ?TIMEOUT); + gen_server:call(N, {find_filename, EpochId, NSInfo, Prefix}, ?TIMEOUT); find_or_make_filename_from_prefix(_FluName, _EpochId, Other, Other2) -> - lager:error("~p is not a valid prefix/CoC ~p", [Other, Other2]), + lager:error("~p is not a valid prefix/locator ~p", [Other, Other2]), error(badarg). --spec increment_prefix_sequence( FluName :: atom(), CoC_NL :: machi_dt:coc_nl(), Prefix :: {prefix, string()} ) -> +-spec increment_prefix_sequence( FluName :: atom(), NSInfo :: machi_dt:ns_info(), Prefix :: {prefix, string()} ) -> ok | {error, Reason :: term() } | timeout. % @doc Increment the sequence counter for a given prefix. Prefix should % be in the form of `{prefix, P}'. -increment_prefix_sequence(FluName, {coc,_CoC_Namespace,_CoC_Locator}=CoC_NL, {prefix, Prefix}) when is_atom(FluName) -> - gen_server:call(make_filename_mgr_name(FluName), {increment_sequence, CoC_NL, Prefix}, ?TIMEOUT); -increment_prefix_sequence(_FluName, _CoC_NL, Other) -> +increment_prefix_sequence(FluName, #ns_info{}=NSInfo, {prefix, Prefix}) when is_atom(FluName) -> + gen_server:call(make_filename_mgr_name(FluName), {increment_sequence, NSInfo, Prefix}, ?TIMEOUT); +increment_prefix_sequence(_FluName, _NSInfo, Other) -> lager:error("~p is not a valid prefix.", [Other]), error(badarg). @@ -142,23 +143,23 @@ handle_cast(Req, State) -> %% the FLU has already validated that the caller's epoch id and the FLU's epoch id %% are the same. So we *assume* that remains the case here - that is to say, we %% are not wedged. 
-handle_call({find_filename, EpochId, CoC_NL, Prefix}, _From, S = #state{ datadir = DataDir, +handle_call({find_filename, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, epoch = EpochId, tid = Tid }) -> %% Our state and the caller's epoch ids are the same. Business as usual. - File = handle_find_file(Tid, CoC_NL, Prefix, DataDir), + File = handle_find_file(Tid, NSInfo, Prefix, DataDir), {reply, {file, File}, S}; -handle_call({find_filename, EpochId, CoC_NL, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) -> +handle_call({find_filename, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) -> %% If the epoch id in our state and the caller's epoch id were the same, it would've %% matched the above clause. Since we're here, we know that they are different. %% If epoch ids between our state and the caller's are different, we must increment the %% sequence number, generate a filename and then cache it. - File = increment_and_cache_filename(Tid, DataDir, CoC_NL, Prefix), + File = increment_and_cache_filename(Tid, DataDir, NSInfo, Prefix), {reply, {file, File}, S#state{epoch = EpochId}}; -handle_call({increment_sequence, {coc,CoC_Namespace,CoC_Locator}=_CoC_NL, Prefix}, _From, S = #state{ datadir = DataDir }) -> - ok = machi_util:increment_max_filenum(DataDir, CoC_Namespace,CoC_Locator, Prefix), +handle_call({increment_sequence, #ns_info{name=NS, locator=NSLocator}, Prefix}, _From, S = #state{ datadir = DataDir }) -> + ok = machi_util:increment_max_filenum(DataDir, NS, NSLocator, Prefix), {reply, ok, S}; handle_call({list_files, Prefix}, From, S = #state{ datadir = DataDir }) -> spawn(fun() -> @@ -191,9 +192,9 @@ generate_uuid_v4_str() -> io_lib:format("~8.16.0b-~4.16.0b-4~3.16.0b-~4.16.0b-~12.16.0b", [A, B, C band 16#0fff, D band 16#3fff bor 16#8000, E]). 
-find_file(DataDir, {coc,CoC_Namespace,CoC_Locator}=_CoC_NL, Prefix, N) -> +find_file(DataDir, #ns_info{name=NS, locator=NSLocator}=_NSInfo, Prefix, N) -> {_Filename, Path} = machi_util:make_data_filename(DataDir, - CoC_Namespace,CoC_Locator, + NS, NSLocator, Prefix, "*", N), filelib:wildcard(Path). @@ -204,11 +205,11 @@ list_files(DataDir, Prefix) -> make_filename_mgr_name(FluName) when is_atom(FluName) -> list_to_atom(atom_to_list(FluName) ++ "_filename_mgr"). -handle_find_file(Tid, {coc,CoC_Namespace,CoC_Locator}=CoC_NL, Prefix, DataDir) -> - N = machi_util:read_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix), - {File, Cleanup} = case find_file(DataDir, CoC_NL, Prefix, N) of +handle_find_file(Tid, #ns_info{name=NS, locator=NSLocator}=NSInfo, Prefix, DataDir) -> + N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), + {File, Cleanup} = case find_file(DataDir, NSInfo, Prefix, N) of [] -> - {find_or_make_filename(Tid, DataDir, CoC_Namespace, CoC_Locator, Prefix, N), false}; + {find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N), false}; [H] -> {H, true}; [Fn | _ ] = L -> lager:debug( @@ -216,23 +217,23 @@ handle_find_file(Tid, {coc,CoC_Namespace,CoC_Locator}=CoC_NL, Prefix, DataDir) - [Prefix, N, L]), {Fn, true} end, - maybe_cleanup(Tid, {CoC_Namespace, CoC_Locator, Prefix, N}, Cleanup), + maybe_cleanup(Tid, {NS, NSLocator, Prefix, N}, Cleanup), filename:basename(File). -find_or_make_filename(Tid, DataDir, CoC_Namespace, CoC_Locator, Prefix, N) -> - case ets:lookup(Tid, {CoC_Namespace, CoC_Locator, Prefix, N}) of +find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N) -> + case ets:lookup(Tid, {NS, NSLocator, Prefix, N}) of [] -> - F = generate_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, N), - true = ets:insert_new(Tid, {{CoC_Namespace, CoC_Locator, Prefix, N}, F}), + F = generate_filename(DataDir, NS, NSLocator, Prefix, N), + true = ets:insert_new(Tid, {{NS, NSLocator, Prefix, N}, F}), F; [{_Key, File}] -> File end. 
-generate_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, N) -> +generate_filename(DataDir, NS, NSLocator, Prefix, N) -> {F, _} = machi_util:make_data_filename( DataDir, - CoC_Namespace, CoC_Locator, Prefix, + NS, NSLocator, Prefix, generate_uuid_v4_str(), N), binary_to_list(F). @@ -242,11 +243,11 @@ maybe_cleanup(_Tid, _Key, false) -> maybe_cleanup(Tid, Key, true) -> true = ets:delete(Tid, Key). -increment_and_cache_filename(Tid, DataDir, {coc,CoC_Namespace,CoC_Locator}, Prefix) -> - ok = machi_util:increment_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix), - N = machi_util:read_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix), - F = generate_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, N), - true = ets:insert_new(Tid, {{CoC_Namespace, CoC_Locator, Prefix, N}, F}), +increment_and_cache_filename(Tid, DataDir, #ns_info{name=NS,locator=NSLocator}, Prefix) -> + ok = machi_util:increment_max_filenum(DataDir, NS, NSLocator, Prefix), + N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), + F = generate_filename(DataDir, NS, NSLocator, Prefix, N), + true = ets:insert_new(Tid, {{NS, NSLocator, Prefix, N}, F}), filename:basename(F). diff --git a/src/machi_pb_high_client.erl b/src/machi_pb_high_client.erl index 5b2ab22..9c69358 100644 --- a/src/machi_pb_high_client.erl +++ b/src/machi_pb_high_client.erl @@ -38,7 +38,7 @@ connected_p/1, echo/2, echo/3, auth/3, auth/4, - append_chunk/7, append_chunk/8, + append_chunk/6, append_chunk/7, write_chunk/5, write_chunk/6, read_chunk/5, read_chunk/6, trim_chunk/4, trim_chunk/5, @@ -96,30 +96,33 @@ auth(PidSpec, User, Pass) -> auth(PidSpec, User, Pass, Timeout) -> send_sync(PidSpec, {auth, User, Pass}, Timeout). 
--spec append_chunk(pid(), CoC_namespace::binary(), CoC_locator::integer(), Prefix::binary(), Chunk::binary(), - CSum::binary(), ChunkExtra::non_neg_integer()) -> +-spec append_chunk(pid(), + NS::machi_dt:namespace(), Prefix::machi_dt:file_prefix(), + Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum(), + Opts::machi_dt:append_opts()) -> {ok, Filename::string(), Offset::machi_dt:file_offset()} | {error, machi_client_error_reason()}. -append_chunk(PidSpec, CoC_namespace, CoC_locator, Prefix, Chunk, CSum, ChunkExtra) -> - append_chunk(PidSpec, CoC_namespace, CoC_locator, Prefix, Chunk, CSum, ChunkExtra, ?DEFAULT_TIMEOUT). +append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts) -> + append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts, ?DEFAULT_TIMEOUT). --spec append_chunk(pid(), CoC_namespace::binary(), CoC_locator::integer(), Prefix::binary(), - Chunk::binary(), CSum::binary(), - ChunkExtra::non_neg_integer(), +-spec append_chunk(pid(), + NS::machi_dt:namespace(), Prefix::machi_dt:file_prefix(), + Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum(), + Opts::machi_dt:append_opts(), Timeout::non_neg_integer()) -> {ok, Filename::string(), Offset::machi_dt:file_offset()} | {error, machi_client_error_reason()}. -append_chunk(PidSpec, CoC_namespace, CoC_locator, Prefix, Chunk, CSum, ChunkExtra, Timeout) -> - send_sync(PidSpec, {append_chunk, CoC_namespace, CoC_locator, Prefix, Chunk, CSum, ChunkExtra}, Timeout). +append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts, Timeout) -> + send_sync(PidSpec, {append_chunk, NS, Prefix, Chunk, CSum, Opts}, Timeout). -spec write_chunk(pid(), File::string(), machi_dt:file_offset(), - Chunk::binary(), CSum::binary()) -> + Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum()) -> ok | {error, machi_client_error_reason()}. write_chunk(PidSpec, File, Offset, Chunk, CSum) -> write_chunk(PidSpec, File, Offset, Chunk, CSum, ?DEFAULT_TIMEOUT). 
-spec write_chunk(pid(), File::string(), machi_dt:file_offset(), - Chunk::binary(), CSum::binary(), Timeout::non_neg_integer()) -> + Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum(), Timeout::non_neg_integer()) -> ok | {error, machi_client_error_reason()}. write_chunk(PidSpec, File, Offset, Chunk, CSum, Timeout) -> send_sync(PidSpec, {write_chunk, File, Offset, Chunk, CSum}, Timeout). @@ -281,18 +284,19 @@ do_send_sync2({auth, User, Pass}, #state{sock=Sock}=S) -> Res = {bummer, {X, Y, erlang:get_stacktrace()}}, {Res, S} end; -do_send_sync2({append_chunk, CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum, ChunkExtra}, +do_send_sync2({append_chunk, NS, Prefix, Chunk, CSum, Opts}, #state{sock=Sock, sock_id=Index, count=Count}=S) -> try ReqID = <>, CSumT = convert_csum_req(CSum, Chunk), - Req = #mpb_appendchunkreq{coc_namespace=CoC_Namespace, - coc_locator=CoC_Locator, + {ChunkExtra, Pref, FailPref} = machi_pb_translate:conv_from_append_opts(Opts), + Req = #mpb_appendchunkreq{namespace=NS, prefix=Prefix, chunk=Chunk, csum=CSumT, - chunk_extra=ChunkExtra}, + chunk_extra=ChunkExtra, + preferred_file_name=Pref, + flag_fail_preferred=FailPref}, R1a = #mpb_request{req_id=ReqID, do_not_alter=1, append_chunk=Req}, Bin1a = machi_pb:encode_mpb_request(R1a), @@ -436,9 +440,15 @@ do_send_sync2({list_files}, {Res, S#state{count=Count+1}} end. +%% We only convert the checksum types that make sense here: +%% none or client_sha. None of the other types should be sent +%% to us via the PB high protocol. + convert_csum_req(none, Chunk) -> #mpb_chunkcsum{type='CSUM_TAG_CLIENT_SHA', csum=machi_util:checksum_chunk(Chunk)}; +convert_csum_req(<<>>, Chunk) -> + convert_csum_req(none, Chunk); convert_csum_req({client_sha, CSumBin}, _Chunk) -> #mpb_chunkcsum{type='CSUM_TAG_CLIENT_SHA', csum=CSumBin}. 
diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index cc8f728..c615912 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -34,7 +34,9 @@ -export([from_pb_request/1, from_pb_response/1, to_pb_request/2, - to_pb_response/3 + to_pb_response/3, + conv_from_append_opts/1, + conv_to_append_opts/1 ]). %% TODO: fixme cleanup @@ -50,19 +52,19 @@ from_pb_request(#mpb_ll_request{ {ReqID, {low_auth, undefined, User, Pass}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, - append_chunk=#mpb_ll_appendchunkreq{ + append_chunk=IR=#mpb_ll_appendchunkreq{ + namespace_version=NSVersion, + namespace=NS, + locator=NSLocator, epoch_id=PB_EpochID, - coc_namespace=CoC_Namespace, - coc_locator=CoC_Locator, prefix=Prefix, chunk=Chunk, - csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}, - chunk_extra=ChunkExtra}}) -> + csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}}}) -> EpochID = conv_to_epoch_id(PB_EpochID), CSum_tag = conv_to_csum_tag(CSum_type), - {ReqID, {low_append_chunk, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum_tag, CSum, - ChunkExtra}}; + Opts = conv_to_append_opts(IR), + {ReqID, {low_append_chunk, NSVersion, NS, NSLocator, EpochID, + Prefix, Chunk, CSum_tag, CSum, Opts}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, write_chunk=#mpb_ll_writechunkreq{ @@ -172,15 +174,13 @@ from_pb_request(#mpb_request{req_id=ReqID, {ReqID, {high_auth, User, Pass}}; from_pb_request(#mpb_request{req_id=ReqID, append_chunk=IR=#mpb_appendchunkreq{}}) -> - #mpb_appendchunkreq{coc_namespace=CoC_namespace, - coc_locator=CoC_locator, + #mpb_appendchunkreq{namespace=NS, prefix=Prefix, chunk=Chunk, - csum=CSum, - chunk_extra=ChunkExtra} = IR, + csum=CSum} = IR, TaggedCSum = make_tagged_csum(CSum, Chunk), - {ReqID, {high_append_chunk, CoC_namespace, CoC_locator, Prefix, Chunk, - TaggedCSum, ChunkExtra}}; + Opts = conv_to_append_opts(IR), + {ReqID, {high_append_chunk, NS, Prefix, Chunk, TaggedCSum, Opts}}; from_pb_request(#mpb_request{req_id=ReqID, 
write_chunk=IR=#mpb_writechunkreq{}}) -> #mpb_writechunkreq{chunk=#mpb_chunk{file_name=File, @@ -391,20 +391,24 @@ to_pb_request(ReqID, {low_echo, _BogusEpochID, Msg}) -> to_pb_request(ReqID, {low_auth, _BogusEpochID, User, Pass}) -> #mpb_ll_request{req_id=ReqID, do_not_alter=2, auth=#mpb_authreq{user=User, password=Pass}}; -to_pb_request(ReqID, {low_append_chunk, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, CSum_tag, CSum, ChunkExtra}) -> +to_pb_request(ReqID, {low_append_chunk, NSVersion, NS, NSLocator, EpochID, + Prefix, Chunk, CSum_tag, CSum, Opts}) -> PB_EpochID = conv_from_epoch_id(EpochID), CSum_type = conv_from_csum_tag(CSum_tag), PB_CSum = #mpb_chunkcsum{type=CSum_type, csum=CSum}, + {ChunkExtra, Pref, FailPref} = conv_from_append_opts(Opts), #mpb_ll_request{req_id=ReqID, do_not_alter=2, append_chunk=#mpb_ll_appendchunkreq{ + namespace_version=NSVersion, + namespace=NS, + locator=NSLocator, epoch_id=PB_EpochID, - coc_namespace=CoC_Namespace, - coc_locator=CoC_Locator, prefix=Prefix, chunk=Chunk, csum=PB_CSum, - chunk_extra=ChunkExtra}}; + chunk_extra=ChunkExtra, + preferred_file_name=Pref, + flag_fail_preferred=FailPref}}; to_pb_request(ReqID, {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag, CSum}) -> PB_EpochID = conv_from_epoch_id(EpochID), CSum_type = conv_from_csum_tag(CSum_tag), @@ -504,7 +508,7 @@ to_pb_response(ReqID, {low_auth, _, _, _}, __TODO_Resp) -> #mpb_ll_response{req_id=ReqID, generic=#mpb_errorresp{code=1, msg="AUTH not implemented"}}; -to_pb_response(ReqID, {low_append_chunk, _EID, _N, _L, _Pfx, _Ch, _CST, _CS, _CE}, Resp)-> +to_pb_response(ReqID, {low_append_chunk, _NSV, _NS, _NSL, _EID, _Pfx, _Ch, _CST, _CS, _O}, Resp)-> case Resp of {ok, {Offset, Size, File}} -> Where = #mpb_chunkpos{offset=Offset, @@ -691,7 +695,7 @@ to_pb_response(ReqID, {high_auth, _User, _Pass}, _Resp) -> #mpb_response{req_id=ReqID, generic=#mpb_errorresp{code=1, msg="AUTH not implemented"}}; -to_pb_response(ReqID, {high_append_chunk, _CoC_n, 
_CoC_l, _Prefix, _Chunk, _TSum, _CE}, Resp)-> +to_pb_response(ReqID, {high_append_chunk, _NS, _Prefix, _Chunk, _TSum, _O}, Resp)-> case Resp of {ok, {Offset, Size, File}} -> Where = #mpb_chunkpos{offset=Offset, @@ -974,6 +978,27 @@ conv_from_boolean(false) -> conv_from_boolean(true) -> 1. +conv_from_append_opts(#append_opts{chunk_extra=ChunkExtra, + preferred_file_name=Pref, + flag_fail_preferred=FailPref}) -> + {ChunkExtra, Pref, conv_from_boolean(FailPref)}. + + +conv_to_append_opts(#mpb_appendchunkreq{ + chunk_extra=ChunkExtra, + preferred_file_name=Pref, + flag_fail_preferred=FailPref}) -> + #append_opts{chunk_extra=ChunkExtra, + preferred_file_name=Pref, + flag_fail_preferred=conv_to_boolean(FailPref)}; +conv_to_append_opts(#mpb_ll_appendchunkreq{ + chunk_extra=ChunkExtra, + preferred_file_name=Pref, + flag_fail_preferred=FailPref}) -> + #append_opts{chunk_extra=ChunkExtra, + preferred_file_name=Pref, + flag_fail_preferred=conv_to_boolean(FailPref)}. + conv_from_projection_v1(#projection_v1{epoch_number=Epoch, epoch_csum=CSum, author_server=Author, diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl index e4bc0d2..947a307 100644 --- a/src/machi_proxy_flu1_client.erl +++ b/src/machi_proxy_flu1_client.erl @@ -57,10 +57,7 @@ %% FLU1 API -export([ %% File API - append_chunk/4, append_chunk/5, - append_chunk/6, append_chunk/7, - append_chunk_extra/5, append_chunk_extra/6, - append_chunk_extra/7, append_chunk_extra/8, + append_chunk/6, append_chunk/8, read_chunk/6, read_chunk/7, checksum_list/3, checksum_list/4, list_files/2, list_files/3, @@ -106,58 +103,17 @@ start_link(#p_srvr{}=I) -> %% @doc Append a chunk (binary- or iolist-style) of data to a file %% with `Prefix'. -append_chunk(PidSpec, EpochID, Prefix, Chunk) -> - append_chunk(PidSpec, EpochID, Prefix, Chunk, infinity). +append_chunk(PidSpec, NSInfo, EpochID, Prefix, Chunk, CSum) -> + append_chunk(PidSpec, NSInfo, EpochID, Prefix, Chunk, CSum, + #append_opts{}, infinity). 
%% @doc Append a chunk (binary- or iolist-style) of data to a file %% with `Prefix'. -append_chunk(PidSpec, EpochID, Prefix, Chunk, Timeout) -> - append_chunk_extra(PidSpec, EpochID, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, 0, Timeout). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - -append_chunk(PidSpec, EpochID, CoC_Namespace, CoC_Locator, Prefix, Chunk) -> - append_chunk(PidSpec, EpochID, CoC_Namespace, CoC_Locator, Prefix, Chunk, infinity). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - -append_chunk(PidSpec, EpochID, CoC_Namespace, CoC_Locator, Prefix, Chunk, Timeout) -> - append_chunk_extra(PidSpec, EpochID, - CoC_Namespace, CoC_Locator, - Prefix, Chunk, 0, Timeout). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - -append_chunk_extra(PidSpec, EpochID, Prefix, Chunk, ChunkExtra) - when is_integer(ChunkExtra), ChunkExtra >= 0 -> - append_chunk_extra(PidSpec, EpochID, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, ChunkExtra, infinity). - -%% @doc Append a chunk (binary- or iolist-style) of data to a file -%% with `Prefix'. - -append_chunk_extra(PidSpec, EpochID, Prefix, Chunk, ChunkExtra, Timeout) -> - append_chunk_extra(PidSpec, EpochID, - ?DEFAULT_COC_NAMESPACE, ?DEFAULT_COC_LOCATOR, - Prefix, Chunk, ChunkExtra, Timeout). - -append_chunk_extra(PidSpec, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra) -> - append_chunk_extra(PidSpec, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra, infinity). 
- -append_chunk_extra(PidSpec, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra, Timeout) -> - gen_server:call(PidSpec, {req, {append_chunk_extra, EpochID, - CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra}}, +append_chunk(PidSpec, NSInfo, EpochID, Prefix, Chunk, CSum, Opts, + Timeout) -> + gen_server:call(PidSpec, {req, {append_chunk, NSInfo, EpochID, + Prefix, Chunk, CSum, Opts, Timeout}}, Timeout). %% @doc Read a chunk of data of size `Size' from `File' at `Offset'. @@ -415,11 +371,11 @@ do_req_retry(_Req, 2, Err, S) -> do_req_retry(Req, Depth, _Err, S) -> do_req(Req, Depth + 1, try_connect(disconnect(S))). -make_req_fun({append_chunk_extra, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra}, +make_req_fun({append_chunk, NSInfo, EpochID, + Prefix, Chunk, CSum, Opts, Timeout}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> - fun() -> Mod:append_chunk_extra(Sock, EpochID, CoC_Namespace, CoC_Locator, - Prefix, Chunk, ChunkExtra) + fun() -> Mod:append_chunk(Sock, NSInfo, EpochID, + Prefix, Chunk, CSum, Opts, Timeout) end; make_req_fun({read_chunk, EpochID, File, Offset, Size, Opts}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> diff --git a/src/machi_util.erl b/src/machi_util.erl index aa5f070..bc885aa 100644 --- a/src/machi_util.erl +++ b/src/machi_util.erl @@ -49,7 +49,8 @@ %% Other wait_for_death/2, wait_for_life/2, bool2int/1, - int2bool/1 + int2bool/1, + ns_info_default/1 ]). -include("machi.hrl"). @@ -68,12 +69,12 @@ make_regname(Prefix) when is_list(Prefix) -> %% @doc Calculate a config file path, by common convention. --spec make_config_filename(string(), machi_dt:coc_namespace(), machi_dt:coc_locator(), string()) -> +-spec make_config_filename(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> string(). 
-make_config_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix) -> - Locator_str = int_to_hexstr(CoC_Locator, 32), +make_config_filename(DataDir, NS, Locator, Prefix) -> + Locator_str = int_to_hexstr(Locator, 32), lists:flatten(io_lib:format("~s/config/~s^~s^~s", - [DataDir, Prefix, CoC_Namespace, Locator_str])). + [DataDir, Prefix, NS, Locator_str])). %% @doc Calculate a config file path, by common convention. @@ -102,19 +103,19 @@ make_checksum_filename(DataDir, FileName) -> %% @doc Calculate a file data file path, by common convention. --spec make_data_filename(string(), machi_dt:coc_namespace(), machi_dt:coc_locator(), string(), atom()|string()|binary(), integer()|string()) -> +-spec make_data_filename(string(), machi_dt:namespace(), machi_dt:locator(), string(), atom()|string()|binary(), integer()|string()) -> {binary(), string()}. -make_data_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, SequencerName, FileNum) +make_data_filename(DataDir, NS, Locator, Prefix, SequencerName, FileNum) when is_integer(FileNum) -> - Locator_str = int_to_hexstr(CoC_Locator, 32), + Locator_str = int_to_hexstr(Locator, 32), File = erlang:iolist_to_binary(io_lib:format("~s^~s^~s^~s^~w", - [Prefix, CoC_Namespace, Locator_str, SequencerName, FileNum])), + [Prefix, NS, Locator_str, SequencerName, FileNum])), make_data_filename2(DataDir, File); -make_data_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, SequencerName, String) +make_data_filename(DataDir, NS, Locator, Prefix, SequencerName, String) when is_list(String) -> - Locator_str = int_to_hexstr(CoC_Locator, 32), + Locator_str = int_to_hexstr(Locator, 32), File = erlang:iolist_to_binary(io_lib:format("~s^~s^~s^~s^~s", - [Prefix, CoC_Namespace, Locator_str, SequencerName, string])), + [Prefix, NS, Locator_str, SequencerName, string])), make_data_filename2(DataDir, File). make_data_filename2(DataDir, File) -> @@ -161,7 +162,7 @@ is_valid_filename(Filename) -> %% %% %% Invalid filenames will return an empty list. 
--spec parse_filename( Filename :: string() ) -> {} | {string(), machi_dt:coc_namespace(), machi_dt:coc_locator(), string(), string() }. +-spec parse_filename( Filename :: string() ) -> {} | {string(), machi_dt:namespace(), machi_dt:locator(), string(), string() }. parse_filename(Filename) -> case string:tokens(Filename, "^") of [Prefix, CoC_NS, CoC_Loc, UUID, SeqNo] -> @@ -181,10 +182,10 @@ parse_filename(Filename) -> %% @doc Read the file size of a config file, which is used as the %% basis for a minimum sequence number. --spec read_max_filenum(string(), machi_dt:coc_namespace(), machi_dt:coc_locator(), string()) -> +-spec read_max_filenum(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> non_neg_integer(). -read_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix) -> - case file:read_file_info(make_config_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix)) of +read_max_filenum(DataDir, NS, Locator, Prefix) -> + case file:read_file_info(make_config_filename(DataDir, NS, Locator, Prefix)) of {error, enoent} -> 0; {ok, FI} -> @@ -194,11 +195,11 @@ read_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix) -> %% @doc Increase the file size of a config file, which is used as the %% basis for a minimum sequence number. --spec increment_max_filenum(string(), machi_dt:coc_namespace(), machi_dt:coc_locator(), string()) -> +-spec increment_max_filenum(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> ok | {error, term()}. 
-increment_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix) -> +increment_max_filenum(DataDir, NS, Locator, Prefix) -> try - {ok, FH} = file:open(make_config_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix), [append]), + {ok, FH} = file:open(make_config_filename(DataDir, NS, Locator, Prefix), [append]), ok = file:write(FH, "x"), ok = file:sync(FH), ok = file:close(FH) @@ -287,12 +288,25 @@ int_to_hexbin(I, I_size) -> checksum_chunk(Chunk) when is_binary(Chunk); is_list(Chunk) -> crypto:hash(sha, Chunk). +convert_csum_tag(A) when is_atom(A)-> + A; +convert_csum_tag(?CSUM_TAG_NONE) -> + ?CSUM_TAG_NONE_ATOM; +convert_csum_tag(?CSUM_TAG_CLIENT_SHA) -> + ?CSUM_TAG_CLIENT_SHA_ATOM; +convert_csum_tag(?CSUM_TAG_SERVER_SHA) -> + ?CSUM_TAG_SERVER_SHA_ATOM; +convert_csum_tag(?CSUM_TAG_SERVER_REGEN_SHA) -> + ?CSUM_TAG_SERVER_REGEN_SHA_ATOM. + %% @doc Create a tagged checksum make_tagged_csum(none) -> <<?CSUM_TAG_NONE:8>>; +make_tagged_csum(<<>>) -> + <<?CSUM_TAG_NONE:8>>; make_tagged_csum({Tag, CSum}) -> - make_tagged_csum(Tag, CSum). + make_tagged_csum(convert_csum_tag(Tag), CSum). %% @doc Makes tagged csum. Each meanings are: %% none / ?CSUM_TAG_NONE @@ -431,3 +445,10 @@ bool2int(true) -> 1; bool2int(false) -> 0. int2bool(0) -> false; int2bool(I) when is_integer(I) -> true. + +ns_info_default(#ns_info{}=NSInfo) -> + NSInfo; +ns_info_default(undefined) -> + #ns_info{}. + + diff --git a/src/machi_yessir_client.erl b/src/machi_yessir_client.erl index 1bdef2a..b26298a 100644 --- a/src/machi_yessir_client.erl +++ b/src/machi_yessir_client.erl @@ -22,6 +22,8 @@ -module(machi_yessir_client). +-ifdef(TODO_refactoring_deferred). + -include("machi.hrl"). -include("machi_projection.hrl").
@@ -509,3 +511,5 @@ disconnect(#yessir{name=Name}) -> %% =INFO REPORT==== 17-May-2015::18:57:52 === %% Repair success: tail a of [a] finished ap_mode repair ID {a,{1431,856671,140404}}: ok %% Stats [{t_in_files,0},{t_in_chunks,10413},{t_in_bytes,682426368},{t_out_files,0},{t_out_chunks,10413},{t_out_bytes,682426368},{t_bad_chunks,0},{t_elapsed_seconds,1.591}] + +-endif. % TODO_refactoring_deferred diff --git a/test/machi_admin_util_test.erl b/test/machi_admin_util_test.erl index 1ebbbf3..cd4d813 100644 --- a/test/machi_admin_util_test.erl +++ b/test/machi_admin_util_test.erl @@ -44,6 +44,8 @@ verify_file_checksums_test2() -> TcpPort = 32958, DataDir = "./data", W_props = [{initial_wedged, false}], + NSInfo = undefined, + NoCSum = <<>>, try machi_test_util:start_flu_package(verify1_flu, TcpPort, DataDir, W_props), @@ -51,8 +53,8 @@ verify_file_checksums_test2() -> try Prefix = <<"verify_prefix">>, NumChunks = 10, - [{ok, _} = ?FLU_C:append_chunk(Sock1, ?DUMMY_PV1_EPOCH, - Prefix, <>) || + [{ok, _} = ?FLU_C:append_chunk(Sock1, NSInfo, ?DUMMY_PV1_EPOCH, + Prefix, <>, NoCSum) || X <- lists:seq(1, NumChunks)], {ok, [{_FileSize,File}]} = ?FLU_C:list_files(Sock1, ?DUMMY_PV1_EPOCH), ?assertEqual({ok, []}, diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl index 7d87d35..9c70474 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -118,7 +118,10 @@ append(CRIndex, Bin, #state{verbose=V}=S) -> {_SimSelfName, C} = lists:nth(CRIndex, CRList), Prefix = <<"pre">>, Len = byte_size(Bin), - Res = (catch machi_cr_client:append_chunk(C, Prefix, Bin, {sec(1), sec(1)})), + NSInfo = #ns_info{}, + NoCSum = <<>>, + Opts1 = #append_opts{}, + Res = (catch machi_cr_client:append_chunk(C, NSInfo, Prefix, Bin, NoCSum, Opts1, sec(1))), case Res of {ok, {_Off, Len, _FileName}=Key} -> case ets:insert_new(?WRITTEN_TAB, {Key, Bin}) of @@ -427,7 +430,7 @@ confirm_result(_T) -> 0 -> ok; _ -> DumpFailed = filename:join(DirBase, "dump-failed-" ++ 
Suffix), - ?V("Dump failed ETS tab to: ~w~n", [DumpFailed]), + ?V("Dump failed ETS tab to: ~s~n", [DumpFailed]), ets:tab2file(?FAILED_TAB, DumpFailed) end, case Critical of diff --git a/test/machi_cr_client_test.erl b/test/machi_cr_client_test.erl index 5179fc8..e4f1171 100644 --- a/test/machi_cr_client_test.erl +++ b/test/machi_cr_client_test.erl @@ -107,6 +107,8 @@ smoke_test2() -> try Prefix = <<"pre">>, Chunk1 = <<"yochunk">>, + NSInfo = undefined, + NoCSum = <<>>, Host = "localhost", PortBase = 64454, Os = [{ignore_stability_time, true}, {active_mode, false}], @@ -114,12 +116,12 @@ smoke_test2() -> %% Whew ... ok, now start some damn tests. {ok, C1} = machi_cr_client:start_link([P || {_,P}<-orddict:to_list(D)]), - machi_cr_client:append_chunk(C1, Prefix, Chunk1), + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), {ok, {Off1,Size1,File1}} = - machi_cr_client:append_chunk(C1, Prefix, Chunk1), - Chunk1_badcs = {<>, Chunk1}, + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), + BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, {error, bad_checksum} = - machi_cr_client:append_chunk(C1, Prefix, Chunk1_badcs), + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), {ok, {[{_, Off1, Chunk1, _}], []}} = machi_cr_client:read_chunk(C1, File1, Off1, Size1, []), {ok, PPP} = machi_flu1_client:read_latest_projection(Host, PortBase+0, @@ -173,18 +175,19 @@ smoke_test2() -> true = is_binary(KludgeBin), {error, bad_arg} = machi_cr_client:checksum_list(C1, <<"!!!!">>), -io:format(user, "\nFiles = ~p\n", [machi_cr_client:list_files(C1)]), + io:format(user, "\nFiles = ~p\n", [machi_cr_client:list_files(C1)]), %% Exactly one file right now, e.g., %% {ok,[{2098202,<<"pre^b144ef13-db4d-4c9f-96e7-caff02dc754f^1">>}]} {ok, [_]} = machi_cr_client:list_files(C1), - %% Go back and test append_chunk_extra() and write_chunk() + %% Go back and test append_chunk() + extra and write_chunk() Chunk10 = <<"It's a different chunk!">>, Size10 = 
byte_size(Chunk10), Extra10 = 5, + Opts1 = #append_opts{chunk_extra=Extra10*Size10}, {ok, {Off10,Size10,File10}} = - machi_cr_client:append_chunk_extra(C1, Prefix, Chunk10, - Extra10 * Size10), + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk10, + NoCSum, Opts1), {ok, {[{_, Off10, Chunk10, _}], []}} = machi_cr_client:read_chunk(C1, File10, Off10, Size10, []), [begin @@ -198,7 +201,7 @@ io:format(user, "\nFiles = ~p\n", [machi_cr_client:list_files(C1)]), machi_cr_client:read_chunk(C1, File10, Offx, Size10, []) end || Seq <- lists:seq(1, Extra10)], {ok, {Off11,Size11,File11}} = - machi_cr_client:append_chunk(C1, Prefix, Chunk10), + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk10, NoCSum), %% %% Double-check that our reserved extra bytes were really honored! %% true = (Off11 > (Off10 + (Extra10 * Size10))), io:format(user, "\nFiles = ~p\n", [machi_cr_client:list_files(C1)]), @@ -224,6 +227,8 @@ witness_smoke_test2() -> try Prefix = <<"pre">>, Chunk1 = <<"yochunk">>, + NSInfo = undefined, + NoCSum = <<>>, Host = "localhost", PortBase = 64444, Os = [{ignore_stability_time, true}, {active_mode, false}, @@ -233,12 +238,13 @@ witness_smoke_test2() -> %% Whew ... ok, now start some damn tests. 
{ok, C1} = machi_cr_client:start_link([P || {_,P}<-orddict:to_list(D)]), - {ok, _} = machi_cr_client:append_chunk(C1, Prefix, Chunk1), + {ok, _} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, + Chunk1, NoCSum), {ok, {Off1,Size1,File1}} = - machi_cr_client:append_chunk(C1, Prefix, Chunk1), - Chunk1_badcs = {<>, Chunk1}, + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), + BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, {error, bad_checksum} = - machi_cr_client:append_chunk(C1, Prefix, Chunk1_badcs), + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), {ok, {[{_, Off1, Chunk1, _}], []}} = machi_cr_client:read_chunk(C1, File1, Off1, Size1, []), @@ -270,7 +276,8 @@ witness_smoke_test2() -> machi_cr_client:read_chunk(C1, File1, Off1, Size1, []), %% But because the head is wedged, an append will fail. {error, partition} = - machi_cr_client:append_chunk(C1, Prefix, Chunk1, 1*1000), + machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum, + #append_opts{}, 1*1000), %% The witness's wedge status should cause timeout/partition %% for write_chunk also. 
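The test changes above all follow the same new client API shape: `append_chunk` now takes an explicit namespace argument (`NSInfo`, where `undefined` selects the default namespace), an explicit checksum (`<<>>` asks the server to compute one), and an `#append_opts{}` record that replaces the old `append_chunk_extra` variants. A minimal sketch of a call in the new style; the record fields are taken from this patch, but the default values shown are assumptions:

```erlang
%% Sketch only: #append_opts{} fields come from the hunks in this
%% patch; the defaults shown here are illustrative assumptions.
-record(append_opts, {chunk_extra = 0,
                      preferred_file_name,
                      flag_fail_preferred = false}).

append_example(C1) ->
    NSInfo = undefined,      % undefined => default namespace
    NoCSum = <<>>,           % empty => server computes the checksum
    Opts   = #append_opts{chunk_extra = 1024},
    machi_cr_client:append_chunk(C1, NSInfo, <<"pre">>, <<"chunk!">>,
                                 NoCSum, Opts, 1000).
```

Reserving `chunk_extra` bytes, as the smoke test above verifies, guarantees that later appends to the same prefix land past the reserved region.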
diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl index a1d098a..03097e4 100644 --- a/test/machi_flu1_test.erl +++ b/test/machi_flu1_test.erl @@ -91,6 +91,8 @@ flu_smoke_test() -> Host = "localhost", TcpPort = 12957, DataDir = "./data", + NSInfo = undefined, + NoCSum = <<>>, Prefix = <<"prefix!">>, BadPrefix = BadFile = "no/good", W_props = [{initial_wedged, false}], @@ -108,17 +110,17 @@ flu_smoke_test() -> {ok, {false, _}} = ?FLU_C:wedge_status(Host, TcpPort), Chunk1 = <<"yo!">>, - {ok, {Off1,Len1,File1}} = ?FLU_C:append_chunk(Host, TcpPort, + {ok, {Off1,Len1,File1}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - Prefix, Chunk1), + Prefix, Chunk1, NoCSum), {ok, {[{_, Off1, Chunk1, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH, File1, Off1, Len1, []), {ok, KludgeBin} = ?FLU_C:checksum_list(Host, TcpPort, ?DUMMY_PV1_EPOCH, File1), true = is_binary(KludgeBin), - {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, + {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - BadPrefix, Chunk1), + BadPrefix, Chunk1, NoCSum), {ok, [{_,File1}]} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH), Len1 = size(Chunk1), {error, not_written} = ?FLU_C:read_chunk(Host, TcpPort, @@ -135,16 +137,19 @@ flu_smoke_test() -> %% ?DUMMY_PV1_EPOCH, %% File1, Off1, Len1*9999), - {ok, {Off1b,Len1b,File1b}} = ?FLU_C:append_chunk(Host, TcpPort, + {ok, {Off1b,Len1b,File1b}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - Prefix, Chunk1), + Prefix, Chunk1,NoCSum), Extra = 42, - {ok, {Off1c,Len1c,File1c}} = ?FLU_C:append_chunk_extra(Host, TcpPort, + Opts1 = #append_opts{chunk_extra=Extra}, + {ok, {Off1c,Len1c,File1c}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - Prefix, Chunk1, Extra), + Prefix, Chunk1, NoCSum, + Opts1, infinity), {ok, {Off1d,Len1d,File1d}} = ?FLU_C:append_chunk(Host, TcpPort, + NSInfo, ?DUMMY_PV1_EPOCH, - Prefix, Chunk1), + Prefix, Chunk1,NoCSum), if File1b == 
File1c, File1c == File1d -> true = (Off1c == Off1b + Len1b), true = (Off1d == Off1c + Len1c + Extra); @@ -152,11 +157,6 @@ flu_smoke_test() -> exit(not_mandatory_but_test_expected_same_file_fixme) end, - Chunk1_cs = {<>, Chunk1}, - {ok, {Off1e,Len1e,File1e}} = ?FLU_C:append_chunk(Host, TcpPort, - ?DUMMY_PV1_EPOCH, - Prefix, Chunk1_cs), - Chunk2 = <<"yo yo">>, Len2 = byte_size(Chunk2), Off2 = ?MINIMUM_OFFSET + 77, @@ -238,13 +238,15 @@ bad_checksum_test() -> DataDir = "./data.bct", Opts = [{initial_wedged, false}], {_,_,_} = machi_test_util:start_flu_package(projection_test_flu, TcpPort, DataDir, Opts), + NSInfo = undefined, try Prefix = <<"some prefix">>, Chunk1 = <<"yo yo yo">>, - Chunk1_badcs = {<>, Chunk1}, - {error, bad_checksum} = ?FLU_C:append_chunk(Host, TcpPort, + BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, + {error, bad_checksum} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - Prefix, Chunk1_badcs), + Prefix, + Chunk1, BadCSum), ok after machi_test_util:stop_flu_package() @@ -256,6 +258,8 @@ witness_test() -> DataDir = "./data.witness", Opts = [{initial_wedged, false}, {witness_mode, true}], {_,_,_} = machi_test_util:start_flu_package(projection_test_flu, TcpPort, DataDir, Opts), + NSInfo = undefined, + NoCSum = <<>>, try Prefix = <<"some prefix">>, Chunk1 = <<"yo yo yo">>, @@ -268,8 +272,8 @@ witness_test() -> {ok, EpochID1} = ?FLU_C:get_latest_epochid(Host, TcpPort, private), %% Witness-protected ops all fail - {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, EpochID1, - Prefix, Chunk1), + {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, EpochID1, + Prefix, Chunk1, NoCSum), File = <<"foofile">>, {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, EpochID1, File, 9999, 9999, []), diff --git a/test/machi_flu_psup_test.erl b/test/machi_flu_psup_test.erl index 378ff74..14d4640 100644 --- a/test/machi_flu_psup_test.erl +++ b/test/machi_flu_psup_test.erl @@ -85,9 +85,12 @@ partial_stop_restart2() -> 
machi_flu1_client:wedge_status(Addr, TcpPort) end, Append = fun({_,#p_srvr{address=Addr, port=TcpPort}}, EpochID) -> + NSInfo = undefined, + NoCSum = <<>>, machi_flu1_client:append_chunk(Addr, TcpPort, - EpochID, - <<"prefix">>, <<"data">>) + NSInfo, EpochID, + <<"prefix">>, + <<"data">>, NoCSum) end, try [Start(P) || P <- Ps], diff --git a/test/machi_pb_high_client_test.erl b/test/machi_pb_high_client_test.erl index 16b125c..2371076 100644 --- a/test/machi_pb_high_client_test.erl +++ b/test/machi_pb_high_client_test.erl @@ -24,6 +24,7 @@ -ifdef(TEST). -ifndef(PULSE). +-include("machi.hrl"). -include("machi_pb.hrl"). -include("machi_projection.hrl"). -include_lib("eunit/include/eunit.hrl"). @@ -59,13 +60,16 @@ smoke_test2() -> CoC_l = 0, % CoC_locator (not implemented) Prefix = <<"prefix">>, Chunk1 = <<"Hello, chunk!">>, + NS = "", + NoCSum = <<>>, + Opts1 = #append_opts{}, {ok, {Off1, Size1, File1}} = - ?C:append_chunk(Clnt, CoC_n, CoC_l, Prefix, Chunk1, none, 0), + ?C:append_chunk(Clnt, NS, Prefix, Chunk1, NoCSum, Opts1), true = is_binary(File1), Chunk2 = "It's another chunk", CSum2 = {client_sha, machi_util:checksum_chunk(Chunk2)}, {ok, {Off2, Size2, File2}} = - ?C:append_chunk(Clnt, CoC_n, CoC_l, Prefix, Chunk2, CSum2, 1024), + ?C:append_chunk(Clnt, NS, Prefix, Chunk2, CSum2, Opts1), Chunk3 = ["This is a ", <<"test,">>, 32, [["Hello, world!"]]], File3 = File2, Off3 = Off2 + iolist_size(Chunk2), @@ -110,8 +114,8 @@ smoke_test2() -> LargeBytes = binary:copy(<<"x">>, 1024*1024), LBCsum = {client_sha, machi_util:checksum_chunk(LargeBytes)}, {ok, {Offx, Sizex, Filex}} = - ?C:append_chunk(Clnt, CoC_n, CoC_l, - Prefix, LargeBytes, LBCsum, 0), + ?C:append_chunk(Clnt, NS, + Prefix, LargeBytes, LBCsum, Opts1), ok = ?C:trim_chunk(Clnt, Filex, Offx, Sizex), %% Make sure everything was trimmed diff --git a/test/machi_proxy_flu1_client_test.erl b/test/machi_proxy_flu1_client_test.erl index b8556b7..c280072 100644 --- a/test/machi_proxy_flu1_client_test.erl +++ 
b/test/machi_proxy_flu1_client_test.erl @@ -36,6 +36,8 @@ api_smoke_test() -> DataDir = "./data.api_smoke_flu", W_props = [{active_mode, false},{initial_wedged, false}], Prefix = <<"prefix">>, + NSInfo = undefined, + NoCSum = <<>>, try {[I], _, _} = machi_test_util:start_flu_package( @@ -43,35 +45,40 @@ api_smoke_test() -> {ok, Prox1} = ?MUT:start_link(I), try FakeEpoch = ?DUMMY_PV1_EPOCH, - [{ok, {_,_,_}} = ?MUT:append_chunk(Prox1, - FakeEpoch, Prefix, <<"data">>, - infinity) || _ <- lists:seq(1,5)], + [{ok, {_,_,_}} = ?MUT:append_chunk( + Prox1, NSInfo, FakeEpoch, + Prefix, <<"data">>, NoCSum) || + _ <- lists:seq(1,5)], %% Stop the FLU, what happens? machi_test_util:stop_flu_package(), - [{error,partition} = ?MUT:append_chunk(Prox1, + [{error,partition} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, <<"data-stopped1">>, - infinity) || _ <- lists:seq(1,3)], + NoCSum) || _ <- lists:seq(1,3)], %% Start the FLU again, we should be able to do stuff immediately machi_test_util:start_flu_package(RegName, TcpPort, DataDir, [no_cleanup|W_props]), MyChunk = <<"my chunk data">>, {ok, {MyOff,MySize,MyFile}} = - ?MUT:append_chunk(Prox1, FakeEpoch, Prefix, MyChunk, - infinity), + ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk, + NoCSum), {ok, {[{_, MyOff, MyChunk, _}], []}} = ?MUT:read_chunk(Prox1, FakeEpoch, MyFile, MyOff, MySize, []), MyChunk2 = <<"my chunk data, yeah, again">>, + Opts1 = #append_opts{chunk_extra=4242}, {ok, {MyOff2,MySize2,MyFile2}} = - ?MUT:append_chunk_extra(Prox1, FakeEpoch, Prefix, - MyChunk2, 4242, infinity), + ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, + MyChunk2, NoCSum, Opts1, infinity), {ok, {[{_, MyOff2, MyChunk2, _}], []}} = ?MUT:read_chunk(Prox1, FakeEpoch, MyFile2, MyOff2, MySize2, []), - MyChunk_badcs = {<>, MyChunk}, - {error, bad_checksum} = ?MUT:append_chunk(Prox1, FakeEpoch, - Prefix, MyChunk_badcs), - {error, bad_checksum} = ?MUT:write_chunk(Prox1, FakeEpoch, - <<"foo-file^^0^1^1">>, 99832, - MyChunk_badcs), + 
BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, + {error, bad_checksum} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, + Prefix, MyChunk, BadCSum), + Opts2 = #append_opts{chunk_extra=99832}, +io:format(user, "\nTODO: fix write_chunk() call below @ ~s LINE ~w\n", [?MODULE,?LINE]), + %% {error, bad_checksum} = ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, + %% <<"foo-file^^0^1^1">>, + %% MyChunk, BadCSum, + %% Opts2, infinity), %% Put kick_projection_reaction() in the middle of the test so %% that any problems with its async nature will (hopefully) @@ -111,6 +118,8 @@ flu_restart_test2() -> TcpPort = 17125, DataDir = "./data.api_smoke_flu2", W_props = [{initial_wedged, false}, {active_mode, false}], + NSInfo = undefined, + NoCSum = <<>>, try {[I], _, _} = machi_test_util:start_flu_package( @@ -120,9 +129,8 @@ flu_restart_test2() -> FakeEpoch = ?DUMMY_PV1_EPOCH, Data = <<"data!">>, Dataxx = <<"Fake!">>, - {ok, {Off1,Size1,File1}} = ?MUT:append_chunk(Prox1, - FakeEpoch, <<"prefix">>, Data, - infinity), + {ok, {Off1,Size1,File1}} = ?MUT:append_chunk(Prox1, NSInfo, + FakeEpoch, <<"prefix">>, Data, NoCSum), P_a = #p_srvr{name=a, address="localhost", port=6622}, P1 = machi_projection:new(1, RegName, [P_a], [], [RegName], [], []), P1xx = P1#projection_v1{dbg2=["dbg2 changes are ok"]}, @@ -146,6 +154,7 @@ flu_restart_test2() -> %% makes the code a bit convoluted. (No LFE or %% Elixir macros here, alas, they'd be useful.) 
+ AppendOpts1 = #append_opts{chunk_extra=42}, ExpectedOps = [ fun(run) -> ?assertEqual({ok, EpochID}, ?MUT:get_epoch_id(Prox1)), @@ -227,20 +236,22 @@ flu_restart_test2() -> (stop) -> ?MUT:get_all_projections(Prox1, private) end, fun(run) -> {ok, {_,_,_}} = - ?MUT:append_chunk(Prox1, FakeEpoch, - <<"prefix">>, Data, infinity), + ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, + <<"prefix">>, Data, NoCSum), ok; (line) -> io:format("line ~p, ", [?LINE]); - (stop) -> ?MUT:append_chunk(Prox1, FakeEpoch, - <<"prefix">>, Data, infinity) + (stop) -> ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, + <<"prefix">>, Data, NoCSum) end, fun(run) -> {ok, {_,_,_}} = - ?MUT:append_chunk_extra(Prox1, FakeEpoch, - <<"prefix">>, Data, 42, infinity), + ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, + <<"prefix">>, Data, NoCSum, + AppendOpts1, infinity), ok; (line) -> io:format("line ~p, ", [?LINE]); - (stop) -> ?MUT:append_chunk_extra(Prox1, FakeEpoch, - <<"prefix">>, Data, 42, infinity) + (stop) -> ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, + <<"prefix">>, Data, NoCSum, + AppendOpts1, infinity) end, fun(run) -> {ok, {[{_, Off1, Data, _}], []}} = ?MUT:read_chunk(Prox1, FakeEpoch, diff --git a/test/machi_test_util.erl b/test/machi_test_util.erl index ff908b7..70b02af 100644 --- a/test/machi_test_util.erl +++ b/test/machi_test_util.erl @@ -83,7 +83,7 @@ stop_machi_sup() -> undefined -> ok; Pid -> catch exit(whereis(machi_sup), normal), - machi_util:wait_for_death(Pid, 30) + machi_util:wait_for_death(Pid, 100) end. 
clean_up(FluInfo) -> -- 2.45.2 From 6089ee685111ed39ce638af13aab49e4fad72c60 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Mon, 28 Dec 2015 12:58:26 +0900 Subject: [PATCH 03/53] read_chunk API refactoring; all tests pass; todo tasks remain --- src/machi.proto | 57 ++++++++++++++++---------- src/machi_admin_util.erl | 4 +- src/machi_chain_repair.erl | 4 +- src/machi_cr_client.erl | 58 +++++++++++++-------------- src/machi_flu1_client.erl | 22 +++++----- src/machi_flu1_net_server.erl | 13 ++++-- src/machi_pb_translate.erl | 14 ++++--- src/machi_proxy_flu1_client.erl | 18 +++++---- src/machi_util.erl | 2 +- test/machi_ap_repair_eqc.erl | 4 +- test/machi_cr_client_test.erl | 30 +++++++------- test/machi_flu1_test.erl | 17 ++++---- test/machi_flu_psup_test.erl | 4 +- test/machi_proxy_flu1_client_test.erl | 8 ++-- 14 files changed, 146 insertions(+), 109 deletions(-) diff --git a/src/machi.proto b/src/machi.proto index 462be0a..9aa90a6 100644 --- a/src/machi.proto +++ b/src/machi.proto @@ -170,15 +170,18 @@ message Mpb_AuthResp { // High level API: append_chunk() request & response message Mpb_AppendChunkReq { + // General namespace arguments /* In single chain/non-clustered environment, use namespace="" */ required string namespace = 1; - required string prefix = 3; - required bytes chunk = 4; - required Mpb_ChunkCSum csum = 5; - optional uint32 chunk_extra = 6; - optional string preferred_file_name = 7; + + required string prefix = 10; + required bytes chunk = 11; + required Mpb_ChunkCSum csum = 12; + + optional uint32 chunk_extra = 20; + optional string preferred_file_name = 21; /* Fail the operation if our preferred file name is not available */ - optional uint32 flag_fail_preferred = 8; + optional uint32 flag_fail_preferred = 22; } message Mpb_AppendChunkResp { @@ -200,19 +203,22 @@ message Mpb_WriteChunkResp { // High level API: read_chunk() request & response message Mpb_ReadChunkReq { - required Mpb_ChunkPos chunk_pos = 1; + // No namespace arguments 
are required because NS is embedded + // inside of the file name. + + required Mpb_ChunkPos chunk_pos = 10; // Use flag_no_checksum=non-zero to skip returning the chunk's checksum. // TODO: not implemented yet. - optional uint32 flag_no_checksum = 2 [default=0]; + optional uint32 flag_no_checksum = 20 [default=0]; // Use flag_no_chunk=non-zero to skip returning the chunk (which // only makes sense if flag_no_checksum is not set). // TODO: not implemented yet. - optional uint32 flag_no_chunk = 3 [default=0]; + optional uint32 flag_no_chunk = 21 [default=0]; // TODO: not implemented yet. - optional uint32 flag_needs_trimmed = 4 [default=0]; + optional uint32 flag_needs_trimmed = 22 [default=0]; } message Mpb_ReadChunkResp { @@ -380,17 +386,20 @@ message Mpb_ProjectionV1 { // Low level API: append_chunk() message Mpb_LL_AppendChunkReq { + // General namespace arguments required uint32 namespace_version = 1; required string namespace = 2; required uint32 locator = 3; - required Mpb_EpochID epoch_id = 4; - required string prefix = 5; - required bytes chunk = 6; - required Mpb_ChunkCSum csum = 7; - optional uint32 chunk_extra = 8; - optional string preferred_file_name = 9; + + required Mpb_EpochID epoch_id = 10; + required string prefix = 11; + required bytes chunk = 12; + required Mpb_ChunkCSum csum = 13; + + optional uint32 chunk_extra = 20; + optional string preferred_file_name = 21; /* Fail the operation if our preferred file name is not available */ - optional uint32 flag_fail_preferred = 10; + optional uint32 flag_fail_preferred = 22; } message Mpb_LL_AppendChunkResp { @@ -413,19 +422,23 @@ message Mpb_LL_WriteChunkResp { // Low level API: read_chunk() message Mpb_LL_ReadChunkReq { - required Mpb_EpochID epoch_id = 1; - required Mpb_ChunkPos chunk_pos = 2; + // General namespace arguments + required uint32 namespace_version = 1; + required string namespace = 2; + + required Mpb_EpochID epoch_id = 10; + required Mpb_ChunkPos chunk_pos = 11; // Use 
flag_no_checksum=non-zero to skip returning the chunk's checksum. // TODO: not implemented yet. - optional uint32 flag_no_checksum = 3 [default=0]; + optional uint32 flag_no_checksum = 20 [default=0]; // Use flag_no_chunk=non-zero to skip returning the chunk (which // only makes sense if flag_checksum is not set). // TODO: not implemented yet. - optional uint32 flag_no_chunk = 4 [default=0]; + optional uint32 flag_no_chunk = 21 [default=0]; - optional uint32 flag_needs_trimmed = 5 [default=0]; + optional uint32 flag_needs_trimmed = 22 [default=0]; } message Mpb_LL_ReadChunkResp { diff --git a/src/machi_admin_util.erl b/src/machi_admin_util.erl index 46f6c3d..9bafd8a 100644 --- a/src/machi_admin_util.erl +++ b/src/machi_admin_util.erl @@ -90,8 +90,10 @@ verify_file_checksums_local2(Sock1, EpochID, Path0) -> end. verify_file_checksums_remote2(Sock1, EpochID, File) -> + NSInfo = undefined, + io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), ReadChunk = fun(File_name, Offset, Size) -> - ?FLU_C:read_chunk(Sock1, EpochID, + ?FLU_C:read_chunk(Sock1, NSInfo, EpochID, File_name, Offset, Size, []) end, verify_file_checksums_common(Sock1, EpochID, File, ReadChunk). 
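The renumbered `.proto` messages above follow a consistent convention: general namespace arguments take field numbers 1 through 9, required per-request fields 10 through 19, and optional fields 20 and up, so each group can grow without renumbering its neighbors. A hypothetical message illustrating the same bands (this message does not exist in machi.proto):

```protobuf
// Illustrative only: a made-up request showing the numbering bands
// used by this patch (1-9 namespace, 10-19 required, 20+ optional).
message Mpb_LL_ExampleReq {
    // General namespace arguments
    required uint32 namespace_version = 1;
    required string namespace         = 2;

    required Mpb_EpochID epoch_id     = 10;
    required string file              = 11;

    optional uint32 flag_example      = 20 [default=0];
}
```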
diff --git a/src/machi_chain_repair.erl b/src/machi_chain_repair.erl index ee12b20..836689a 100644 --- a/src/machi_chain_repair.erl +++ b/src/machi_chain_repair.erl @@ -332,9 +332,11 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) -> end, _T1 = os:timestamp(), %% TODO: support case multiple written or trimmed chunks returned + NSInfo = undefined, + io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), {ok, {[{_, Offset, Chunk, _}], _}} = machi_proxy_flu1_client:read_chunk( - SrcP, EpochID, File, Offset, Size, [], + SrcP, NSInfo, EpochID, File, Offset, Size, [], ?SHORT_TIMEOUT), _T2 = os:timestamp(), <<_Tag:1/binary, CSum/binary>> = TaggedCSum, diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index c45823a..8c97db2 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -121,7 +121,7 @@ append_chunk/5, append_chunk/6, append_chunk/7, write_chunk/4, write_chunk/5, - read_chunk/5, read_chunk/6, + read_chunk/6, read_chunk/7, trim_chunk/4, trim_chunk/5, checksum_list/2, checksum_list/3, list_files/1, list_files/2, @@ -198,14 +198,14 @@ write_chunk(PidSpec, File, Offset, Chunk, Timeout0) -> %% @doc Read a chunk of data of size `Size' from `File' at `Offset'. -read_chunk(PidSpec, File, Offset, Size, Opts) -> - read_chunk(PidSpec, File, Offset, Size, Opts, ?DEFAULT_TIMEOUT). +read_chunk(PidSpec, NSInfo, File, Offset, Size, Opts) -> + read_chunk(PidSpec, NSInfo, File, Offset, Size, Opts, ?DEFAULT_TIMEOUT). %% @doc Read a chunk of data of size `Size' from `File' at `Offset'. -read_chunk(PidSpec, File, Offset, Size, Opts, Timeout0) -> +read_chunk(PidSpec, NSInfo, File, Offset, Size, Opts, Timeout0) -> {TO, Timeout} = timeout(Timeout0), - gen_server:call(PidSpec, {req, {read_chunk, File, Offset, Size, Opts, TO}}, + gen_server:call(PidSpec, {req, {read_chunk, NSInfo, File, Offset, Size, Opts, TO}}, Timeout). %% @doc Trim a chunk of data of size `Size' from `File' at `Offset'. 
@@ -288,8 +288,8 @@ handle_call2({append_chunk, NSInfo,
                    Chunk, CSum, Opts, 0, os:timestamp(), TO, S);
 handle_call2({write_chunk, File, Offset, Chunk, TO}, _From, S) ->
     do_write_head(File, Offset, Chunk, 0, os:timestamp(), TO, S);
-handle_call2({read_chunk, File, Offset, Size, Opts, TO}, _From, S) ->
-    do_read_chunk(File, Offset, Size, Opts, 0, os:timestamp(), TO, S);
+handle_call2({read_chunk, NSInfo, File, Offset, Size, Opts, TO}, _From, S) ->
+    do_read_chunk(NSInfo, File, Offset, Size, Opts, 0, os:timestamp(), TO, S);
 handle_call2({trim_chunk, File, Offset, Size, TO}, _From, S) ->
     do_trim_chunk(File, Offset, Size, 0, os:timestamp(), TO, S);
 handle_call2({checksum_list, File, TO}, _From, S) ->
@@ -534,10 +534,10 @@ NSInfo=todo,Prefix=todo,CSum=todo,Opts=todo,
                                iolist_size(Chunk)})
     end.

-do_read_chunk(File, Offset, Size, Opts, 0=Depth, STime, TO,
+do_read_chunk(NSInfo, File, Offset, Size, Opts, 0=Depth, STime, TO,
               #state{proj=#projection_v1{upi=[_|_]}}=S) -> % UPI is non-empty
-    do_read_chunk2(File, Offset, Size, Opts, Depth + 1, STime, TO, S);
-do_read_chunk(File, Offset, Size, Opts, Depth, STime, TO, #state{proj=P}=S) ->
+    do_read_chunk2(NSInfo, File, Offset, Size, Opts, Depth + 1, STime, TO, S);
+do_read_chunk(NSInfo, File, Offset, Size, Opts, Depth, STime, TO, #state{proj=P}=S) ->
     sleep_a_while(Depth),
     DiffMs = timer:now_diff(os:timestamp(), STime) div 1000,
     if DiffMs > TO ->
@@ -547,18 +547,18 @@ do_read_chunk(File, Offset, Size, Opts, Depth, STime, TO, #state{proj=P}=S) ->
             case S2#state.proj of
                 P2 when P2 == undefined orelse
                         P2#projection_v1.upi == [] ->
-                    do_read_chunk(File, Offset, Size, Opts, Depth + 1, STime, TO, S2);
+                    do_read_chunk(NSInfo, File, Offset, Size, Opts, Depth + 1, STime, TO, S2);
                 _ ->
-                    do_read_chunk2(File, Offset, Size, Opts, Depth + 1, STime, TO, S2)
+                    do_read_chunk2(NSInfo, File, Offset, Size, Opts, Depth + 1, STime, TO, S2)
             end
     end.
-do_read_chunk2(File, Offset, Size, Opts, Depth, STime, TO,
+do_read_chunk2(NSInfo, File, Offset, Size, Opts, Depth, STime, TO,
                #state{proj=P, epoch_id=EpochID, proxies_dict=PD}=S) ->
     UPI = readonly_flus(P),
     Tail = lists:last(UPI),
     ConsistencyMode = P#projection_v1.mode,
-    case ?FLU_PC:read_chunk(orddict:fetch(Tail, PD), EpochID,
+    case ?FLU_PC:read_chunk(orddict:fetch(Tail, PD), NSInfo, EpochID,
                             File, Offset, Size, Opts, ?TIMEOUT) of
         {ok, {Chunks, Trimmed}} when is_list(Chunks), is_list(Trimmed) ->
             %% After partition heal, there could happen that heads may
@@ -583,9 +583,9 @@ do_read_chunk2(File, Offset, Size, Opts, Depth, STime, TO,
             {reply, BadCS, S};
         {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged ->
-            do_read_chunk(File, Offset, Size, Opts, Depth, STime, TO, S);
+            do_read_chunk(NSInfo, File, Offset, Size, Opts, Depth, STime, TO, S);
         {error, not_written} ->
-            read_repair(ConsistencyMode, read, File, Offset, Size, Depth, STime, S);
+            read_repair(ConsistencyMode, read, NSInfo, File, Offset, Size, Depth, STime, S);
             %% {reply, {error, not_written}, S};
         {error, written} ->
             exit({todo_should_never_happen,?MODULE,?LINE,File,Offset,Size});
@@ -717,11 +717,11 @@ do_trim_midtail2([FLU|RestFLUs]=FLUs, Prefix, File, Offset, Size,
 %% Never matches because Depth is always incremented beyond 0 prior to
 %% getting here.
 %%
-%% read_repair(ConsistencyMode, ReturnMode, File, Offset, Size, 0=Depth,
+%% read_repair(ConsistencyMode, ReturnMode, NSInfo, File, Offset, Size, 0=Depth,
 %%             STime, #state{proj=#projection_v1{upi=[_|_]}}=S) -> % UPI is non-empty
-%%     read_repair2(ConsistencyMode, ReturnMode, File, Offset, Size, Depth + 1,
+%%     read_repair2(ConsistencyMode, ReturnMode, NSInfo, File, Offset, Size, Depth + 1,
 %%                  STime, S);
-read_repair(ConsistencyMode, ReturnMode, File, Offset, Size, Depth,
+read_repair(ConsistencyMode, ReturnMode, NSInfo, File, Offset, Size, Depth,
             STime, #state{proj=P}=S) ->
     sleep_a_while(Depth),
     DiffMs = timer:now_diff(os:timestamp(), STime) div 1000,
@@ -732,20 +732,20 @@ read_repair(ConsistencyMode, ReturnMode, File, Offset, Size, Depth,
             case S2#state.proj of
                 P2 when P2 == undefined orelse
                         P2#projection_v1.upi == [] ->
-                    read_repair(ConsistencyMode, ReturnMode, File, Offset,
+                    read_repair(ConsistencyMode, ReturnMode, NSInfo, File, Offset,
                                 Size, Depth + 1, STime, S2);
                 _ ->
-                    read_repair2(ConsistencyMode, ReturnMode, File, Offset,
+                    read_repair2(ConsistencyMode, ReturnMode, NSInfo, File, Offset,
                                  Size, Depth + 1, STime, S2)
             end
     end.

 read_repair2(cp_mode=ConsistencyMode,
-             ReturnMode, File, Offset, Size, Depth, STime,
+             ReturnMode, NSInfo, File, Offset, Size, Depth, STime,
              #state{proj=P, epoch_id=EpochID, proxies_dict=PD}=S) ->
     %% TODO WTF was I thinking here??....
     Tail = lists:last(readonly_flus(P)),
-    case ?FLU_PC:read_chunk(orddict:fetch(Tail, PD), EpochID,
+    case ?FLU_PC:read_chunk(orddict:fetch(Tail, PD), NSInfo, EpochID,
                             File, Offset, Size, [], ?TIMEOUT) of
         {ok, Chunks} when is_list(Chunks) ->
             %% TODO: change to {Chunks, Trimmed} and have them repaired
@@ -761,7 +761,7 @@ read_repair2(cp_mode=ConsistencyMode,
             {reply, BadCS, S};
         {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged ->
-            read_repair(ConsistencyMode, ReturnMode, File, Offset,
+            read_repair(ConsistencyMode, ReturnMode, NSInfo, File, Offset,
                         Size, Depth, STime, S);
         {error, not_written} ->
             {reply, {error, not_written}, S};
@@ -774,10 +774,10 @@ read_repair2(cp_mode=ConsistencyMode,
             exit({todo_should_repair_unlinked_files, ?MODULE, ?LINE, File})
     end;
 read_repair2(ap_mode=ConsistencyMode,
-             ReturnMode, File, Offset, Size, Depth, STime,
+             ReturnMode, NSInfo, File, Offset, Size, Depth, STime,
              #state{proj=P}=S) ->
     Eligible = mutation_flus(P),
-    case try_to_find_chunk(Eligible, File, Offset, Size, S) of
+    case try_to_find_chunk(Eligible, NSInfo, File, Offset, Size, S) of
         {ok, {Chunks, _Trimmed}, GotItFrom} when is_list(Chunks) ->
             %% TODO: Repair trimmed chunks
             ToRepair = mutation_flus(P) -- [GotItFrom],
@@ -791,7 +791,7 @@ read_repair2(ap_mode=ConsistencyMode,
             {reply, BadCS, S};
         {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged ->
-            read_repair(ConsistencyMode, ReturnMode, File,
+            read_repair(ConsistencyMode, ReturnMode, NSInfo, File,
                         Offset, Size, Depth, STime, S);
         {error, not_written} ->
             {reply, {error, not_written}, S};
@@ -1032,12 +1032,12 @@ choose_best_proj(Rs) ->
                         BestProj
                 end, ?WORST_PROJ, Rs).
-try_to_find_chunk(Eligible, File, Offset, Size,
+try_to_find_chunk(Eligible, NSInfo, File, Offset, Size,
                   #state{epoch_id=EpochID, proxies_dict=PD}) ->
     Timeout = 2*1000,
     Work = fun(FLU) ->
                    Proxy = orddict:fetch(FLU, PD),
-                   case ?FLU_PC:read_chunk(Proxy, EpochID,
+                   case ?FLU_PC:read_chunk(Proxy, NSInfo, EpochID,
                                            %% TODO Trimmed is required here
                                            File, Offset, Size, []) of
                        {ok, {_Chunks, _} = ChunksAndTrimmed} ->
diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl
index 10c833e..ffeebd9 100644
--- a/src/machi_flu1_client.erl
+++ b/src/machi_flu1_client.erl
@@ -57,7 +57,7 @@
          %% File API
          append_chunk/6, append_chunk/7,
          append_chunk/8, append_chunk/9,
-         read_chunk/6, read_chunk/7,
+         read_chunk/7, read_chunk/8,
          checksum_list/3, checksum_list/4,
          list_files/2, list_files/3,
          wedge_status/1, wedge_status/2,
@@ -149,28 +149,31 @@ append_chunk(Host, TcpPort, NSInfo0, EpochID,

 %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.

--spec read_chunk(port_wrap(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(),
+-spec read_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(),
                  proplists:proplist()) ->
       {ok, machi_dt:chunk_s()} |
       {error, machi_dt:error_general() | 'not_written' | 'partial_read'} |
       {error, term()}.
-read_chunk(Sock, EpochID, File, Offset, Size, Opts)
+read_chunk(Sock, NSInfo0, EpochID, File, Offset, Size, Opts)
   when Offset >= ?MINIMUM_OFFSET, Size >= 0 ->
-    read_chunk2(Sock, EpochID, File, Offset, Size, Opts).
+    NSInfo = machi_util:ns_info_default(NSInfo0),
+    read_chunk2(Sock, NSInfo, EpochID, File, Offset, Size, Opts).

 %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.
--spec read_chunk(machi_dt:inet_host(), machi_dt:inet_port(), machi_dt:epoch_id(),
+-spec read_chunk(machi_dt:inet_host(), machi_dt:inet_port(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(),
                  machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(),
                  proplists:proplist()) ->
       {ok, machi_dt:chunk_s()} |
       {error, machi_dt:error_general() | 'not_written' | 'partial_read'} |
       {error, term()}.
-read_chunk(Host, TcpPort, EpochID, File, Offset, Size, Opts)
+read_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Size, Opts)
   when Offset >= ?MINIMUM_OFFSET, Size >= 0 ->
     Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}),
+    NSInfo = machi_util:ns_info_default(NSInfo0),
+io:format(user, "dbgyo ~s LINE ~p NSInfo0 ~p NSInfo ~p\n", [?MODULE, ?LINE, NSInfo0, NSInfo]), timer:sleep(333),
     try
-        read_chunk2(Sock, EpochID, File, Offset, Size, Opts)
+        read_chunk2(Sock, NSInfo, EpochID, File, Offset, Size, Opts)
     after
         disconnect(Sock)
     end.
@@ -538,12 +541,13 @@ trunc_hack(Host, TcpPort, EpochID, File) when is_integer(TcpPort) ->
 %%%%%%%%%%%%%%%%%%%%%%%%%%%

-read_chunk2(Sock, EpochID, File0, Offset, Size, Opts) ->
+read_chunk2(Sock, NSInfo, EpochID, File0, Offset, Size, Opts) ->
     ReqID = <<"id">>,
+    #ns_info{version=NSVersion, name=NS} = NSInfo,
     File = machi_util:make_binary(File0),
     Req = machi_pb_translate:to_pb_request(
             ReqID,
-            {low_read_chunk, EpochID, File, Offset, Size, Opts}),
+            {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size, Opts}),
     do_pb_request_common(Sock, ReqID, Req).
 append_chunk2(Sock, NSInfo, EpochID,
diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl
index 588b8d8..2bf9653 100644
--- a/src/machi_flu1_net_server.erl
+++ b/src/machi_flu1_net_server.erl
@@ -220,6 +220,7 @@ do_pb_ll_request(PB_request, S) ->
                       {Rs, NewS} = do_pb_ll_request3(Cmd0, S),
                       {RqID, Cmd0, Rs, NewS};
                   {RqID, Cmd0} ->
+                      io:format(user, "TODO: epoch at pos 2 in tuple is probably broken now, wheeeeeeeeeeeeeeeeee\n", []),
                       EpochID = element(2, Cmd0),      % by common convention
                       {Rs, NewS} = do_pb_ll_request2(EpochID, Cmd0, S),
                       {RqID, Cmd0, Rs, NewS}
@@ -246,13 +247,17 @@ do_pb_ll_request2(EpochID, CMD, S) ->
             end,
             {{error, bad_epoch}, S#state{epoch_id=CurrentEpochID}};
        true ->
-            do_pb_ll_request3(CMD, S#state{epoch_id=CurrentEpochID})
+            do_pb_ll_request2b(CMD, S#state{epoch_id=CurrentEpochID})
     end.

 lookup_epoch(#state{epoch_tab=T}) ->
     %% TODO: race in shutdown to access ets table after owner dies
     ets:lookup_element(T, epoch, 2).

+do_pb_ll_request2b(CMD, S) ->
+    io:format(user, "TODO: check NSVersion & NS\n", []),
+    do_pb_ll_request3(CMD, S).
+
 %% Witness status does not matter below.
 do_pb_ll_request3({low_echo, _BogusEpochID, Msg}, S) ->
     {Msg, S};
@@ -286,7 +291,7 @@ do_pb_ll_request3({low_write_chunk, _EpochID, File, Offset, Chunk, CSum_tag,
                    CSum}, #state{witness=false}=S) ->
     {do_server_write_chunk(File, Offset, Chunk, CSum_tag, CSum, S), S};
-do_pb_ll_request3({low_read_chunk, _EpochID, File, Offset, Size, Opts},
+do_pb_ll_request3({low_read_chunk, _NSVersion, _NS, _EpochID, File, Offset, Size, Opts},
                   #state{witness=false} = S) ->
     {do_server_read_chunk(File, Offset, Size, Opts, S), S};
 do_pb_ll_request3({low_trim_chunk, _EpochID, File, Offset, Size, TriggerGC},
@@ -587,7 +592,9 @@ do_pb_hl_request2({high_write_chunk, File, Offset, ChunkBin, TaggedCSum},
     {Res, S};
 do_pb_hl_request2({high_read_chunk, File, Offset, Size, Opts},
                   #state{high_clnt=Clnt}=S) ->
-    Res = machi_cr_client:read_chunk(Clnt, File, Offset, Size, Opts),
+    NSInfo = undefined,
+    io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]),
+    Res = machi_cr_client:read_chunk(Clnt, NSInfo, File, Offset, Size, Opts),
     {Res, S};
 do_pb_hl_request2({high_trim_chunk, File, Offset, Size},
                   #state{high_clnt=Clnt}=S) ->
diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl
index c615912..8d49460 100644
--- a/src/machi_pb_translate.erl
+++ b/src/machi_pb_translate.erl
@@ -79,6 +79,8 @@ from_pb_request(#mpb_ll_request{
 from_pb_request(#mpb_ll_request{
                    req_id=ReqID,
                    read_chunk=#mpb_ll_readchunkreq{
+                                 namespace_version=NSVersion,
+                                 namespace=NS,
                                  epoch_id=PB_EpochID,
                                  chunk_pos=ChunkPos,
                                  flag_no_checksum=PB_GetNoChecksum,
@@ -91,7 +93,7 @@ from_pb_request(#mpb_ll_request{
     #mpb_chunkpos{file_name=File,
                   offset=Offset,
                   chunk_size=Size} = ChunkPos,
-    {ReqID, {low_read_chunk, EpochID, File, Offset, Size, Opts}};
+    {ReqID, {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size, Opts}};
 from_pb_request(#mpb_ll_request{
                    req_id=ReqID,
                    trim_chunk=#mpb_ll_trimchunkreq{
@@ -420,7 +422,7 @@ to_pb_request(ReqID, {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag,
C
                          offset=Offset,
                          chunk=Chunk,
                          csum=PB_CSum}}};
-to_pb_request(ReqID, {low_read_chunk, EpochID, File, Offset, Size, Opts}) ->
+to_pb_request(ReqID, {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size, Opts}) ->
     PB_EpochID = conv_from_epoch_id(EpochID),
     FNChecksum = proplists:get_value(no_checksum, Opts, false),
     FNChunk = proplists:get_value(no_chunk, Opts, false),
@@ -428,8 +430,10 @@ to_pb_request(ReqID, {low_read_chunk, EpochID, File, Offset, Size, Opts}) ->
     #mpb_ll_request{
        req_id=ReqID, do_not_alter=2,
        read_chunk=#mpb_ll_readchunkreq{
-                     epoch_id=PB_EpochID,
-                     chunk_pos=#mpb_chunkpos{
+                     namespace_version=NSVersion,
+                     namespace=NS,
+                     epoch_id=PB_EpochID,
+                     chunk_pos=#mpb_chunkpos{
                                   file_name=File,
                                   offset=Offset,
                                   chunk_size=Size},
@@ -528,7 +532,7 @@ to_pb_response(ReqID, {low_write_chunk, _EID, _Fl, _Off, _Ch, _CST, _CS},Resp)->
     Status = conv_from_status(Resp),
     #mpb_ll_response{req_id=ReqID,
                      write_chunk=#mpb_ll_writechunkresp{status=Status}};
-to_pb_response(ReqID, {low_read_chunk, _EID, _Fl, _Off, _Sz, _Opts}, Resp)->
+to_pb_response(ReqID, {low_read_chunk, _NSV, _NS, _EID, _Fl, _Off, _Sz, _Opts}, Resp)->
     case Resp of
         {ok, {Chunks, Trimmed}} ->
             PB_Chunks = lists:map(fun({File, Offset, Bytes, Csum}) ->
diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl
index 947a307..ba6c24e 100644
--- a/src/machi_proxy_flu1_client.erl
+++ b/src/machi_proxy_flu1_client.erl
@@ -58,7 +58,7 @@
 -export([
          %% File API
          append_chunk/6, append_chunk/8,
-         read_chunk/6, read_chunk/7,
+         read_chunk/7, read_chunk/8,
          checksum_list/3, checksum_list/4,
          list_files/2, list_files/3,
          wedge_status/1, wedge_status/2,
@@ -118,13 +118,13 @@ append_chunk(PidSpec, NSInfo, EpochID, Prefix, Chunk, CSum, Opts,

 %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.

-read_chunk(PidSpec, EpochID, File, Offset, Size, Opts) ->
-    read_chunk(PidSpec, EpochID, File, Offset, Size, Opts, infinity).
+read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, Opts) ->
+    read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, Opts, infinity).

 %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.

-read_chunk(PidSpec, EpochID, File, Offset, Size, Opts, Timeout) ->
-    gen_server:call(PidSpec, {req, {read_chunk, EpochID, File, Offset, Size, Opts}},
+read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, Opts, Timeout) ->
+    gen_server:call(PidSpec, {req, {read_chunk, NSInfo, EpochID, File, Offset, Size, Opts}},
                     Timeout).

 %% @doc Fetch the list of chunk checksums for `File'.
@@ -287,7 +287,9 @@ write_chunk(PidSpec, EpochID, File, Offset, Chunk, Timeout) ->
                         Timeout) of
         {error, written}=Err ->
             Size = byte_size(Chunk),
-            case read_chunk(PidSpec, EpochID, File, Offset, Size, [], Timeout) of
+            NSInfo = undefined,
+            io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]),
+            case read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, [], Timeout) of
                 {ok, {[{File, Offset, Chunk2, _}], []}} when Chunk2 == Chunk ->
                     %% See equivalent comment inside write_projection().
                     ok;
@@ -377,9 +379,9 @@ make_req_fun({append_chunk, NSInfo, EpochID,
     fun() -> Mod:append_chunk(Sock, NSInfo, EpochID,
                               Prefix, Chunk, CSum, Opts, Timeout)
     end;
-make_req_fun({read_chunk, EpochID, File, Offset, Size, Opts},
+make_req_fun({read_chunk, NSInfo, EpochID, File, Offset, Size, Opts},
              #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
-    fun() -> Mod:read_chunk(Sock, EpochID, File, Offset, Size, Opts) end;
+    fun() -> Mod:read_chunk(Sock, NSInfo, EpochID, File, Offset, Size, Opts) end;
 make_req_fun({write_chunk, EpochID, File, Offset, Chunk},
              #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
     fun() -> Mod:write_chunk(Sock, EpochID, File, Offset, Chunk) end;
diff --git a/src/machi_util.erl b/src/machi_util.erl
index bc885aa..59f7f2e 100644
--- a/src/machi_util.erl
+++ b/src/machi_util.erl
@@ -374,7 +374,7 @@ wait_for_death(Pid, Iters) when is_pid(Pid) ->
         false ->
             ok;
         true ->
-            timer:sleep(1),
+            timer:sleep(10),
             wait_for_death(Pid, Iters-1)
     end.

diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl
index 9c70474..6037ec6 100644
--- a/test/machi_ap_repair_eqc.erl
+++ b/test/machi_ap_repair_eqc.erl
@@ -455,7 +455,9 @@ assert_chunk(C, {Off, Len, FileName}=Key, Bin) ->
     %% TODO: This probably a bug, read_chunk respnds with filename of `string()' type
     FileNameStr = binary_to_list(FileName),
     %% TODO : Use CSum instead of binary (after disuccsion about CSum is calmed down?)
-    case (catch machi_cr_client:read_chunk(C, FileName, Off, Len, [], sec(3))) of
+    NSInfo = undefined,
+    io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]),
+    case (catch machi_cr_client:read_chunk(C, NSInfo, FileName, Off, Len, [], sec(3))) of
         {ok, {[{FileNameStr, Off, Bin, _}], []}} ->
             ok;
         {ok, Got} ->
diff --git a/test/machi_cr_client_test.erl b/test/machi_cr_client_test.erl
index e4f1171..def894e 100644
--- a/test/machi_cr_client_test.erl
+++ b/test/machi_cr_client_test.erl
@@ -123,31 +123,31 @@ smoke_test2() ->
             machi_cr_client:append_chunk(C1, NSInfo, Prefix,
                                          Chunk1, BadCSum),
         {ok, {[{_, Off1, Chunk1, _}], []}} =
-            machi_cr_client:read_chunk(C1, File1, Off1, Size1, []),
+            machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, []),
         {ok, PPP} = machi_flu1_client:read_latest_projection(Host, PortBase+0,
                                                              private),
         %% Verify that the client's CR wrote to all of them.
         [{ok, {[{_, Off1, Chunk1, _}], []}} =
             machi_flu1_client:read_chunk(
-              Host, PortBase+X, EpochID, File1, Off1, Size1, []) ||
+              Host, PortBase+X, NSInfo, EpochID, File1, Off1, Size1, []) ||
             X <- [0,1,2] ],

         %% Test read repair: Manually write to head, then verify that
         %% read-repair fixes all.
         FooOff1 = Off1 + (1024*1024),
         [{error, not_written} = machi_flu1_client:read_chunk(
-                                  Host, PortBase+X, EpochID,
+                                  Host, PortBase+X, NSInfo, EpochID,
                                   File1, FooOff1, Size1, []) || X <- [0,1,2] ],
         ok = machi_flu1_client:write_chunk(Host, PortBase+0, EpochID,
                                            File1, FooOff1, Chunk1),
         {ok, {[{_, FooOff1, Chunk1, _}], []}} =
-            machi_flu1_client:read_chunk(Host, PortBase+0, EpochID,
+            machi_flu1_client:read_chunk(Host, PortBase+0, NSInfo, EpochID,
                                          File1, FooOff1, Size1, []),
         {ok, {[{_, FooOff1, Chunk1, _}], []}} =
-            machi_cr_client:read_chunk(C1, File1, FooOff1, Size1, []),
+            machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff1, Size1, []),
         [?assertMatch({X,{ok, {[{_, FooOff1, Chunk1, _}], []}}},
                       {X,machi_flu1_client:read_chunk(
-                           Host, PortBase+X, EpochID,
+                           Host, PortBase+X, NSInfo, EpochID,
                            File1, FooOff1, Size1, [])})
          || X <- [0,1,2] ],

@@ -158,18 +158,18 @@ smoke_test2() ->
         ok = machi_flu1_client:write_chunk(Host, PortBase+1, EpochID,
                                            File1, FooOff2, Chunk2),
         {ok, {[{_, FooOff2, Chunk2, _}], []}} =
-            machi_cr_client:read_chunk(C1, File1, FooOff2, Size2, []),
+            machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff2, Size2, []),
         [{X,{ok, {[{_, FooOff2, Chunk2, _}], []}}} =
             {X,machi_flu1_client:read_chunk(
-                 Host, PortBase+X, EpochID,
+                 Host, PortBase+X, NSInfo, EpochID,
                  File1, FooOff2, Size2, [])} || X <- [0,1,2] ],

         %% Misc API smoke & minor regression checks
-        {error, bad_arg} = machi_cr_client:read_chunk(C1, <<"no">>,
+        {error, bad_arg} = machi_cr_client:read_chunk(C1, NSInfo, <<"no">>,
                                                       999999999, 1, []),
         {ok, {[{_,Off1,Chunk1,_}, {_,FooOff1,Chunk1,_}, {_,FooOff2,Chunk2,_}],
               []}} =
-            machi_cr_client:read_chunk(C1, File1, Off1, 88888888, []),
+            machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, 88888888, []),
         %% Checksum list return value is a primitive binary().
         {ok, KludgeBin} = machi_cr_client:checksum_list(C1, File1),
         true = is_binary(KludgeBin),
@@ -189,16 +189,16 @@ smoke_test2() ->
             machi_cr_client:append_chunk(C1, NSInfo, Prefix,
                                          Chunk10, NoCSum, Opts1),
         {ok, {[{_, Off10, Chunk10, _}], []}} =
-            machi_cr_client:read_chunk(C1, File10, Off10, Size10, []),
+            machi_cr_client:read_chunk(C1, NSInfo, File10, Off10, Size10, []),
         [begin
              Offx = Off10 + (Seq * Size10),
              %% TODO: uncomment written/not_written enforcement is available.
-             %% {error,not_written} = machi_cr_client:read_chunk(C1, File10,
+             %% {error,not_written} = machi_cr_client:read_chunk(C1, NSInfo, File10,
              %%                                                  Offx, Size10),
              {ok, {Offx,Size10,File10}} =
                  machi_cr_client:write_chunk(C1, File10, Offx, Chunk10),
              {ok, {[{_, Offx, Chunk10, _}], []}} =
-                 machi_cr_client:read_chunk(C1, File10, Offx, Size10, [])
+                 machi_cr_client:read_chunk(C1, NSInfo, File10, Offx, Size10, [])
          end || Seq <- lists:seq(1, Extra10)],
         {ok, {Off11,Size11,File11}} =
             machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk10, NoCSum),
@@ -246,7 +246,7 @@ witness_smoke_test2() ->
         {error, bad_checksum} = machi_cr_client:append_chunk(C1, NSInfo, Prefix,
                                                              Chunk1, BadCSum),
         {ok, {[{_, Off1, Chunk1, _}], []}} =
-            machi_cr_client:read_chunk(C1, File1, Off1, Size1, []),
+            machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, []),

         %% Stop 'b' and let the chain reset.
         ok = machi_flu_psup:stop_flu_package(b),
@@ -273,7 +273,7 @@ witness_smoke_test2() ->

         %% Chunk1 is still readable: not affected by wedged witness head.
         {ok, {[{_, Off1, Chunk1, _}], []}} =
-            machi_cr_client:read_chunk(C1, File1, Off1, Size1, []),
+            machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, []),
         %% But because the head is wedged, an append will fail.
         {error, partition} =
             machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum,
diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl
index 03097e4..45a6aa3 100644
--- a/test/machi_flu1_test.erl
+++ b/test/machi_flu1_test.erl
@@ -113,8 +113,9 @@ flu_smoke_test() ->
         {ok, {Off1,Len1,File1}} = ?FLU_C:append_chunk(Host, TcpPort,
                                                       NSInfo, ?DUMMY_PV1_EPOCH,
                                                       Prefix, Chunk1, NoCSum),
-        {ok, {[{_, Off1, Chunk1, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH,
-                                                              File1, Off1, Len1, []),
+        {ok, {[{_, Off1, Chunk1, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort,
+                                                              NSInfo, ?DUMMY_PV1_EPOCH,
+                                                              File1, Off1, Len1, []),
         {ok, KludgeBin} = ?FLU_C:checksum_list(Host, TcpPort,
                                                ?DUMMY_PV1_EPOCH, File1),
         true = is_binary(KludgeBin),
@@ -124,7 +125,7 @@ flu_smoke_test() ->
         {ok, [{_,File1}]} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH),
         Len1 = size(Chunk1),
         {error, not_written} = ?FLU_C:read_chunk(Host, TcpPort,
-                                                 ?DUMMY_PV1_EPOCH,
+                                                 NSInfo, ?DUMMY_PV1_EPOCH,
                                                  File1, Off1*983829323, Len1, []),
         %% XXX FIXME
         %%
@@ -134,7 +135,7 @@ flu_smoke_test() ->
         %% of the read will cause it to fail.
         %%
         %% {error, partial_read} = ?FLU_C:read_chunk(Host, TcpPort,
-        %%                                           ?DUMMY_PV1_EPOCH,
+        %%                                           NSInfo, ?DUMMY_PV1_EPOCH,
         %%                                           File1, Off1, Len1*9999),

         {ok, {Off1b,Len1b,File1b}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo,
@@ -166,12 +167,12 @@ flu_smoke_test() ->
         {error, bad_arg} = ?FLU_C:write_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH,
                                               BadFile, Off2, Chunk2),
         {ok, {[{_, Off2, Chunk2, _}], _}} =
-            ?FLU_C:read_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH, File2, Off2, Len2, []),
+            ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File2, Off2, Len2, []),
         {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort,
-                                             ?DUMMY_PV1_EPOCH,
+                                             NSInfo, ?DUMMY_PV1_EPOCH,
                                              "no!!", Off2, Len2, []),
         {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort,
-                                             ?DUMMY_PV1_EPOCH,
+                                             NSInfo, ?DUMMY_PV1_EPOCH,
                                              BadFile, Off2, Len2, []),

         %% We know that File1 still exists.
Pretend that we've done a
@@ -275,7 +276,7 @@ witness_test() ->
         {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo,
                                                EpochID1,
                                                Prefix, Chunk1, NoCSum),
         File = <<"foofile">>,
-        {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, EpochID1,
+        {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, EpochID1,
                                              File, 9999, 9999, []),
         {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort,
                                                 EpochID1, File),
diff --git a/test/machi_flu_psup_test.erl b/test/machi_flu_psup_test.erl
index 14d4640..01ec26c 100644
--- a/test/machi_flu_psup_test.erl
+++ b/test/machi_flu_psup_test.erl
@@ -84,8 +84,8 @@ partial_stop_restart2() ->
     WedgeStatus = fun({_,#p_srvr{address=Addr, port=TcpPort}}) ->
                           machi_flu1_client:wedge_status(Addr, TcpPort)
                   end,
+    NSInfo = undefined,
     Append = fun({_,#p_srvr{address=Addr, port=TcpPort}}, EpochID) ->
-                     NSInfo = undefined,
                      NoCSum = <<>>,
                      machi_flu1_client:append_chunk(Addr, TcpPort,
                                                     NSInfo, EpochID,
@@ -148,7 +148,7 @@ partial_stop_restart2() ->
     {error, wedged} = Append(hd(Ps), EpochID1),
     {_, #p_srvr{address=Addr_a, port=TcpPort_a}} = hd(Ps),
     {error, wedged} = machi_flu1_client:read_chunk(
-                        Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH,
+                        Addr_a, TcpPort_a, NSInfo, ?DUMMY_PV1_EPOCH,
                         <<>>, 99999999, 1, []),
     {error, wedged} = machi_flu1_client:checksum_list(
                         Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH, <<>>),
diff --git a/test/machi_proxy_flu1_client_test.erl b/test/machi_proxy_flu1_client_test.erl
index c280072..43bf97a 100644
--- a/test/machi_proxy_flu1_client_test.erl
+++ b/test/machi_proxy_flu1_client_test.erl
@@ -62,14 +62,14 @@ api_smoke_test() ->
             ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk,
                               NoCSum),
         {ok, {[{_, MyOff, MyChunk, _}], []}} =
-            ?MUT:read_chunk(Prox1, FakeEpoch, MyFile, MyOff, MySize, []),
+            ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile, MyOff, MySize, []),
         MyChunk2 = <<"my chunk data, yeah, again">>,
         Opts1 = #append_opts{chunk_extra=4242},
         {ok, {MyOff2,MySize2,MyFile2}} =
             ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix,
                               MyChunk2, NoCSum, Opts1, infinity),
         {ok, {[{_, MyOff2, MyChunk2, _}], []}} =
-            ?MUT:read_chunk(Prox1, FakeEpoch, MyFile2, MyOff2, MySize2, []),
+            ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile2, MyOff2, MySize2, []),
         BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")},
         {error, bad_checksum} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch,
                                                   Prefix, MyChunk, BadCSum),
@@ -254,11 +254,11 @@ flu_restart_test2() ->
                                          AppendOpts1, infinity)
              end,
     fun(run) -> {ok, {[{_, Off1, Data, _}], []}} =
-                     ?MUT:read_chunk(Prox1, FakeEpoch,
+                     ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch,
                                      File1, Off1, Size1, []),
                  ok;
        (line) -> io:format("line ~p, ", [?LINE]);
-       (stop) -> ?MUT:read_chunk(Prox1, FakeEpoch,
+       (stop) -> ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch,
                                  File1, Off1, Size1, [])
     end,
     fun(run) -> {ok, KludgeBin} =
-- 
2.45.2

From 3d730ea215b9814c21582505f5da84ace0e6c85d Mon Sep 17 00:00:00 2001
From: Scott Lystig Fritchie
Date: Mon, 28 Dec 2015 18:28:49 +0900
Subject: [PATCH 04/53] write_chunk API refactoring; all tests pass; todo
 tasks remain

---
 src/machi.proto                       | 10 ++--
 src/machi_chain_repair.erl            |  2 +-
 src/machi_cr_client.erl               | 66 +++++++++++++--------------
 src/machi_flu1_client.erl             | 29 ++++++------
 src/machi_flu1_net_server.erl         |  7 ++-
 src/machi_pb_translate.erl            | 10 ++--
 src/machi_proxy_flu1_client.erl       | 14 +++---
 test/machi_cr_client_test.erl         |  8 ++--
 test/machi_flu1_test.erl              |  4 +-
 test/machi_proxy_flu1_client_test.erl |  8 ++--
 10 files changed, 86 insertions(+), 72 deletions(-)

diff --git a/src/machi.proto b/src/machi.proto
index 9aa90a6..1955230 100644
--- a/src/machi.proto
+++ b/src/machi.proto
@@ -193,7 +193,7 @@ message Mpb_AppendChunkResp {
 // High level API: write_chunk() request & response

 message Mpb_WriteChunkReq {
-    required Mpb_Chunk chunk = 1;
+    required Mpb_Chunk chunk = 10;
 }

 message Mpb_WriteChunkResp {
@@ -411,8 +411,12 @@ message Mpb_LL_AppendChunkResp {
 // Low level API: write_chunk()

 message Mpb_LL_WriteChunkReq {
-    required Mpb_EpochID epoch_id = 1;
-    required
Mpb_Chunk chunk = 2;
+    // General namespace arguments
+    required uint32 namespace_version = 1;
+    required string namespace = 2;
+
+    required Mpb_EpochID epoch_id = 10;
+    required Mpb_Chunk chunk = 11;
 }

 message Mpb_LL_WriteChunkResp {
diff --git a/src/machi_chain_repair.erl b/src/machi_chain_repair.erl
index 836689a..1f6dfc6 100644
--- a/src/machi_chain_repair.erl
+++ b/src/machi_chain_repair.erl
@@ -346,7 +346,7 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) ->
                       DstP = orddict:fetch(DstFLU, ProxiesDict),
                       _T3 = os:timestamp(),
                       ok = machi_proxy_flu1_client:write_chunk(
-                             DstP, EpochID, File, Offset, Chunk,
+                             DstP, NSInfo, EpochID, File, Offset, Chunk,
                              ?SHORT_TIMEOUT),
                       _T4 = os:timestamp()
                   end || DstFLU <- MyDsts],
diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl
index 8c97db2..95d86f9 100644
--- a/src/machi_cr_client.erl
+++ b/src/machi_cr_client.erl
@@ -120,7 +120,7 @@
          %% File API
          append_chunk/5, append_chunk/6, append_chunk/7,
-         write_chunk/4, write_chunk/5,
+         write_chunk/5, write_chunk/6,
          read_chunk/6, read_chunk/7,
          trim_chunk/4, trim_chunk/5,
          checksum_list/2, checksum_list/3,
@@ -186,14 +186,14 @@ append_chunk(PidSpec, NSInfo, Prefix, Chunk, CSum, #append_opts{}=Opts, Timeout0
 %% allocated/sequenced by an earlier append_chunk() call) to
 %% `File' at `Offset'.

-write_chunk(PidSpec, File, Offset, Chunk) ->
-    write_chunk(PidSpec, File, Offset, Chunk, ?DEFAULT_TIMEOUT).
+write_chunk(PidSpec, NSInfo, File, Offset, Chunk) ->
+    write_chunk(PidSpec, NSInfo, File, Offset, Chunk, ?DEFAULT_TIMEOUT).

 %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.

-write_chunk(PidSpec, File, Offset, Chunk, Timeout0) ->
+write_chunk(PidSpec, NSInfo, File, Offset, Chunk, Timeout0) ->
     {TO, Timeout} = timeout(Timeout0),
-    gen_server:call(PidSpec, {req, {write_chunk, File, Offset, Chunk, TO}},
+    gen_server:call(PidSpec, {req, {write_chunk, NSInfo, File, Offset, Chunk, TO}},
                     Timeout).
 %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.
@@ -286,8 +286,8 @@ handle_call2({append_chunk, NSInfo, Prefix,
               Chunk, CSum, Opts, TO}, _From, S) ->
     do_append_head(NSInfo, Prefix,
                    Chunk, CSum, Opts, 0, os:timestamp(), TO, S);
-handle_call2({write_chunk, File, Offset, Chunk, TO}, _From, S) ->
-    do_write_head(File, Offset, Chunk, 0, os:timestamp(), TO, S);
+handle_call2({write_chunk, NSInfo, File, Offset, Chunk, TO}, _From, S) ->
+    do_write_head(NSInfo, File, Offset, Chunk, 0, os:timestamp(), TO, S);
 handle_call2({read_chunk, NSInfo, File, Offset, Size, Opts, TO}, _From, S) ->
     do_read_chunk(NSInfo, File, Offset, Size, Opts, 0, os:timestamp(), TO, S);
 handle_call2({trim_chunk, File, Offset, Size, TO}, _From, S) ->
@@ -439,7 +439,7 @@ do_append_midtail2([FLU|RestFLUs]=FLUs, NSInfo,
                    CSum, Opts, Ws, Depth, STime, TO,
                    #state{epoch_id=EpochID, proxies_dict=PD}=S) ->
     Proxy = orddict:fetch(FLU, PD),
-    case ?FLU_PC:write_chunk(Proxy, EpochID, File, Offset, Chunk, ?TIMEOUT) of
+    case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, ?TIMEOUT) of
         ok ->
             %% io:format(user, "write ~w,", [FLU]),
             do_append_midtail2(RestFLUs, NSInfo, Prefix,
@@ -457,7 +457,7 @@ do_append_midtail2([FLU|RestFLUs]=FLUs, NSInfo,
             %% We know what the chunk ought to be, so jump to the
             %% middle of read-repair.
             Resume = {append, Offset, iolist_size(Chunk), File},
-            do_repair_chunk(FLUs, Resume, Chunk, [], File, Offset,
+            do_repair_chunk(FLUs, Resume, Chunk, [], NSInfo, File, Offset,
                             iolist_size(Chunk), Depth, STime, S);
         {error, trimmed} = Err ->
             %% TODO: nothing can be done
@@ -483,9 +483,9 @@ witnesses_use_our_epoch([FLU|RestFLUs],
             false
     end.
-do_write_head(File, Offset, Chunk, 0=Depth, STime, TO, S) ->
-    do_write_head2(File, Offset, Chunk, Depth + 1, STime, TO, S);
-do_write_head(File, Offset, Chunk, Depth, STime, TO, #state{proj=P}=S) ->
+do_write_head(NSInfo, File, Offset, Chunk, 0=Depth, STime, TO, S) ->
+    do_write_head2(NSInfo, File, Offset, Chunk, Depth + 1, STime, TO, S);
+do_write_head(NSInfo, File, Offset, Chunk, Depth, STime, TO, #state{proj=P}=S) ->
     %% io:format(user, "head sleep1,", []),
     sleep_a_while(Depth),
     DiffMs = timer:now_diff(os:timestamp(), STime) div 1000,
@@ -500,23 +500,23 @@ do_write_head(File, Offset, Chunk, Depth, STime, TO, #state{proj=P}=S) ->
             case S2#state.proj of
                 P2 when P2 == undefined orelse
                         P2#projection_v1.upi == [] ->
-                    do_write_head(File, Offset, Chunk, Depth + 1,
+                    do_write_head(NSInfo, File, Offset, Chunk, Depth + 1,
                                   STime, TO, S2);
                 _ ->
-                    do_write_head2(File, Offset, Chunk, Depth + 1,
+                    do_write_head2(NSInfo, File, Offset, Chunk, Depth + 1,
                                    STime, TO, S2)
             end
     end.

-do_write_head2(File, Offset, Chunk, Depth, STime, TO,
+do_write_head2(NSInfo, File, Offset, Chunk, Depth, STime, TO,
                #state{epoch_id=EpochID, proj=P, proxies_dict=PD}=S) ->
     [HeadFLU|RestFLUs] = mutation_flus(P),
     Proxy = orddict:fetch(HeadFLU, PD),
-    case ?FLU_PC:write_chunk(Proxy, EpochID, File, Offset, Chunk, ?TIMEOUT) of
+    case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, ?TIMEOUT) of
         ok ->
             %% From this point onward, we use the same code & logic path as
             %% append does.
-NSInfo=todo,Prefix=todo,CSum=todo,Opts=todo, +Prefix=todo_prefix,CSum=todo_csum,Opts=todo_opts, do_append_midtail(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, CSum, Opts, [HeadFLU], 0, STime, TO, S); @@ -524,7 +524,7 @@ NSInfo=todo,Prefix=todo,CSum=todo,Opts=todo, {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_write_head(File, Offset, Chunk, Depth, STime, TO, S); + do_write_head(NSInfo, File, Offset, Chunk, Depth, STime, TO, S); {error, written}=Err -> {reply, Err, S}; {error, trimmed}=Err -> @@ -751,7 +751,7 @@ read_repair2(cp_mode=ConsistencyMode, %% TODO: change to {Chunks, Trimmed} and have them repaired ToRepair = mutation_flus(P) -- [Tail], {Reply, S1} = do_repair_chunks(Chunks, ToRepair, ReturnMode, - [Tail], File, Depth, STime, S, {ok, Chunks}), + [Tail], NSInfo, File, Depth, STime, S, {ok, Chunks}), {reply, Reply, S1}; %% {ok, BadChunk} -> %% exit({todo, bad_chunk_size, ?MODULE, ?LINE, File, Offset, @@ -782,7 +782,7 @@ read_repair2(ap_mode=ConsistencyMode, %% TODO: Repair trimmed chunks ToRepair = mutation_flus(P) -- [GotItFrom], {Reply0, S1} = do_repair_chunks(Chunks, ToRepair, ReturnMode, [GotItFrom], - File, Depth, STime, S, {ok, Chunks}), + NSInfo, File, Depth, STime, S, {ok, Chunks}), {ok, Chunks} = Reply0, Reply = {ok, {Chunks, _Trimmed}}, {reply, Reply, S1}; @@ -803,20 +803,20 @@ read_repair2(ap_mode=ConsistencyMode, exit({todo_should_repair_unlinked_files, ?MODULE, ?LINE, File}) end. 
-do_repair_chunks([], _, _, _, _, _, _, S, Reply) -> +do_repair_chunks([], _, _, _, _, _, _, _, S, Reply) -> {Reply, S}; do_repair_chunks([{_, Offset, Chunk, _Csum}|T], - ToRepair, ReturnMode, [GotItFrom], File, Depth, STime, S, Reply) -> + ToRepair, ReturnMode, [GotItFrom], NSInfo, File, Depth, STime, S, Reply) -> Size = iolist_size(Chunk), - case do_repair_chunk(ToRepair, ReturnMode, Chunk, [GotItFrom], File, Offset, + case do_repair_chunk(ToRepair, ReturnMode, Chunk, [GotItFrom], NSInfo, File, Offset, Size, Depth, STime, S) of {ok, Chunk, S1} -> - do_repair_chunks(T, ToRepair, ReturnMode, [GotItFrom], File, Depth, STime, S1, Reply); + do_repair_chunks(T, ToRepair, ReturnMode, [GotItFrom], NSInfo, File, Depth, STime, S1, Reply); Error -> Error end. -do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, File, Offset, +do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, Size, Depth, STime, #state{proj=P}=S) -> %% io:format(user, "read_repair3 sleep1,", []), sleep_a_while(Depth), @@ -828,16 +828,16 @@ do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, File, Offset, case S2#state.proj of P2 when P2 == undefined orelse P2#projection_v1.upi == [] -> - do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, File, + do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, Size, Depth + 1, STime, S2); P2 -> ToRepair2 = mutation_flus(P2) -- Repaired, - do_repair_chunk2(ToRepair2, ReturnMode, Chunk, Repaired, File, + do_repair_chunk2(ToRepair2, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, Size, Depth + 1, STime, S2) end end. -do_repair_chunk2([], ReturnMode, Chunk, _Repaired, File, Offset, +do_repair_chunk2([], ReturnMode, Chunk, _Repaired, _NSInfo, File, Offset, _IgnoreSize, _Depth, _STime, S) -> %% TODO: add stats for # of repairs, length(_Repaired)-1, etc etc? 
case ReturnMode of @@ -846,24 +846,24 @@ do_repair_chunk2([], ReturnMode, Chunk, _Repaired, File, Offset, {append, Offset, Size, File} -> {ok, {Offset, Size, File}, S} end; -do_repair_chunk2([First|Rest]=ToRepair, ReturnMode, Chunk, Repaired, File, Offset, +do_repair_chunk2([First|Rest]=ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, Size, Depth, STime, #state{epoch_id=EpochID, proxies_dict=PD}=S) -> Proxy = orddict:fetch(First, PD), - case ?FLU_PC:write_chunk(Proxy, EpochID, File, Offset, Chunk, ?TIMEOUT) of + case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, ?TIMEOUT) of ok -> - do_repair_chunk2(Rest, ReturnMode, Chunk, [First|Repaired], File, + do_repair_chunk2(Rest, ReturnMode, Chunk, [First|Repaired], NSInfo, File, Offset, Size, Depth, STime, S); {error, bad_checksum}=BadCS -> %% TODO: alternate strategy? {BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, File, + do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, Size, Depth, STime, S); {error, written} -> %% TODO: To be very paranoid, read the chunk here to verify %% that it is exactly our Chunk. - do_repair_chunk2(Rest, ReturnMode, Chunk, Repaired, File, + do_repair_chunk2(Rest, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, Size, Depth, STime, S); {error, trimmed} = _Error -> %% TODO diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index ffeebd9..76e27cf 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -80,7 +80,7 @@ ]). %% For "internal" replication only. -export([ - write_chunk/5, write_chunk/6, + write_chunk/6, write_chunk/7, trim_chunk/5, delete_migration/3, delete_migration/4, trunc_hack/3, trunc_hack/4 @@ -89,7 +89,7 @@ -type port_wrap() :: {w,atom(),term()}. 
-spec append_chunk(port_wrap(), - machi_dt:ns_info(), machi_dt:epoch_id(), + 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_csum()) -> {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. @@ -106,7 +106,7 @@ append_chunk(Sock, NSInfo, EpochID, Prefix, Chunk, CSum) -> %% `write_chunk()' API. -spec append_chunk(machi_dt:inet_host(), machi_dt:inet_port(), - machi_dt:ns_info(), machi_dt:epoch_id(), + 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_csum()) -> {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. @@ -115,7 +115,7 @@ append_chunk(Host, TcpPort, NSInfo, EpochID, Prefix, Chunk, CSum) -> #append_opts{}, ?LONG_TIMEOUT). -spec append_chunk(port_wrap(), - machi_dt:ns_info(), machi_dt:epoch_id(), + 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_csum(), machi_dt:append_opts(), timeout()) -> {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. @@ -132,7 +132,7 @@ append_chunk(Sock, NSInfo0, EpochID, Prefix, Chunk, CSum, Opts, Timeout) -> %% `write_chunk()' API. -spec append_chunk(machi_dt:inet_host(), machi_dt:inet_port(), - machi_dt:ns_info(), machi_dt:epoch_id(), + 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_csum(), machi_dt:append_opts(), timeout()) -> {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. @@ -461,23 +461,25 @@ disconnect(_) -> %% @doc Restricted API: Write a chunk of already-sequenced data to %% `File' at `Offset'. 
--spec write_chunk(port_wrap(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) -> +-spec write_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) -> ok | {error, machi_dt:error_general()} | {error, term()}. -write_chunk(Sock, EpochID, File, Offset, Chunk) +write_chunk(Sock, NSInfo0, EpochID, File, Offset, Chunk) when Offset >= ?MINIMUM_OFFSET -> - write_chunk2(Sock, EpochID, File, Offset, Chunk). + NSInfo = machi_util:ns_info_default(NSInfo0), + write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk). %% @doc Restricted API: Write a chunk of already-sequenced data to %% `File' at `Offset'. -spec write_chunk(machi_dt:inet_host(), machi_dt:inet_port(), - machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) -> + 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) -> ok | {error, machi_dt:error_general()} | {error, term()}. -write_chunk(Host, TcpPort, EpochID, File, Offset, Chunk) +write_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Chunk) when Offset >= ?MINIMUM_OFFSET -> Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), try - write_chunk2(Sock, EpochID, File, Offset, Chunk) + NSInfo = machi_util:ns_info_default(NSInfo0), + write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk) after disconnect(Sock) end. @@ -569,8 +571,9 @@ append_chunk2(Sock, NSInfo, EpochID, Prefix, Chunk, CSum_tag, CSum, Opts}), do_pb_request_common(Sock, ReqID, Req, true, Timeout). 
-write_chunk2(Sock, EpochID, File0, Offset, Chunk0) -> +write_chunk2(Sock, NSInfo, EpochID, File0, Offset, Chunk0) -> ReqID = <<"id">>, + #ns_info{version=NSVersion, name=NS} = NSInfo, File = machi_util:make_binary(File0), true = (Offset >= ?MINIMUM_OFFSET), {Chunk, CSum_tag, CSum} = @@ -583,7 +586,7 @@ write_chunk2(Sock, EpochID, File0, Offset, Chunk0) -> end, Req = machi_pb_translate:to_pb_request( ReqID, - {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag, CSum}), + {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}), do_pb_request_common(Sock, ReqID, Req). list2(Sock, EpochID) -> diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 2bf9653..ab03b1a 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -287,7 +287,7 @@ do_pb_ll_request3({low_append_chunk, NSVersion, NS, NSLocator, EpochID, {do_server_append_chunk(NSInfo, EpochID, Prefix, Chunk, CSum_tag, CSum, Opts, S), S}; -do_pb_ll_request3({low_write_chunk, _EpochID, File, Offset, Chunk, CSum_tag, +do_pb_ll_request3({low_write_chunk, _NSVersion, _NS, _EpochID, File, Offset, Chunk, CSum_tag, CSum}, #state{witness=false}=S) -> {do_server_write_chunk(File, Offset, Chunk, CSum_tag, CSum, S), S}; @@ -582,13 +582,16 @@ do_pb_hl_request2({high_auth, _User, _Pass}, S) -> do_pb_hl_request2({high_append_chunk, NS, Prefix, Chunk, TaggedCSum, Opts}, #state{high_clnt=Clnt}=S) -> NSInfo = #ns_info{name=NS}, % TODO populate other fields + io:format(user, "TODO fix broken append_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), Res = machi_cr_client:append_chunk(Clnt, NSInfo, Prefix, Chunk, TaggedCSum, Opts), {Res, S}; do_pb_hl_request2({high_write_chunk, File, Offset, ChunkBin, TaggedCSum}, #state{high_clnt=Clnt}=S) -> + NSInfo = undefined, + io:format(user, "TODO fix broken write_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), Chunk = {TaggedCSum, ChunkBin}, - Res = machi_cr_client:write_chunk(Clnt, File, Offset, Chunk), + Res = 
machi_cr_client:write_chunk(Clnt, NSInfo, File, Offset, Chunk), {Res, S}; do_pb_hl_request2({high_read_chunk, File, Offset, Size, Opts}, #state{high_clnt=Clnt}=S) -> diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 8d49460..d288e23 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -68,6 +68,8 @@ from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{ req_id=ReqID, write_chunk=#mpb_ll_writechunkreq{ + namespace_version=NSVersion, + namespace=NS, epoch_id=PB_EpochID, chunk=#mpb_chunk{file_name=File, offset=Offset, @@ -75,7 +77,7 @@ from_pb_request(#mpb_ll_request{ csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}}}}) -> EpochID = conv_to_epoch_id(PB_EpochID), CSum_tag = conv_to_csum_tag(CSum_type), - {ReqID, {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag, CSum}}; + {ReqID, {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, read_chunk=#mpb_ll_readchunkreq{ @@ -411,12 +413,14 @@ to_pb_request(ReqID, {low_append_chunk, NSVersion, NS, NSLocator, EpochID, chunk_extra=ChunkExtra, preferred_file_name=Pref, flag_fail_preferred=FailPref}}; -to_pb_request(ReqID, {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag, CSum}) -> +to_pb_request(ReqID, {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}) -> PB_EpochID = conv_from_epoch_id(EpochID), CSum_type = conv_from_csum_tag(CSum_tag), PB_CSum = #mpb_chunkcsum{type=CSum_type, csum=CSum}, #mpb_ll_request{req_id=ReqID, do_not_alter=2, write_chunk=#mpb_ll_writechunkreq{ + namespace_version=NSVersion, + namespace=NS, epoch_id=PB_EpochID, chunk=#mpb_chunk{file_name=File, offset=Offset, @@ -528,7 +532,7 @@ to_pb_response(ReqID, {low_append_chunk, _NSV, _NS, _NSL, _EID, _Pfx, _Ch, _CST, _Else -> make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) end; -to_pb_response(ReqID, {low_write_chunk, _EID, _Fl, _Off, _Ch, _CST, _CS},Resp)-> 
+to_pb_response(ReqID, {low_write_chunk, _NSV, _NS, _EID, _Fl, _Off, _Ch, _CST, _CS},Resp)-> Status = conv_from_status(Resp), #mpb_ll_response{req_id=ReqID, write_chunk=#mpb_ll_writechunkresp{status=Status}}; diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl index ba6c24e..f007cd5 100644 --- a/src/machi_proxy_flu1_client.erl +++ b/src/machi_proxy_flu1_client.erl @@ -77,7 +77,7 @@ quit/1, %% Internal API - write_chunk/5, write_chunk/6, + write_chunk/6, write_chunk/7, trim_chunk/5, trim_chunk/6, %% Helpers @@ -276,14 +276,14 @@ quit(PidSpec) -> %% @doc Write a chunk (binary- or iolist-style) of data to a file %% with `Prefix' at `Offset'. -write_chunk(PidSpec, EpochID, File, Offset, Chunk) -> - write_chunk(PidSpec, EpochID, File, Offset, Chunk, infinity). +write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk) -> + write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, infinity). %% @doc Write a chunk (binary- or iolist-style) of data to a file %% with `Prefix' at `Offset'. 
-write_chunk(PidSpec, EpochID, File, Offset, Chunk, Timeout) -> - case gen_server:call(PidSpec, {req, {write_chunk, EpochID, File, Offset, Chunk}}, +write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, Timeout) -> + case gen_server:call(PidSpec, {req, {write_chunk, NSInfo, EpochID, File, Offset, Chunk}}, Timeout) of {error, written}=Err -> Size = byte_size(Chunk), @@ -382,9 +382,9 @@ make_req_fun({append_chunk, NSInfo, EpochID, make_req_fun({read_chunk, NSInfo, EpochID, File, Offset, Size, Opts}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:read_chunk(Sock, NSInfo, EpochID, File, Offset, Size, Opts) end; -make_req_fun({write_chunk, EpochID, File, Offset, Chunk}, +make_req_fun({write_chunk, NSInfo, EpochID, File, Offset, Chunk}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> - fun() -> Mod:write_chunk(Sock, EpochID, File, Offset, Chunk) end; + fun() -> Mod:write_chunk(Sock, NSInfo, EpochID, File, Offset, Chunk) end; make_req_fun({trim_chunk, EpochID, File, Offset, Size}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:trim_chunk(Sock, EpochID, File, Offset, Size) end; diff --git a/test/machi_cr_client_test.erl b/test/machi_cr_client_test.erl index def894e..440daab 100644 --- a/test/machi_cr_client_test.erl +++ b/test/machi_cr_client_test.erl @@ -138,7 +138,7 @@ smoke_test2() -> [{error, not_written} = machi_flu1_client:read_chunk( Host, PortBase+X, NSInfo, EpochID, File1, FooOff1, Size1, []) || X <- [0,1,2] ], - ok = machi_flu1_client:write_chunk(Host, PortBase+0, EpochID, + ok = machi_flu1_client:write_chunk(Host, PortBase+0, NSInfo, EpochID, File1, FooOff1, Chunk1), {ok, {[{_, FooOff1, Chunk1, _}], []}} = machi_flu1_client:read_chunk(Host, PortBase+0, NSInfo, EpochID, @@ -155,7 +155,7 @@ smoke_test2() -> FooOff2 = Off1 + (2*1024*1024), Chunk2 = <<"Middle repair chunk">>, Size2 = size(Chunk2), - ok = machi_flu1_client:write_chunk(Host, PortBase+1, EpochID, + ok = machi_flu1_client:write_chunk(Host, PortBase+1, NSInfo, EpochID, 
File1, FooOff2, Chunk2), {ok, {[{_, FooOff2, Chunk2, _}], []}} = machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff2, Size2, []), @@ -196,7 +196,7 @@ smoke_test2() -> %% {error,not_written} = machi_cr_client:read_chunk(C1, NSInfo, File10, %% Offx, Size10), {ok, {Offx,Size10,File10}} = - machi_cr_client:write_chunk(C1, File10, Offx, Chunk10), + machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10), {ok, {[{_, Offx, Chunk10, _}], []}} = machi_cr_client:read_chunk(C1, NSInfo, File10, Offx, Size10, []) end || Seq <- lists:seq(1, Extra10)], @@ -286,7 +286,7 @@ witness_smoke_test2() -> File10 = File1, Offx = Off1 + (1 * Size10), {error, partition} = - machi_cr_client:write_chunk(C1, File10, Offx, Chunk10, 1*1000), + machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10, 1*1000), ok after diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl index 45a6aa3..00b66e9 100644 --- a/test/machi_flu1_test.erl +++ b/test/machi_flu1_test.erl @@ -162,9 +162,9 @@ flu_smoke_test() -> Len2 = byte_size(Chunk2), Off2 = ?MINIMUM_OFFSET + 77, File2 = "smoke-whole-file^^0^1^1", - ok = ?FLU_C:write_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH, + ok = ?FLU_C:write_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File2, Off2, Chunk2), - {error, bad_arg} = ?FLU_C:write_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH, + {error, bad_arg} = ?FLU_C:write_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, BadFile, Off2, Chunk2), {ok, {[{_, Off2, Chunk2, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File2, Off2, Len2, []), diff --git a/test/machi_proxy_flu1_client_test.erl b/test/machi_proxy_flu1_client_test.erl index 43bf97a..1f32ab6 100644 --- a/test/machi_proxy_flu1_client_test.erl +++ b/test/machi_proxy_flu1_client_test.erl @@ -282,20 +282,20 @@ flu_restart_test2() -> end, fun(run) -> ok = - ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1, + ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1, Data, infinity), ok; (line) -> io:format("line ~p, ", [?LINE]); - 
(stop) -> ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1, + (stop) -> ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1, Data, infinity) end, fun(run) -> {error, written} = - ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1, + ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1, Dataxx, infinity), ok; (line) -> io:format("line ~p, ", [?LINE]); - (stop) -> ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1, + (stop) -> ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1, Dataxx, infinity) end ], -- 2.45.2 From 0a8c4156c293ff2b1982138f09aad373bd3edc84 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 29 Dec 2015 14:06:02 +0900 Subject: [PATCH 05/53] trim_chunk API refactoring; all tests pass; todo tasks remain --- src/machi.proto | 15 ++++++--- src/machi_cr_client.erl | 56 ++++++++++++++++----------------- src/machi_flu1_client.erl | 10 +++--- src/machi_flu1_net_server.erl | 6 ++-- src/machi_pb_translate.erl | 10 ++++-- src/machi_proxy_flu1_client.erl | 14 ++++----- 6 files changed, 62 insertions(+), 49 deletions(-) diff --git a/src/machi.proto b/src/machi.proto index 1955230..716e696 100644 --- a/src/machi.proto +++ b/src/machi.proto @@ -454,11 +454,16 @@ message Mpb_LL_ReadChunkResp { // Low level API: trim_chunk() message Mpb_LL_TrimChunkReq { - required Mpb_EpochID epoch_id = 1; - required string file = 2; - required uint64 offset = 3; - required uint32 size = 4; - optional uint32 trigger_gc = 5 [default=0]; + // General namespace arguments + required uint32 namespace_version = 1; + required string namespace = 2; + + required Mpb_EpochID epoch_id = 10; + required string file = 11; + required uint64 offset = 12; + required uint32 size = 13; + + optional uint32 trigger_gc = 20 [default=0]; } message Mpb_LL_TrimChunkResp { diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index 95d86f9..940d199 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -122,7 +122,7 @@ append_chunk/6, append_chunk/7, write_chunk/5, 
write_chunk/6, read_chunk/6, read_chunk/7, - trim_chunk/4, trim_chunk/5, + trim_chunk/5, trim_chunk/6, checksum_list/2, checksum_list/3, list_files/1, list_files/2, @@ -210,14 +210,14 @@ read_chunk(PidSpec, NSInfo, File, Offset, Size, Opts, Timeout0) -> %% @doc Trim a chunk of data of size `Size' from `File' at `Offset'. -trim_chunk(PidSpec, File, Offset, Size) -> - trim_chunk(PidSpec, File, Offset, Size, ?DEFAULT_TIMEOUT). +trim_chunk(PidSpec, NSInfo, File, Offset, Size) -> + trim_chunk(PidSpec, NSInfo, File, Offset, Size, ?DEFAULT_TIMEOUT). %% @doc Trim a chunk of data of size `Size' from `File' at `Offset'. -trim_chunk(PidSpec, File, Offset, Size, Timeout0) -> +trim_chunk(PidSpec, NSInfo, File, Offset, Size, Timeout0) -> {TO, Timeout} = timeout(Timeout0), - gen_server:call(PidSpec, {req, {trim_chunk, File, Offset, Size, TO}}, + gen_server:call(PidSpec, {req, {trim_chunk, NSInfo, File, Offset, Size, TO}}, Timeout). %% @doc Fetch the list of chunk checksums for `File'. @@ -290,8 +290,8 @@ handle_call2({write_chunk, NSInfo, File, Offset, Chunk, TO}, _From, S) -> do_write_head(NSInfo, File, Offset, Chunk, 0, os:timestamp(), TO, S); handle_call2({read_chunk, NSInfo, File, Offset, Size, Opts, TO}, _From, S) -> do_read_chunk(NSInfo, File, Offset, Size, Opts, 0, os:timestamp(), TO, S); -handle_call2({trim_chunk, File, Offset, Size, TO}, _From, S) -> - do_trim_chunk(File, Offset, Size, 0, os:timestamp(), TO, S); +handle_call2({trim_chunk, NSInfo, File, Offset, Size, TO}, _From, S) -> + do_trim_chunk(NSInfo, File, Offset, Size, 0, os:timestamp(), TO, S); handle_call2({checksum_list, File, TO}, _From, S) -> do_checksum_list(File, 0, os:timestamp(), TO, S); handle_call2({list_files, TO}, _From, S) -> @@ -593,10 +593,10 @@ do_read_chunk2(NSInfo, File, Offset, Size, Opts, Depth, STime, TO, {reply, Err, S} end. 
-do_trim_chunk(File, Offset, Size, 0=Depth, STime, TO, S) ->
-    do_trim_chunk(File, Offset, Size, Depth+1, STime, TO, S);
+do_trim_chunk(NSInfo, File, Offset, Size, 0=Depth, STime, TO, S) ->
+    do_trim_chunk(NSInfo, File, Offset, Size, Depth+1, STime, TO, S);
 
-do_trim_chunk(File, Offset, Size, Depth, STime, TO, #state{proj=P}=S) ->
+do_trim_chunk(NSInfo, File, Offset, Size, Depth, STime, TO, #state{proj=P}=S) ->
     sleep_a_while(Depth),
     DiffMs = timer:now_diff(os:timestamp(), STime) div 1000,
     if DiffMs > TO ->
@@ -610,40 +610,40 @@ do_trim_chunk(File, Offset, Size, Depth, STime, TO, #state{proj=P}=S) ->
             case S2#state.proj of
                 P2 when P2 == undefined orelse
                         P2#projection_v1.upi == [] ->
-                    do_trim_chunk(File, Offset, Size, Depth + 1,
+                    do_trim_chunk(NSInfo, File, Offset, Size, Depth + 1,
                                   STime, TO, S2);
                 _ ->
-                    do_trim_chunk2(File, Offset, Size, Depth + 1,
+                    do_trim_chunk2(NSInfo, File, Offset, Size, Depth + 1,
                                    STime, TO, S2)
             end
     end.
 
-do_trim_chunk2(File, Offset, Size, Depth, STime, TO,
+do_trim_chunk2(NSInfo, File, Offset, Size, Depth, STime, TO,
                #state{epoch_id=EpochID, proj=P, proxies_dict=PD}=S) ->
     [HeadFLU|RestFLUs] = mutation_flus(P),
     Proxy = orddict:fetch(HeadFLU, PD),
-    case ?FLU_PC:trim_chunk(Proxy, EpochID, File, Offset, Size, ?TIMEOUT) of
+    case ?FLU_PC:trim_chunk(Proxy, NSInfo, EpochID, File, Offset, Size, ?TIMEOUT) of
         ok ->
-            do_trim_midtail(RestFLUs, undefined, File, Offset, Size,
+            do_trim_midtail(RestFLUs, undefined, NSInfo, File, Offset, Size,
                             [HeadFLU], 0, STime, TO, S);
         {error, trimmed} ->
            %% Maybe the trim had failed in the middle of the tail, so re-run
           %% trim across the whole chain.
- do_trim_midtail(RestFLUs, undefined, File, Offset, Size, + do_trim_midtail(RestFLUs, undefined, NSInfo, File, Offset, Size, [HeadFLU], 0, STime, TO, S); {error, bad_checksum}=BadCS -> {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_trim_chunk(File, Offset, Size, Depth, STime, TO, S) + do_trim_chunk(NSInfo, File, Offset, Size, Depth, STime, TO, S) end. -do_trim_midtail(RestFLUs, Prefix, File, Offset, Size, +do_trim_midtail(RestFLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth, STime, TO, S) when RestFLUs == [] orelse Depth == 0 -> - do_trim_midtail2(RestFLUs, Prefix, File, Offset, Size, + do_trim_midtail2(RestFLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth + 1, STime, TO, S); -do_trim_midtail(_RestFLUs, Prefix, File, Offset, Size, +do_trim_midtail(_RestFLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth, STime, TO, #state{proj=P}=S) -> %% io:format(user, "midtail sleep2,", []), sleep_a_while(Depth), @@ -670,38 +670,38 @@ do_trim_midtail(_RestFLUs, Prefix, File, Offset, Size, if Prefix == undefined -> % atom! not binary()!! {error, partition}; true -> - do_trim_chunk(Prefix, Offset, Size, + do_trim_chunk(NSInfo, Prefix, Offset, Size, Depth, STime, TO, S2) end; RestFLUs3 -> - do_trim_midtail2(RestFLUs3, Prefix, File, Offset, Size, + do_trim_midtail2(RestFLUs3, Prefix, NSInfo, File, Offset, Size, Ws, Depth + 1, STime, TO, S2) end end end. 
-do_trim_midtail2([], _Prefix, _File, _Offset, _Size, +do_trim_midtail2([], _Prefix, _NSInfo, _File, _Offset, _Size, _Ws, _Depth, _STime, _TO, S) -> %% io:format(user, "ok!\n", []), {reply, ok, S}; -do_trim_midtail2([FLU|RestFLUs]=FLUs, Prefix, File, Offset, Size, +do_trim_midtail2([FLU|RestFLUs]=FLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth, STime, TO, #state{epoch_id=EpochID, proxies_dict=PD}=S) -> Proxy = orddict:fetch(FLU, PD), - case ?FLU_PC:trim_chunk(Proxy, EpochID, File, Offset, Size, ?TIMEOUT) of + case ?FLU_PC:trim_chunk(Proxy, NSInfo, EpochID, File, Offset, Size, ?TIMEOUT) of ok -> %% io:format(user, "write ~w,", [FLU]), - do_trim_midtail2(RestFLUs, Prefix, File, Offset, Size, + do_trim_midtail2(RestFLUs, Prefix, NSInfo, File, Offset, Size, [FLU|Ws], Depth, STime, TO, S); {error, trimmed} -> - do_trim_midtail2(RestFLUs, Prefix, File, Offset, Size, + do_trim_midtail2(RestFLUs, Prefix, NSInfo, File, Offset, Size, [FLU|Ws], Depth, STime, TO, S); {error, bad_checksum}=BadCS -> %% TODO: alternate strategy? {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_trim_midtail(FLUs, Prefix, File, Offset, Size, + do_trim_midtail(FLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth, STime, TO, S) end. diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index 76e27cf..550c652 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -81,7 +81,7 @@ %% For "internal" replication only. -export([ write_chunk/6, write_chunk/7, - trim_chunk/5, + trim_chunk/6, delete_migration/3, delete_migration/4, trunc_hack/3, trunc_hack/4 ]). @@ -487,16 +487,18 @@ write_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Chunk) %% @doc Restricted API: Write a chunk of already-sequenced data to %% `File' at `Offset'. 
--spec trim_chunk(port_wrap(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size()) -> +-spec trim_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size()) -> ok | {error, machi_dt:error_general()} | {error, term()}. -trim_chunk(Sock, EpochID, File0, Offset, Size) +trim_chunk(Sock, NSInfo0, EpochID, File0, Offset, Size) when Offset >= ?MINIMUM_OFFSET -> ReqID = <<"id">>, + NSInfo = machi_util:ns_info_default(NSInfo0), + #ns_info{version=NSVersion, name=NS} = NSInfo, File = machi_util:make_binary(File0), true = (Offset >= ?MINIMUM_OFFSET), Req = machi_pb_translate:to_pb_request( ReqID, - {low_trim_chunk, EpochID, File, Offset, Size, 0}), + {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, 0}), do_pb_request_common(Sock, ReqID, Req). %% @doc Restricted API: Delete a file after it has been successfully diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index ab03b1a..26c5bc2 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -294,7 +294,7 @@ do_pb_ll_request3({low_write_chunk, _NSVersion, _NS, _EpochID, File, Offset, Chu do_pb_ll_request3({low_read_chunk, _NSVersion, _NS, _EpochID, File, Offset, Size, Opts}, #state{witness=false} = S) -> {do_server_read_chunk(File, Offset, Size, Opts, S), S}; -do_pb_ll_request3({low_trim_chunk, _EpochID, File, Offset, Size, TriggerGC}, +do_pb_ll_request3({low_trim_chunk, _NSVersion, _NS, _EpochID, File, Offset, Size, TriggerGC}, #state{witness=false}=S) -> {do_server_trim_chunk(File, Offset, Size, TriggerGC, S), S}; do_pb_ll_request3({low_checksum_list, _EpochID, File}, @@ -601,7 +601,9 @@ do_pb_hl_request2({high_read_chunk, File, Offset, Size, Opts}, {Res, S}; do_pb_hl_request2({high_trim_chunk, File, Offset, Size}, #state{high_clnt=Clnt}=S) -> - Res = machi_cr_client:trim_chunk(Clnt, File, Offset, Size), + NSInfo = undefined, + io:format(user, 
"TODO fix broken trim_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), + Res = machi_cr_client:trim_chunk(Clnt, NSInfo, File, Offset, Size), {Res, S}; do_pb_hl_request2({high_checksum_list, File}, #state{high_clnt=Clnt}=S) -> Res = machi_cr_client:checksum_list(Clnt, File), diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index d288e23..6d5ab39 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -99,6 +99,8 @@ from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{ req_id=ReqID, trim_chunk=#mpb_ll_trimchunkreq{ + namespace_version=NSVersion, + namespace=NS, epoch_id=PB_EpochID, file=File, offset=Offset, @@ -106,7 +108,7 @@ from_pb_request(#mpb_ll_request{ trigger_gc=PB_TriggerGC}}) -> EpochID = conv_to_epoch_id(PB_EpochID), TriggerGC = conv_to_boolean(PB_TriggerGC), - {ReqID, {low_trim_chunk, EpochID, File, Offset, Size, TriggerGC}}; + {ReqID, {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, TriggerGC}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, checksum_list=#mpb_ll_checksumlistreq{ @@ -444,10 +446,12 @@ to_pb_request(ReqID, {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size flag_no_checksum=machi_util:bool2int(FNChecksum), flag_no_chunk=machi_util:bool2int(FNChunk), flag_needs_trimmed=machi_util:bool2int(NeedsTrimmed)}}; -to_pb_request(ReqID, {low_trim_chunk, EpochID, File, Offset, Size, TriggerGC}) -> +to_pb_request(ReqID, {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, TriggerGC}) -> PB_EpochID = conv_from_epoch_id(EpochID), #mpb_ll_request{req_id=ReqID, do_not_alter=2, trim_chunk=#mpb_ll_trimchunkreq{ + namespace_version=NSVersion, + namespace=NS, epoch_id=PB_EpochID, file=File, offset=Offset, @@ -563,7 +567,7 @@ to_pb_response(ReqID, {low_read_chunk, _NSV, _NS, _EID, _Fl, _Off, _Sz, _Opts}, _Else -> make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) end; -to_pb_response(ReqID, {low_trim_chunk, _, _, _, _, _}, Resp) -> +to_pb_response(ReqID, {low_trim_chunk, _, 
_, _, _, _, _, _}, Resp) ->
     case Resp of
         ok ->
             #mpb_ll_response{req_id=ReqID,
diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl
index f007cd5..99c277a 100644
--- a/src/machi_proxy_flu1_client.erl
+++ b/src/machi_proxy_flu1_client.erl
@@ -78,7 +78,7 @@
 
     %% Internal API
     write_chunk/6, write_chunk/7,
-    trim_chunk/5, trim_chunk/6,
+    trim_chunk/6, trim_chunk/7,
 
     %% Helpers
     stop_proxies/1, start_proxies/1
@@ -301,15 +301,15 @@ write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, Timeout) ->
     end.
 
-trim_chunk(PidSpec, EpochID, File, Offset, Size) ->
-    trim_chunk(PidSpec, EpochID, File, Offset, Size, infinity).
+trim_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size) ->
+    trim_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, infinity).
 
 %% @doc Trim a chunk of data of size `Size' from `File' at `Offset'.
 
-trim_chunk(PidSpec, EpochID, File, Offset, Chunk, Timeout) ->
+trim_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, Timeout) ->
     gen_server:call(PidSpec,
-                    {req, {trim_chunk, EpochID, File, Offset, Chunk}},
+                    {req, {trim_chunk, NSInfo, EpochID, File, Offset, Chunk}},
                     Timeout).
%%%%%%%%%%%%%%%%%%%%%%%%%%% @@ -385,9 +385,9 @@ make_req_fun({read_chunk, NSInfo, EpochID, File, Offset, Size, Opts}, make_req_fun({write_chunk, NSInfo, EpochID, File, Offset, Chunk}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:write_chunk(Sock, NSInfo, EpochID, File, Offset, Chunk) end; -make_req_fun({trim_chunk, EpochID, File, Offset, Size}, +make_req_fun({trim_chunk, NSInfo, EpochID, File, Offset, Size}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> - fun() -> Mod:trim_chunk(Sock, EpochID, File, Offset, Size) end; + fun() -> Mod:trim_chunk(Sock, NSInfo, EpochID, File, Offset, Size) end; make_req_fun({checksum_list, EpochID, File}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:checksum_list(Sock, EpochID, File) end; -- 2.45.2 From 5a65a164c3ee1760aebe3ece8b5d28aadd7c6ac2 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 29 Dec 2015 16:01:52 +0900 Subject: [PATCH 06/53] Remove straggler CoC items in code --- src/machi.proto | 2 ++ src/machi_dt.erl | 1 + src/machi_flu1_net_server.erl | 2 +- src/machi_flu_metadata_mgr.erl | 8 +++--- src/machi_util.erl | 39 +++++++++++++++--------------- test/machi_pb_high_client_test.erl | 2 -- 6 files changed, 27 insertions(+), 27 deletions(-) diff --git a/src/machi.proto b/src/machi.proto index 716e696..42f38c8 100644 --- a/src/machi.proto +++ b/src/machi.proto @@ -254,6 +254,8 @@ message Mpb_ChecksumListResp { // High level API: list_files() request & response message Mpb_ListFilesReq { + // TODO: Add flag for file glob/regexp/other filter type + // TODO: What else could go wrong? 
} message Mpb_ListFilesResp { diff --git a/src/machi_dt.erl b/src/machi_dt.erl index 13e7836..8becb14 100644 --- a/src/machi_dt.erl +++ b/src/machi_dt.erl @@ -77,6 +77,7 @@ file_prefix/0, inet_host/0, inet_port/0, + locator/0, namespace/0, namespace_version/0, ns_info/0, diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 26c5bc2..827807d 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -220,7 +220,7 @@ do_pb_ll_request(PB_request, S) -> {Rs, NewS} = do_pb_ll_request3(Cmd0, S), {RqID, Cmd0, Rs, NewS}; {RqID, Cmd0} -> - io:format(user, "TODO: epoch at pos 2 in tuple is probably broken now, wheeeeeeeeeeeeeeeeee\n", []), + io:format(user, "TODO: epoch at pos 2 in tuple is probably broken now, whee: ~p\n", [Cmd0]), EpochID = element(2, Cmd0), % by common convention {Rs, NewS} = do_pb_ll_request2(EpochID, Cmd0, S), {RqID, Cmd0, Rs, NewS} diff --git a/src/machi_flu_metadata_mgr.erl b/src/machi_flu_metadata_mgr.erl index 66274b3..b9c26c9 100644 --- a/src/machi_flu_metadata_mgr.erl +++ b/src/machi_flu_metadata_mgr.erl @@ -34,6 +34,7 @@ -module(machi_flu_metadata_mgr). -behaviour(gen_server). +-include("machi.hrl"). -define(MAX_MGRS, 10). %% number of managers to start by default. -define(HASH(X), erlang:phash2(X)). %% hash algorithm to use @@ -185,17 +186,16 @@ handle_info({'DOWN', Mref, process, Pid, file_rollover}, State = #state{ fluname tid = Tid }) -> lager:info("file proxy ~p shutdown because of file rollover", [Pid]), R = get_md_record_by_mref(Tid, Mref), - {Prefix, CoC_Namespace, CoC_Locator, _, _} = + {Prefix, NS, NSLocator, _, _} = machi_util:parse_filename(R#md.filename), - %% CoC_Namespace = list_to_binary(CoC_Namespace_str), - %% CoC_Locator = list_to_integer(CoC_Locator_str), %% We only increment the counter here. 
The filename will be generated on the %% next append request to that prefix and since the filename will have a new %% sequence number it probably will be associated with a different metadata %% manager. That's why we don't want to generate a new file name immediately %% and use it to start a new file proxy. - ok = machi_flu_filename_mgr:increment_prefix_sequence(FluName, {coc, CoC_Namespace, CoC_Locator}, {prefix, Prefix}), + NSInfo = #ns_info{name=NS, locator=NSLocator}, + ok = machi_flu_filename_mgr:increment_prefix_sequence(FluName, NSInfo, {prefix, Prefix}), %% purge our ets table of this entry completely since it is likely the %% new filename (whenever it comes) will be in a different manager than diff --git a/src/machi_util.erl b/src/machi_util.erl index 59f7f2e..d4704d9 100644 --- a/src/machi_util.erl +++ b/src/machi_util.erl @@ -71,10 +71,10 @@ make_regname(Prefix) when is_list(Prefix) -> -spec make_config_filename(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> string(). -make_config_filename(DataDir, NS, Locator, Prefix) -> - Locator_str = int_to_hexstr(Locator, 32), +make_config_filename(DataDir, NS, NSLocator, Prefix) -> + NSLocator_str = int_to_hexstr(NSLocator, 32), lists:flatten(io_lib:format("~s/config/~s^~s^~s", - [DataDir, Prefix, NS, Locator_str])). + [DataDir, Prefix, NS, NSLocator_str])). %% @doc Calculate a config file path, by common convention. @@ -105,17 +105,17 @@ make_checksum_filename(DataDir, FileName) -> -spec make_data_filename(string(), machi_dt:namespace(), machi_dt:locator(), string(), atom()|string()|binary(), integer()|string()) -> {binary(), string()}. 
-make_data_filename(DataDir, NS, Locator, Prefix, SequencerName, FileNum) +make_data_filename(DataDir, NS, NSLocator, Prefix, SequencerName, FileNum) when is_integer(FileNum) -> - Locator_str = int_to_hexstr(Locator, 32), + NSLocator_str = int_to_hexstr(NSLocator, 32), File = erlang:iolist_to_binary(io_lib:format("~s^~s^~s^~s^~w", - [Prefix, NS, Locator_str, SequencerName, FileNum])), + [Prefix, NS, NSLocator_str, SequencerName, FileNum])), make_data_filename2(DataDir, File); -make_data_filename(DataDir, NS, Locator, Prefix, SequencerName, String) +make_data_filename(DataDir, NS, NSLocator, Prefix, SequencerName, String) when is_list(String) -> - Locator_str = int_to_hexstr(Locator, 32), + NSLocator_str = int_to_hexstr(NSLocator, 32), File = erlang:iolist_to_binary(io_lib:format("~s^~s^~s^~s^~s", - [Prefix, NS, Locator_str, SequencerName, string])), + [Prefix, NS, NSLocator_str, SequencerName, string])), make_data_filename2(DataDir, File). make_data_filename2(DataDir, File) -> @@ -155,8 +155,8 @@ is_valid_filename(Filename) -> %% The components will be: %%
 %% <ul>
 %% <li> Prefix </li>
-%% <li> CoC Namespace </li>
-%% <li> CoC locator </li>
+%% <li> Cluster namespace </li>
+%% <li> Cluster locator </li>
 %% <li> UUID </li>
 %% <li> Sequence number </li>
 %% </ul>
@@ -165,27 +165,26 @@ is_valid_filename(Filename) -> -spec parse_filename( Filename :: string() ) -> {} | {string(), machi_dt:namespace(), machi_dt:locator(), string(), string() }. parse_filename(Filename) -> case string:tokens(Filename, "^") of - [Prefix, CoC_NS, CoC_Loc, UUID, SeqNo] -> - {Prefix, CoC_NS, list_to_integer(CoC_Loc), UUID, SeqNo}; - [Prefix, CoC_Loc, UUID, SeqNo] -> + [Prefix, NS, NSLocator, UUID, SeqNo] -> + {Prefix, NS, list_to_integer(NSLocator), UUID, SeqNo}; + [Prefix, NSLocator, UUID, SeqNo] -> %% string:tokens() doesn't consider "foo^^bar" as 3 tokens {sigh} case re:replace(Filename, "[^^]+", "x", [global,{return,binary}]) of <<"x^^x^x^x">> -> - {Prefix, <<"">>, list_to_integer(CoC_Loc), UUID, SeqNo}; + {Prefix, <<"">>, list_to_integer(NSLocator), UUID, SeqNo}; _ -> {} end; _ -> {} end. - %% @doc Read the file size of a config file, which is used as the %% basis for a minimum sequence number. -spec read_max_filenum(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> non_neg_integer(). -read_max_filenum(DataDir, NS, Locator, Prefix) -> - case file:read_file_info(make_config_filename(DataDir, NS, Locator, Prefix)) of +read_max_filenum(DataDir, NS, NSLocator, Prefix) -> + case file:read_file_info(make_config_filename(DataDir, NS, NSLocator, Prefix)) of {error, enoent} -> 0; {ok, FI} -> @@ -197,9 +196,9 @@ read_max_filenum(DataDir, NS, Locator, Prefix) -> -spec increment_max_filenum(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> ok | {error, term()}. 
-increment_max_filenum(DataDir, NS, Locator, Prefix) -> +increment_max_filenum(DataDir, NS, NSLocator, Prefix) -> try - {ok, FH} = file:open(make_config_filename(DataDir, NS, Locator, Prefix), [append]), + {ok, FH} = file:open(make_config_filename(DataDir, NS, NSLocator, Prefix), [append]), ok = file:write(FH, "x"), ok = file:sync(FH), ok = file:close(FH) diff --git a/test/machi_pb_high_client_test.erl b/test/machi_pb_high_client_test.erl index 2371076..ba2f10a 100644 --- a/test/machi_pb_high_client_test.erl +++ b/test/machi_pb_high_client_test.erl @@ -56,8 +56,6 @@ smoke_test2() -> %% a separate test module? Or separate test func? {error, _} = ?C:auth(Clnt, "foo", "bar"), - CoC_n = "", % CoC_namespace (not implemented) - CoC_l = 0, % CoC_locator (not implemented) Prefix = <<"prefix">>, Chunk1 = <<"Hello, chunk!">>, NS = "", -- 2.45.2 From e24acb7246397f89f94f049a704f5c0a7c4e7024 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 29 Dec 2015 17:24:09 +0900 Subject: [PATCH 07/53] Clean up internal protocol<->tuple mappings for correct epoch checking --- src/machi.proto | 3 +- src/machi_admin_util.erl | 4 +- src/machi_chain_repair.erl | 4 +- src/machi_cr_client.erl | 4 +- src/machi_flu1_client.erl | 32 ++++++++------- src/machi_flu1_net_server.erl | 46 +++++++++------------- src/machi_pb_translate.erl | 56 +++++++++++++-------------- src/machi_proxy_flu1_client.erl | 14 +++---- src/machi_yessir_client.erl | 8 ++-- test/machi_flu1_test.erl | 13 ++----- test/machi_flu_psup_test.erl | 4 +- test/machi_proxy_flu1_client_test.erl | 6 +-- 12 files changed, 90 insertions(+), 104 deletions(-) diff --git a/src/machi.proto b/src/machi.proto index 42f38c8..58e6175 100644 --- a/src/machi.proto +++ b/src/machi.proto @@ -475,8 +475,7 @@ message Mpb_LL_TrimChunkResp { // Low level API: checksum_list() message Mpb_LL_ChecksumListReq { - required Mpb_EpochID epoch_id = 1; - required string file = 2; + required string file = 1; } message Mpb_LL_ChecksumListResp { diff 
--git a/src/machi_admin_util.erl b/src/machi_admin_util.erl index 9bafd8a..19264cb 100644 --- a/src/machi_admin_util.erl +++ b/src/machi_admin_util.erl @@ -98,9 +98,9 @@ verify_file_checksums_remote2(Sock1, EpochID, File) -> end, verify_file_checksums_common(Sock1, EpochID, File, ReadChunk). -verify_file_checksums_common(Sock1, EpochID, File, ReadChunk) -> +verify_file_checksums_common(Sock1, _EpochID, File, ReadChunk) -> try - case ?FLU_C:checksum_list(Sock1, EpochID, File) of + case ?FLU_C:checksum_list(Sock1, File) of {ok, InfoBin} -> Info = machi_csum_table:split_checksum_list_blob_decode(InfoBin), Res = lists:foldl(verify_chunk_checksum(File, ReadChunk), diff --git a/src/machi_chain_repair.erl b/src/machi_chain_repair.erl index 1f6dfc6..040c1da 100644 --- a/src/machi_chain_repair.erl +++ b/src/machi_chain_repair.erl @@ -207,7 +207,7 @@ make_repair_compare_fun(SrcFLU) -> T_a =< T_b end. -make_repair_directives(ConsistencyMode, RepairMode, File, Size, EpochID, +make_repair_directives(ConsistencyMode, RepairMode, File, Size, _EpochID, Verb, Src, FLUs0, ProxiesDict, ETS) -> true = (Size < ?MAX_OFFSET), FLUs = lists:usort(FLUs0), @@ -216,7 +216,7 @@ make_repair_directives(ConsistencyMode, RepairMode, File, Size, EpochID, Proxy = orddict:fetch(FLU, ProxiesDict), OffSzCs = case machi_proxy_flu1_client:checksum_list( - Proxy, EpochID, File, ?LONG_TIMEOUT) of + Proxy, File, ?LONG_TIMEOUT) of {ok, InfoBin} -> machi_csum_table:split_checksum_list_blob_decode(InfoBin); {error, no_such_file} -> diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index 940d199..173ed77 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -895,9 +895,9 @@ do_checksum_list(File, Depth, STime, TO, #state{proj=P}=S) -> end. 
do_checksum_list2(File, Depth, STime, TO, - #state{epoch_id=EpochID, proj=P, proxies_dict=PD}=S) -> + #state{proj=P, proxies_dict=PD}=S) -> Proxy = orddict:fetch(lists:last(readonly_flus(P)), PD), - case ?FLU_PC:checksum_list(Proxy, EpochID, File, ?TIMEOUT) of + case ?FLU_PC:checksum_list(Proxy, File, ?TIMEOUT) of {ok, _}=OK -> {reply, OK, S}; {error, Retry} diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index 550c652..74a6323 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -58,7 +58,7 @@ append_chunk/6, append_chunk/7, append_chunk/8, append_chunk/9, read_chunk/7, read_chunk/8, - checksum_list/3, checksum_list/4, + checksum_list/2, checksum_list/3, list_files/2, list_files/3, wedge_status/1, wedge_status/2, @@ -180,12 +180,12 @@ io:format(user, "dbgyo ~s LINE ~p NSInfo0 ~p NSInfo ~p\n", [?MODULE, ?LINE, NSIn %% @doc Fetch the list of chunk checksums for `File'. --spec checksum_list(port_wrap(), machi_dt:epoch_id(), machi_dt:file_name()) -> +-spec checksum_list(port_wrap(), machi_dt:file_name()) -> {ok, binary()} | {error, machi_dt:error_general() | 'no_such_file' | 'partial_read'} | {error, term()}. -checksum_list(Sock, EpochID, File) -> - checksum_list2(Sock, EpochID, File). +checksum_list(Sock, File) -> + checksum_list2(Sock, File). %% @doc Fetch the list of chunk checksums for `File'. %% @@ -209,13 +209,13 @@ checksum_list(Sock, EpochID, File) -> %% Details of the encoding used inside the `binary()' blog can be found %% in the EDoc comments for {@link machi_flu1:decode_csum_file_entry/1}. --spec checksum_list(machi_dt:inet_host(), machi_dt:inet_port(), machi_dt:epoch_id(), machi_dt:file_name()) -> +-spec checksum_list(machi_dt:inet_host(), machi_dt:inet_port(), machi_dt:file_name()) -> {ok, binary()} | {error, machi_dt:error_general() | 'no_such_file'} | {error, term()}. 
-checksum_list(Host, TcpPort, EpochID, File) when is_integer(TcpPort) -> +checksum_list(Host, TcpPort, File) when is_integer(TcpPort) -> Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), try - checksum_list2(Sock, EpochID, File) + checksum_list2(Sock, File) after disconnect(Sock) end. @@ -567,9 +567,11 @@ append_chunk2(Sock, NSInfo, EpochID, machi_util:unmake_tagged_csum(CSum0) end, #ns_info{version=NSVersion, name=NS, locator=NSLocator} = NSInfo, + %% NOTE: The tuple position of NSLocator is a bit odd, because EpochID + %% _must_ be in the 4th position (as NSV & NS must be in 2nd & 3rd). Req = machi_pb_translate:to_pb_request( ReqID, - {low_append_chunk, NSVersion, NS, NSLocator, EpochID, + {low_append_chunk, NSVersion, NS, EpochID, NSLocator, Prefix, Chunk, CSum_tag, CSum, Opts}), do_pb_request_common(Sock, ReqID, Req, true, Timeout). @@ -594,37 +596,37 @@ write_chunk2(Sock, NSInfo, EpochID, File0, Offset, Chunk0) -> list2(Sock, EpochID) -> ReqID = <<"id">>, Req = machi_pb_translate:to_pb_request( - ReqID, {low_list_files, EpochID}), + ReqID, {low_skip_wedge, {low_list_files, EpochID}}), do_pb_request_common(Sock, ReqID, Req). wedge_status2(Sock) -> ReqID = <<"id">>, Req = machi_pb_translate:to_pb_request( - ReqID, {low_wedge_status, undefined}), + ReqID, {low_skip_wedge, {low_wedge_status}}), do_pb_request_common(Sock, ReqID, Req). echo2(Sock, Message) -> ReqID = <<"id">>, Req = machi_pb_translate:to_pb_request( - ReqID, {low_echo, undefined, Message}), + ReqID, {low_skip_wedge, {low_echo, Message}}), do_pb_request_common(Sock, ReqID, Req). -checksum_list2(Sock, EpochID, File) -> +checksum_list2(Sock, File) -> ReqID = <<"id">>, Req = machi_pb_translate:to_pb_request( - ReqID, {low_checksum_list, EpochID, File}), + ReqID, {low_skip_wedge, {low_checksum_list, File}}), do_pb_request_common(Sock, ReqID, Req). 
delete_migration2(Sock, EpochID, File) -> ReqID = <<"id">>, Req = machi_pb_translate:to_pb_request( - ReqID, {low_delete_migration, EpochID, File}), + ReqID, {low_skip_wedge, {low_delete_migration, EpochID, File}}), do_pb_request_common(Sock, ReqID, Req). trunc_hack2(Sock, EpochID, File) -> ReqID = <<"id-trunc">>, Req = machi_pb_translate:to_pb_request( - ReqID, {low_trunc_hack, EpochID, File}), + ReqID, {low_skip_wedge, {low_trunc_hack, EpochID, File}}), do_pb_request_common(Sock, ReqID, Req). get_latest_epochid2(Sock, ProjType) -> diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 827807d..8ec237e 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -209,31 +209,31 @@ do_pb_ll_request(#mpb_ll_request{req_id=ReqID}, #state{pb_mode=high}=S) -> {machi_pb_translate:to_pb_response(ReqID, unused, Result), S}; do_pb_ll_request(PB_request, S) -> Req = machi_pb_translate:from_pb_request(PB_request), - %% io:format(user, "[~w] do_pb_ll_request Req: ~w~n", [S#state.flu_name, Req]), {ReqID, Cmd, Result, S2} = case Req of - {RqID, {LowCmd, _}=Cmd0} - when LowCmd =:= low_proj; - LowCmd =:= low_wedge_status; - LowCmd =:= low_list_files -> + {RqID, {low_skip_wedge, LowSubCmd}=Cmd0} -> %% Skip wedge check for these unprivileged commands + {Rs, NewS} = do_pb_ll_request3(LowSubCmd, S), + {RqID, Cmd0, Rs, NewS}; + {RqID, {low_proj, _LowSubCmd}=Cmd0} -> {Rs, NewS} = do_pb_ll_request3(Cmd0, S), {RqID, Cmd0, Rs, NewS}; {RqID, Cmd0} -> - io:format(user, "TODO: epoch at pos 2 in tuple is probably broken now, whee: ~p\n", [Cmd0]), - EpochID = element(2, Cmd0), % by common convention - {Rs, NewS} = do_pb_ll_request2(EpochID, Cmd0, S), + %% All remaining must have NSVersion, NS, & EpochID at next pos + NSVersion = element(2, Cmd0), + NS = element(3, Cmd0), + EpochID = element(4, Cmd0), + {Rs, NewS} = do_pb_ll_request2(NSVersion, NS, EpochID, Cmd0, S), {RqID, Cmd0, Rs, NewS} end, {machi_pb_translate:to_pb_response(ReqID, Cmd, 
Result), S2}. -do_pb_ll_request2(EpochID, CMD, S) -> +do_pb_ll_request2(NSVersion, NS, EpochID, CMD, S) -> {Wedged_p, CurrentEpochID} = lookup_epoch(S), - %% io:format(user, "{Wedged_p, CurrentEpochID}: ~w~n", [{Wedged_p, CurrentEpochID}]), - if Wedged_p == true -> + if not is_tuple(EpochID) orelse tuple_size(EpochID) /= 2 -> + exit({bad_epoch_id, EpochID, for, CMD}); + Wedged_p == true -> {{error, wedged}, S#state{epoch_id=CurrentEpochID}}; - is_tuple(EpochID) - andalso EpochID /= CurrentEpochID -> + EpochID /= CurrentEpochID -> {Epoch, _} = EpochID, {CurrentEpoch, _} = CurrentEpochID, @@ -259,30 +259,20 @@ do_pb_ll_request2b(CMD, S) -> do_pb_ll_request3(CMD, S). %% Witness status does not matter below. -do_pb_ll_request3({low_echo, _BogusEpochID, Msg}, S) -> +do_pb_ll_request3({low_echo, Msg}, S) -> {Msg, S}; -do_pb_ll_request3({low_auth, _BogusEpochID, _User, _Pass}, S) -> +do_pb_ll_request3({low_auth, _User, _Pass}, S) -> {-6, S}; -do_pb_ll_request3({low_wedge_status, _EpochID}, S) -> +do_pb_ll_request3({low_wedge_status}, S) -> {do_server_wedge_status(S), S}; do_pb_ll_request3({low_proj, PCMD}, S) -> {do_server_proj_request(PCMD, S), S}; %% Witness status *matters* below -do_pb_ll_request3({low_append_chunk, NSVersion, NS, NSLocator, EpochID, +do_pb_ll_request3({low_append_chunk, NSVersion, NS, EpochID, NSLocator, Prefix, Chunk, CSum_tag, CSum, Opts}, #state{witness=false}=S) -> - %% io:format(user, " - %% append_chunk namespace_version=~p - %% namespace=~p - %% locator=~p - %% epoch_id=~p - %% prefix=~p - %% chunk=~p - %% csum={~p,~p} - %% opts=~p\n", - %% [NSVersion, NS, NSLocator, EpochID, Prefix, Chunk, CSum_tag, CSum, Opts]), NSInfo = #ns_info{version=NSVersion, name=NS, locator=NSLocator}, {do_server_append_chunk(NSInfo, EpochID, Prefix, Chunk, CSum_tag, CSum, @@ -297,7 +287,7 @@ do_pb_ll_request3({low_read_chunk, _NSVersion, _NS, _EpochID, File, Offset, Size do_pb_ll_request3({low_trim_chunk, _NSVersion, _NS, _EpochID, File, Offset, Size, TriggerGC}, #state{witness=false}=S) ->
{do_server_trim_chunk(File, Offset, Size, TriggerGC, S), S}; -do_pb_ll_request3({low_checksum_list, _EpochID, File}, +do_pb_ll_request3({low_checksum_list, File}, #state{witness=false}=S) -> {do_server_checksum_listing(File, S), S}; do_pb_ll_request3({low_list_files, _EpochID}, diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 6d5ab39..54690b1 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -45,11 +45,11 @@ from_pb_request(#mpb_ll_request{ req_id=ReqID, echo=#mpb_echoreq{message=Msg}}) -> - {ReqID, {low_echo, undefined, Msg}}; + {ReqID, {low_skip_wedge, {low_echo, Msg}}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, auth=#mpb_authreq{user=User, password=Pass}}) -> - {ReqID, {low_auth, undefined, User, Pass}}; + {ReqID, {low_skip_wedge, {low_auth, User, Pass}}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, append_chunk=IR=#mpb_ll_appendchunkreq{ @@ -63,7 +63,9 @@ from_pb_request(#mpb_ll_request{ EpochID = conv_to_epoch_id(PB_EpochID), CSum_tag = conv_to_csum_tag(CSum_type), Opts = conv_to_append_opts(IR), - {ReqID, {low_append_chunk, NSVersion, NS, NSLocator, EpochID, + %% NOTE: The tuple position of NSLocator is a bit odd, because EpochID + %% _must_ be in the 4th position (as NSV & NS must be in 2nd & 3rd). 
+ {ReqID, {low_append_chunk, NSVersion, NS, EpochID, NSLocator, Prefix, Chunk, CSum_tag, CSum, Opts}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, @@ -112,34 +114,32 @@ from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{ req_id=ReqID, checksum_list=#mpb_ll_checksumlistreq{ - epoch_id=PB_EpochID, file=File}}) -> - EpochID = conv_to_epoch_id(PB_EpochID), - {ReqID, {low_checksum_list, EpochID, File}}; + {ReqID, {low_skip_wedge, {low_checksum_list, File}}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, list_files=#mpb_ll_listfilesreq{ epoch_id=PB_EpochID}}) -> EpochID = conv_to_epoch_id(PB_EpochID), - {ReqID, {low_list_files, EpochID}}; + {ReqID, {low_skip_wedge, {low_list_files, EpochID}}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, wedge_status=#mpb_ll_wedgestatusreq{}}) -> - {ReqID, {low_wedge_status, undefined}}; + {ReqID, {low_skip_wedge, {low_wedge_status}}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, delete_migration=#mpb_ll_deletemigrationreq{ epoch_id=PB_EpochID, file=File}}) -> EpochID = conv_to_epoch_id(PB_EpochID), - {ReqID, {low_delete_migration, EpochID, File}}; + {ReqID, {low_skip_wedge, {low_delete_migration, EpochID, File}}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, trunc_hack=#mpb_ll_trunchackreq{ epoch_id=PB_EpochID, file=File}}) -> EpochID = conv_to_epoch_id(PB_EpochID), - {ReqID, {low_trunc_hack, EpochID, File}}; + {ReqID, {low_skip_wedge, {low_trunc_hack, EpochID, File}}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, proj_gl=#mpb_ll_getlatestepochidreq{type=ProjType}}) -> @@ -390,14 +390,16 @@ from_pb_response(#mpb_ll_response{ %% TODO: move the #mbp_* record making code from %% machi_pb_high_client:do_send_sync() clauses into to_pb_request(). 
-to_pb_request(ReqID, {low_echo, _BogusEpochID, Msg}) -> +to_pb_request(ReqID, {low_skip_wedge, {low_echo, Msg}}) -> #mpb_ll_request{ req_id=ReqID, do_not_alter=2, echo=#mpb_echoreq{message=Msg}}; -to_pb_request(ReqID, {low_auth, _BogusEpochID, User, Pass}) -> +to_pb_request(ReqID, {low_skip_wedge, {low_auth, User, Pass}}) -> #mpb_ll_request{req_id=ReqID, do_not_alter=2, auth=#mpb_authreq{user=User, password=Pass}}; -to_pb_request(ReqID, {low_append_chunk, NSVersion, NS, NSLocator, EpochID, +%% NOTE: The tuple position of NSLocator is a bit odd, because EpochID +%% _must_ be in the 4th position (as NSV & NS must be in 2nd & 3rd). +to_pb_request(ReqID, {low_append_chunk, NSVersion, NS, EpochID, NSLocator, Prefix, Chunk, CSum_tag, CSum, Opts}) -> PB_EpochID = conv_from_epoch_id(EpochID), CSum_type = conv_from_csum_tag(CSum_tag), @@ -457,26 +459,24 @@ to_pb_request(ReqID, {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size offset=Offset, size=Size, trigger_gc=TriggerGC}}; -to_pb_request(ReqID, {low_checksum_list, EpochID, File}) -> - PB_EpochID = conv_from_epoch_id(EpochID), +to_pb_request(ReqID, {low_skip_wedge, {low_checksum_list, File}}) -> #mpb_ll_request{req_id=ReqID, do_not_alter=2, checksum_list=#mpb_ll_checksumlistreq{ - epoch_id=PB_EpochID, file=File}}; -to_pb_request(ReqID, {low_list_files, EpochID}) -> +to_pb_request(ReqID, {low_skip_wedge, {low_list_files, EpochID}}) -> PB_EpochID = conv_from_epoch_id(EpochID), #mpb_ll_request{req_id=ReqID, do_not_alter=2, list_files=#mpb_ll_listfilesreq{epoch_id=PB_EpochID}}; -to_pb_request(ReqID, {low_wedge_status, _BogusEpochID}) -> +to_pb_request(ReqID, {low_skip_wedge, {low_wedge_status}}) -> #mpb_ll_request{req_id=ReqID, do_not_alter=2, wedge_status=#mpb_ll_wedgestatusreq{}}; -to_pb_request(ReqID, {low_delete_migration, EpochID, File}) -> +to_pb_request(ReqID, {low_skip_wedge, {low_delete_migration, EpochID, File}}) -> PB_EpochID = conv_from_epoch_id(EpochID), #mpb_ll_request{req_id=ReqID, do_not_alter=2, 
delete_migration=#mpb_ll_deletemigrationreq{ epoch_id=PB_EpochID, file=File}}; -to_pb_request(ReqID, {low_trunc_hack, EpochID, File}) -> +to_pb_request(ReqID, {low_skip_wedge, {low_trunc_hack, EpochID, File}}) -> PB_EpochID = conv_from_epoch_id(EpochID), #mpb_ll_request{req_id=ReqID, do_not_alter=2, trunc_hack=#mpb_ll_trunchackreq{ @@ -512,15 +512,15 @@ to_pb_response(_ReqID, _, async_no_response=X) -> X; to_pb_response(ReqID, _, {low_error, ErrCode, ErrMsg}) -> make_ll_error_resp(ReqID, ErrCode, ErrMsg); -to_pb_response(ReqID, {low_echo, _BogusEpochID, _Msg}, Resp) -> +to_pb_response(ReqID, {low_skip_wedge, {low_echo, _Msg}}, Resp) -> #mpb_ll_response{ req_id=ReqID, echo=#mpb_echoresp{message=Resp}}; -to_pb_response(ReqID, {low_auth, _, _, _}, __TODO_Resp) -> +to_pb_response(ReqID, {low_skip_wedge, {low_auth, _, _}}, __TODO_Resp) -> #mpb_ll_response{req_id=ReqID, generic=#mpb_errorresp{code=1, msg="AUTH not implemented"}}; -to_pb_response(ReqID, {low_append_chunk, _NSV, _NS, _NSL, _EID, _Pfx, _Ch, _CST, _CS, _O}, Resp)-> +to_pb_response(ReqID, {low_append_chunk, _NSV, _NS, _EID, _NSL, _Pfx, _Ch, _CST, _CS, _O}, Resp)-> case Resp of {ok, {Offset, Size, File}} -> Where = #mpb_chunkpos{offset=Offset, @@ -579,7 +579,7 @@ to_pb_response(ReqID, {low_trim_chunk, _, _, _, _, _, _, _}, Resp) -> _Else -> make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) end; -to_pb_response(ReqID, {low_checksum_list, _EpochID, _File}, Resp) -> +to_pb_response(ReqID, {low_skip_wedge, {low_checksum_list, _File}}, Resp) -> case Resp of {ok, Chunk} -> #mpb_ll_response{req_id=ReqID, @@ -592,7 +592,7 @@ to_pb_response(ReqID, {low_checksum_list, _EpochID, _File}, Resp) -> _Else -> make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) end; -to_pb_response(ReqID, {low_list_files, _EpochID}, Resp) -> +to_pb_response(ReqID, {low_skip_wedge, {low_list_files, _EpochID}}, Resp) -> case Resp of {ok, FileInfo} -> PB_Files = [#mpb_fileinfo{file_size=Size, file_name=Name} || @@
-607,7 +607,7 @@ to_pb_response(ReqID, {low_list_files, _EpochID}, Resp) -> _Else -> make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) end; -to_pb_response(ReqID, {low_wedge_status, _BogusEpochID}, Resp) -> +to_pb_response(ReqID, {low_skip_wedge, {low_wedge_status}}, Resp) -> case Resp of {error, _}=Error -> Status = conv_from_status(Error), @@ -622,11 +622,11 @@ to_pb_response(ReqID, {low_wedge_status, _BogusEpochID}, Resp) -> epoch_id=PB_EpochID, wedged_flag=PB_Wedged}} end; -to_pb_response(ReqID, {low_delete_migration, _EID, _Fl}, Resp)-> +to_pb_response(ReqID, {low_skip_wedge, {low_delete_migration, _EID, _Fl}}, Resp)-> Status = conv_from_status(Resp), #mpb_ll_response{req_id=ReqID, delete_migration=#mpb_ll_deletemigrationresp{status=Status}}; -to_pb_response(ReqID, {low_trunc_hack, _EID, _Fl}, Resp)-> +to_pb_response(ReqID, {low_skip_wedge, {low_trunc_hack, _EID, _Fl}}, Resp)-> Status = conv_from_status(Resp), #mpb_ll_response{req_id=ReqID, trunc_hack=#mpb_ll_trunchackresp{status=Status}}; diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl index 99c277a..62242e4 100644 --- a/src/machi_proxy_flu1_client.erl +++ b/src/machi_proxy_flu1_client.erl @@ -59,7 +59,7 @@ %% File API append_chunk/6, append_chunk/8, read_chunk/7, read_chunk/8, - checksum_list/3, checksum_list/4, + checksum_list/2, checksum_list/3, list_files/2, list_files/3, wedge_status/1, wedge_status/2, @@ -129,13 +129,13 @@ read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, Opts, Timeout) -> %% @doc Fetch the list of chunk checksums for `File'. -checksum_list(PidSpec, EpochID, File) -> - checksum_list(PidSpec, EpochID, File, infinity). +checksum_list(PidSpec, File) -> + checksum_list(PidSpec, File, infinity). %% @doc Fetch the list of chunk checksums for `File'. 
-checksum_list(PidSpec, EpochID, File, Timeout) -> - gen_server:call(PidSpec, {req, {checksum_list, EpochID, File}}, +checksum_list(PidSpec, File, Timeout) -> + gen_server:call(PidSpec, {req, {checksum_list, File}}, Timeout). %% @doc Fetch the list of all files on the remote FLU. @@ -388,9 +388,9 @@ make_req_fun({write_chunk, NSInfo, EpochID, File, Offset, Chunk}, make_req_fun({trim_chunk, NSInfo, EpochID, File, Offset, Size}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:trim_chunk(Sock, NSInfo, EpochID, File, Offset, Size) end; -make_req_fun({checksum_list, EpochID, File}, +make_req_fun({checksum_list, File}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> - fun() -> Mod:checksum_list(Sock, EpochID, File) end; + fun() -> Mod:checksum_list(Sock, File) end; make_req_fun({list_files, EpochID}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:list_files(Sock, EpochID) end; diff --git a/src/machi_yessir_client.erl b/src/machi_yessir_client.erl index b26298a..8721824 100644 --- a/src/machi_yessir_client.erl +++ b/src/machi_yessir_client.erl @@ -32,7 +32,7 @@ append_chunk/4, append_chunk/5, append_chunk_extra/5, append_chunk_extra/6, read_chunk/5, read_chunk/6, - checksum_list/3, checksum_list/4, + checksum_list/2, checksum_list/3, list_files/2, list_files/3, wedge_status/1, wedge_status/2, @@ -175,7 +175,7 @@ read_chunk(_Host, _TcpPort, EpochID, File, Offset, Size) %% @doc Fetch the list of chunk checksums for `File'. -checksum_list(#yessir{name=Name,chunk_size=ChunkSize}, _EpochID, File) -> +checksum_list(#yessir{name=Name,chunk_size=ChunkSize}, File) -> case get({Name,offset,File}) of undefined -> {error, no_such_file}; @@ -189,10 +189,10 @@ checksum_list(#yessir{name=Name,chunk_size=ChunkSize}, _EpochID, File) -> %% @doc Fetch the list of chunk checksums for `File'. 
-checksum_list(_Host, _TcpPort, EpochID, File) -> +checksum_list(_Host, _TcpPort, File) -> Sock = connect(#p_srvr{proto_mod=?MODULE}), try - checksum_list(Sock, EpochID, File) + checksum_list(Sock, File) after disconnect(Sock) end. diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl index 00b66e9..8647901 100644 --- a/test/machi_flu1_test.erl +++ b/test/machi_flu1_test.erl @@ -100,11 +100,8 @@ flu_smoke_test() -> try Msg = "Hello, world!", Msg = ?FLU_C:echo(Host, TcpPort, Msg), - {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, - ?DUMMY_PV1_EPOCH, - "does-not-exist"), - {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, - ?DUMMY_PV1_EPOCH, BadFile), + {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort,"does-not-exist"), + {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, BadFile), {ok, []} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH), {ok, {false, _}} = ?FLU_C:wedge_status(Host, TcpPort), @@ -116,8 +113,7 @@ flu_smoke_test() -> {ok, {[{_, Off1, Chunk1, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File1, Off1, Len1, []), - {ok, KludgeBin} = ?FLU_C:checksum_list(Host, TcpPort, - ?DUMMY_PV1_EPOCH, File1), + {ok, KludgeBin} = ?FLU_C:checksum_list(Host, TcpPort, File1), true = is_binary(KludgeBin), {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, @@ -278,8 +274,7 @@ witness_test() -> File = <<"foofile">>, {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, EpochID1, File, 9999, 9999, []), - {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, EpochID1, - File), + {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, File), {error, bad_arg} = ?FLU_C:list_files(Host, TcpPort, EpochID1), {ok, {false, EpochID1}} = ?FLU_C:wedge_status(Host, TcpPort), {ok, _} = ?FLU_C:get_latest_epochid(Host, TcpPort, public), diff --git a/test/machi_flu_psup_test.erl b/test/machi_flu_psup_test.erl index 01ec26c..ff240de 100644 --- a/test/machi_flu_psup_test.erl +++ 
b/test/machi_flu_psup_test.erl @@ -150,8 +150,8 @@ partial_stop_restart2() -> {error, wedged} = machi_flu1_client:read_chunk( Addr_a, TcpPort_a, NSInfo, ?DUMMY_PV1_EPOCH, <<>>, 99999999, 1, []), - {error, wedged} = machi_flu1_client:checksum_list( - Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH, <<>>), + {error, bad_arg} = machi_flu1_client:checksum_list( + Addr_a, TcpPort_a, <<>>), %% list_files() is permitted despite wedged status {ok, _} = machi_flu1_client:list_files( Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH), diff --git a/test/machi_proxy_flu1_client_test.erl b/test/machi_proxy_flu1_client_test.erl index 1f32ab6..9157fd8 100644 --- a/test/machi_proxy_flu1_client_test.erl +++ b/test/machi_proxy_flu1_client_test.erl @@ -87,7 +87,7 @@ io:format(user, "\nTODO: fix write_chunk() call below @ ~s LINE ~w\n", [?MODULE, %% Alright, now for the rest of the API, whee BadFile = <<"no-such-file">>, - {error, bad_arg} = ?MUT:checksum_list(Prox1, FakeEpoch, BadFile), + {error, bad_arg} = ?MUT:checksum_list(Prox1, BadFile), {ok, [_|_]} = ?MUT:list_files(Prox1, FakeEpoch), {ok, {false, _}} = ?MUT:wedge_status(Prox1), {ok, {0, _SomeCSum}} = ?MUT:get_latest_epochid(Prox1, public), @@ -262,11 +262,11 @@ flu_restart_test2() -> File1, Off1, Size1, []) end, fun(run) -> {ok, KludgeBin} = - ?MUT:checksum_list(Prox1, FakeEpoch, File1), + ?MUT:checksum_list(Prox1, File1), true = is_binary(KludgeBin), ok; (line) -> io:format("line ~p, ", [?LINE]); - (stop) -> ?MUT:checksum_list(Prox1, FakeEpoch, File1) + (stop) -> ?MUT:checksum_list(Prox1, File1) end, fun(run) -> {ok, _} = ?MUT:list_files(Prox1, FakeEpoch), -- 2.45.2 From 76ae4247cd1f16c9d2345c90f47075f5e63484f5 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 29 Dec 2015 18:02:56 +0900 Subject: [PATCH 08/53] Fix cut-and-paste-o --- src/machi_pb_translate.erl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 54690b1..3e9821d 100644 --- 
a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -575,7 +575,7 @@ to_pb_response(ReqID, {low_trim_chunk, _, _, _, _, _, _, _}, Resp) -> {error, _}=Error -> Status = conv_from_status(Error), #mpb_ll_response{req_id=ReqID, - read_chunk=#mpb_ll_trimchunkresp{status=Status}}; + trim_chunk=#mpb_ll_trimchunkresp{status=Status}}; _Else -> make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) end; -- 2.45.2 From 3c6f1be5d053bc5cd7a6793c75e15d4e34671337 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 29 Dec 2015 18:47:08 +0900 Subject: [PATCH 09/53] Change read_chunk options to use new #read_opts{} --- include/machi.hrl | 6 ++++++ src/machi_admin_util.erl | 2 +- src/machi_basho_bench_driver.erl | 2 +- src/machi_chain_repair.erl | 2 +- src/machi_cr_client.erl | 4 ++-- src/machi_dt.erl | 6 +++++- src/machi_file_proxy.erl | 16 +++++++-------- src/machi_flu1_client.erl | 11 ++++++----- src/machi_pb_high_client.erl | 21 ++++++++++---------- src/machi_pb_translate.erl | 21 ++++++++++---------- src/machi_proxy_flu1_client.erl | 2 +- src/machi_util.erl | 8 +++++++- test/machi_ap_repair_eqc.erl | 2 +- test/machi_cr_client_test.erl | 28 +++++++++++++-------------- test/machi_file_proxy_test.erl | 4 ++-- test/machi_flu1_test.erl | 16 ++++++++------- test/machi_flu_psup_test.erl | 2 +- test/machi_pb_high_client_test.erl | 6 +++--- test/machi_proxy_flu1_client_test.erl | 8 ++++---- 19 files changed, 93 insertions(+), 74 deletions(-) diff --git a/include/machi.hrl b/include/machi.hrl index 6b35205..51eda4f 100644 --- a/include/machi.hrl +++ b/include/machi.hrl @@ -55,3 +55,9 @@ preferred_file_name :: 'undefined' | machi_dt:file_name_s(), flag_fail_preferred = false :: boolean() }). + +-record(read_opts, { + no_checksum = false :: boolean(), + no_chunk = false :: boolean(), + needs_trimmed = false :: boolean() + }). 
diff --git a/src/machi_admin_util.erl b/src/machi_admin_util.erl index 19264cb..35eeb7b 100644 --- a/src/machi_admin_util.erl +++ b/src/machi_admin_util.erl @@ -94,7 +94,7 @@ verify_file_checksums_remote2(Sock1, EpochID, File) -> io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), ReadChunk = fun(File_name, Offset, Size) -> ?FLU_C:read_chunk(Sock1, NSInfo, EpochID, - File_name, Offset, Size, []) + File_name, Offset, Size, undefined) end, verify_file_checksums_common(Sock1, EpochID, File, ReadChunk). diff --git a/src/machi_basho_bench_driver.erl b/src/machi_basho_bench_driver.erl index 4d36328..4adc052 100644 --- a/src/machi_basho_bench_driver.erl +++ b/src/machi_basho_bench_driver.erl @@ -112,7 +112,7 @@ run(read, KeyGen, _ValueGen, #m{conn=Conn, max_key=MaxKey}=S) -> Idx = KeyGen() rem MaxKey, %% {File, Offset, Size, _CSum} = ets:lookup_element(?ETS_TAB, Idx, 2), {File, Offset, Size} = ets:lookup_element(?ETS_TAB, Idx, 2), - case machi_cr_client:read_chunk(Conn, File, Offset, Size, [], ?THE_TIMEOUT) of + case machi_cr_client:read_chunk(Conn, File, Offset, Size, undefined, ?THE_TIMEOUT) of {ok, _Chunk} -> {ok, S}; {error, _}=Err -> diff --git a/src/machi_chain_repair.erl b/src/machi_chain_repair.erl index 040c1da..cb34da1 100644 --- a/src/machi_chain_repair.erl +++ b/src/machi_chain_repair.erl @@ -336,7 +336,7 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) -> io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), {ok, {[{_, Offset, Chunk, _}], _}} = machi_proxy_flu1_client:read_chunk( - SrcP, NSInfo, EpochID, File, Offset, Size, [], + SrcP, NSInfo, EpochID, File, Offset, Size, undefined, ?SHORT_TIMEOUT), _T2 = os:timestamp(), <<_Tag:1/binary, CSum/binary>> = TaggedCSum, diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index 173ed77..9edf30f 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -746,7 +746,7 @@ 
read_repair2(cp_mode=ConsistencyMode, %% TODO WTF was I thinking here??.... Tail = lists:last(readonly_flus(P)), case ?FLU_PC:read_chunk(orddict:fetch(Tail, PD), NSInfo, EpochID, - File, Offset, Size, [], ?TIMEOUT) of + File, Offset, Size, undefined, ?TIMEOUT) of {ok, Chunks} when is_list(Chunks) -> %% TODO: change to {Chunks, Trimmed} and have them repaired ToRepair = mutation_flus(P) -- [Tail], @@ -1039,7 +1039,7 @@ try_to_find_chunk(Eligible, NSInfo, File, Offset, Size, Proxy = orddict:fetch(FLU, PD), case ?FLU_PC:read_chunk(Proxy, NSInfo, EpochID, %% TODO Trimmed is required here - File, Offset, Size, []) of + File, Offset, Size, undefined) of {ok, {_Chunks, _} = ChunksAndTrimmed} -> {FLU, {ok, ChunksAndTrimmed}}; Else -> diff --git a/src/machi_dt.erl b/src/machi_dt.erl index 8becb14..fc38eaf 100644 --- a/src/machi_dt.erl +++ b/src/machi_dt.erl @@ -49,6 +49,8 @@ -type ns_info() :: #ns_info{}. -type projection() :: #projection_v1{}. -type projection_type() :: 'public' | 'private'. +-type read_opts() :: #read_opts{}. +-type read_opts_x() :: 'undefined' | 'noopt' | 'none' | #read_opts{}. %% Tags that stand for how that checksum was generated. See %% machi_util:make_tagged_csum/{1,2} for further documentation and @@ -82,6 +84,8 @@ namespace_version/0, ns_info/0, projection/0, - projection_type/0 + projection_type/0, + read_opts/0, + read_opts_x/0 ]). diff --git a/src/machi_file_proxy.erl b/src/machi_file_proxy.erl index cae292c..bc9a539 100644 --- a/src/machi_file_proxy.erl +++ b/src/machi_file_proxy.erl @@ -141,18 +141,18 @@ sync(_Pid, Type) -> Data :: binary(), Checksum :: binary()}]} | {error, Reason :: term()}. read(Pid, Offset, Length) -> - read(Pid, Offset, Length, []). + read(Pid, Offset, Length, #read_opts{}). 
-spec read(Pid :: pid(), Offset :: non_neg_integer(), Length :: non_neg_integer(), - [{no_checksum|no_chunk|needs_trimmed, boolean()}]) -> + machi_dt:read_opts_x()) -> {ok, [{Filename::string(), Offset :: non_neg_integer(), Data :: binary(), Checksum :: binary()}]} | {error, Reason :: term()}. -read(Pid, Offset, Length, Opts) when is_pid(Pid) andalso is_integer(Offset) andalso Offset >= 0 - andalso is_integer(Length) andalso Length > 0 - andalso is_list(Opts) -> +read(Pid, Offset, Length, #read_opts{}=Opts) + when is_pid(Pid) andalso is_integer(Offset) andalso Offset >= 0 + andalso is_integer(Length) andalso Length > 0 -> gen_server:call(Pid, {read, Offset, Length, Opts}, ?TIMEOUT); read(_Pid, Offset, Length, Opts) -> lager:warning("Bad args to read: Offset ~p, Length ~p, Options ~p", [Offset, Length, Opts]), @@ -298,15 +298,15 @@ handle_call({read, Offset, Length, Opts}, _From, }) -> %% TODO: use these options - NoChunk prevents reading from disks %% NoChecksum doesn't check checksums - NoChecksum = proplists:get_value(no_checksum, Opts, false), - NoChunk = proplists:get_value(no_chunk, Opts, false), + #read_opts{no_checksum=NoChecksum, no_chunk=NoChunk, + needs_trimmed=NeedsTrimmed} = Opts, {Resp, NewErr} = case do_read(FH, F, CsumTable, Offset, Length, NoChunk, NoChecksum) of {ok, {[], []}} -> {{error, not_written}, Err + 1}; {ok, {Chunks0, Trimmed0}} -> Chunks = slice_both_side(Chunks0, Offset, Offset+Length), - Trimmed = case proplists:get_value(needs_trimmed, Opts, false) of + Trimmed = case NeedsTrimmed of true -> Trimmed0; false -> [] end, diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index 74a6323..3c28ce7 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -150,28 +150,29 @@ append_chunk(Host, TcpPort, NSInfo0, EpochID, %% @doc Read a chunk of data of size `Size' from `File' at `Offset'. 
-spec read_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(), - proplists:proplist()) -> + machi_dt:read_opts_x()) -> {ok, machi_dt:chunk_s()} | {error, machi_dt:error_general() | 'not_written' | 'partial_read'} | {error, term()}. -read_chunk(Sock, NSInfo0, EpochID, File, Offset, Size, Opts) +read_chunk(Sock, NSInfo0, EpochID, File, Offset, Size, Opts0) when Offset >= ?MINIMUM_OFFSET, Size >= 0 -> NSInfo = machi_util:ns_info_default(NSInfo0), + Opts = machi_util:read_opts_default(Opts0), read_chunk2(Sock, NSInfo, EpochID, File, Offset, Size, Opts). %% @doc Read a chunk of data of size `Size' from `File' at `Offset'. -spec read_chunk(machi_dt:inet_host(), machi_dt:inet_port(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(), - proplists:proplist()) -> + machi_dt:read_opts_x()) -> {ok, machi_dt:chunk_s()} | {error, machi_dt:error_general() | 'not_written' | 'partial_read'} | {error, term()}. -read_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Size, Opts) +read_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Size, Opts0) when Offset >= ?MINIMUM_OFFSET, Size >= 0 -> Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), NSInfo = machi_util:ns_info_default(NSInfo0), -io:format(user, "dbgyo ~s LINE ~p NSInfo0 ~p NSInfo ~p\n", [?MODULE, ?LINE, NSInfo0, NSInfo]), timer:sleep(333), + Opts = machi_util:read_opts_default(Opts0), try read_chunk2(Sock, NSInfo, EpochID, File, Offset, Size, Opts) after diff --git a/src/machi_pb_high_client.erl b/src/machi_pb_high_client.erl index 9c69358..37c513e 100644 --- a/src/machi_pb_high_client.erl +++ b/src/machi_pb_high_client.erl @@ -131,21 +131,22 @@ write_chunk(PidSpec, File, Offset, Chunk, CSum, Timeout) -> %% {Chunks, TrimmedChunks}}' for live file while it returns `{error, %% trimmed}' if all bytes of the file was trimmed. 
-spec read_chunk(pid(), File::string(), machi_dt:file_offset(), machi_dt:chunk_size(), - [{flag_no_checksum | flag_no_chunk | needs_trimmed, boolean()}]) -> + machi_dt:read_opts_x()) -> {ok, {Chunks::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size(), binary()}], Trimmed::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size()}]}} | {error, machi_client_error_reason()}. -read_chunk(PidSpec, File, Offset, Size, Options) -> - read_chunk(PidSpec, File, Offset, Size, Options, ?DEFAULT_TIMEOUT). +read_chunk(PidSpec, File, Offset, Size, Opts) -> + read_chunk(PidSpec, File, Offset, Size, Opts, ?DEFAULT_TIMEOUT). -spec read_chunk(pid(), File::string(), machi_dt:file_offset(), machi_dt:chunk_size(), - [{flag_no_checksum | flag_no_chunk | needs_trimmed, boolean()}], + machi_dt:read_opts_x(), Timeout::non_neg_integer()) -> {ok, {Chunks::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size(), binary()}], Trimmed::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size()}]}} | {error, machi_client_error_reason()}. -read_chunk(PidSpec, File, Offset, Size, Options, Timeout) -> - send_sync(PidSpec, {read_chunk, File, Offset, Size, Options}, Timeout). +read_chunk(PidSpec, File, Offset, Size, Opts0, Timeout) -> + Opts = machi_util:read_opts_default(Opts0), + send_sync(PidSpec, {read_chunk, File, Offset, Size, Opts}, Timeout). %% @doc Trims arbitrary binary range of any file. If a specified range %% has any byte trimmed, it fails and returns `{error, trimmed}'. 
@@ -341,13 +342,13 @@ do_send_sync2({write_chunk, File, Offset, Chunk, CSum}, Res = {bummer, {X, Y, erlang:get_stacktrace()}}, {Res, S#state{count=Count+1}} end; -do_send_sync2({read_chunk, File, Offset, Size, Options}, +do_send_sync2({read_chunk, File, Offset, Size, Opts}, #state{sock=Sock, sock_id=Index, count=Count}=S) -> try ReqID = <>, - FlagNoChecksum = proplists:get_value(no_checksum, Options, false), - FlagNoChunk = proplists:get_value(no_chunk, Options, false), - NeedsTrimmed = proplists:get_value(needs_trimmed, Options, false), + #read_opts{no_checksum=FlagNoChecksum, + no_chunk=FlagNoChunk, + needs_trimmed=NeedsTrimmed} = Opts, Req = #mpb_readchunkreq{chunk_pos=#mpb_chunkpos{file_name=File, offset=Offset, chunk_size=Size}, diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 3e9821d..0ea1281 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -91,9 +91,9 @@ from_pb_request(#mpb_ll_request{ flag_no_chunk=PB_GetNoChunk, flag_needs_trimmed=PB_NeedsTrimmed}}) -> EpochID = conv_to_epoch_id(PB_EpochID), - Opts = [{no_checksum, conv_to_boolean(PB_GetNoChecksum)}, - {no_chunk, conv_to_boolean(PB_GetNoChunk)}, - {needs_trimmed, conv_to_boolean(PB_NeedsTrimmed)}], + Opts = #read_opts{no_checksum=conv_to_boolean(PB_GetNoChecksum), + no_chunk=conv_to_boolean(PB_GetNoChunk), + needs_trimmed=conv_to_boolean(PB_NeedsTrimmed)}, #mpb_chunkpos{file_name=File, offset=Offset, chunk_size=Size} = ChunkPos, @@ -203,11 +203,10 @@ from_pb_request(#mpb_request{req_id=ReqID, flag_no_checksum=FlagNoChecksum, flag_no_chunk=FlagNoChunk, flag_needs_trimmed=NeedsTrimmed} = IR, - %% I want MAPS - Options = [{no_checksum, machi_util:int2bool(FlagNoChecksum)}, - {no_chunk, machi_util:int2bool(FlagNoChunk)}, - {needs_trimmed, machi_util:int2bool(NeedsTrimmed)}], - {ReqID, {high_read_chunk, File, Offset, Size, Options}}; + Opts = #read_opts{no_checksum=machi_util:int2bool(FlagNoChecksum), + no_chunk=machi_util:int2bool(FlagNoChunk), + 
needs_trimmed=machi_util:int2bool(NeedsTrimmed)}, + {ReqID, {high_read_chunk, File, Offset, Size, Opts}}; from_pb_request(#mpb_request{req_id=ReqID, trim_chunk=IR=#mpb_trimchunkreq{}}) -> #mpb_trimchunkreq{chunk_pos=#mpb_chunkpos{file_name=File, @@ -432,9 +431,9 @@ to_pb_request(ReqID, {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chu csum=PB_CSum}}}; to_pb_request(ReqID, {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size, Opts}) -> PB_EpochID = conv_from_epoch_id(EpochID), - FNChecksum = proplists:get_value(no_checksum, Opts, false), - FNChunk = proplists:get_value(no_chunk, Opts, false), - NeedsTrimmed = proplists:get_value(needs_trimmed, Opts, false), + #read_opts{no_checksum=FNChecksum, + no_chunk=FNChunk, + needs_trimmed=NeedsTrimmed} = Opts, #mpb_ll_request{ req_id=ReqID, do_not_alter=2, read_chunk=#mpb_ll_readchunkreq{ diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl index 62242e4..2bb5b20 100644 --- a/src/machi_proxy_flu1_client.erl +++ b/src/machi_proxy_flu1_client.erl @@ -289,7 +289,7 @@ write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, Timeout) -> Size = byte_size(Chunk), NSInfo = undefined, io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), - case read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, [], Timeout) of + case read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, undefined, Timeout) of {ok, {[{File, Offset, Chunk2, _}], []}} when Chunk2 == Chunk -> %% See equivalent comment inside write_projection(). ok; diff --git a/src/machi_util.erl b/src/machi_util.erl index d4704d9..8173898 100644 --- a/src/machi_util.erl +++ b/src/machi_util.erl @@ -50,6 +50,7 @@ wait_for_death/2, wait_for_life/2, bool2int/1, int2bool/1, + read_opts_default/1, ns_info_default/1 ]). @@ -445,9 +446,14 @@ bool2int(false) -> 0. int2bool(0) -> false; int2bool(I) when is_integer(I) -> true. 
+read_opts_default(#read_opts{}=NSInfo) -> + NSInfo; +read_opts_default(A) when is_atom(A) -> + #read_opts{}. + ns_info_default(#ns_info{}=NSInfo) -> NSInfo; -ns_info_default(undefined) -> +ns_info_default(A) when is_atom(A) -> #ns_info{}. diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl index 6037ec6..84e37ad 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -457,7 +457,7 @@ assert_chunk(C, {Off, Len, FileName}=Key, Bin) -> %% TODO : Use CSum instead of binary (after disuccsion about CSum is calmed down?) NSInfo = undefined, io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), - case (catch machi_cr_client:read_chunk(C, NSInfo, FileName, Off, Len, [], sec(3))) of + case (catch machi_cr_client:read_chunk(C, NSInfo, FileName, Off, Len, undefined, sec(3))) of {ok, {[{FileNameStr, Off, Bin, _}], []}} -> ok; {ok, Got} -> diff --git a/test/machi_cr_client_test.erl b/test/machi_cr_client_test.erl index 440daab..4228913 100644 --- a/test/machi_cr_client_test.erl +++ b/test/machi_cr_client_test.erl @@ -123,13 +123,13 @@ smoke_test2() -> {error, bad_checksum} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), {ok, {[{_, Off1, Chunk1, _}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, []), + machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, undefined), {ok, PPP} = machi_flu1_client:read_latest_projection(Host, PortBase+0, private), %% Verify that the client's CR wrote to all of them. 
[{ok, {[{_, Off1, Chunk1, _}], []}} = machi_flu1_client:read_chunk( - Host, PortBase+X, NSInfo, EpochID, File1, Off1, Size1, []) || + Host, PortBase+X, NSInfo, EpochID, File1, Off1, Size1, undefined) || X <- [0,1,2] ], %% Test read repair: Manually write to head, then verify that @@ -137,18 +137,18 @@ smoke_test2() -> FooOff1 = Off1 + (1024*1024), [{error, not_written} = machi_flu1_client:read_chunk( Host, PortBase+X, NSInfo, EpochID, - File1, FooOff1, Size1, []) || X <- [0,1,2] ], + File1, FooOff1, Size1, undefined) || X <- [0,1,2] ], ok = machi_flu1_client:write_chunk(Host, PortBase+0, NSInfo, EpochID, File1, FooOff1, Chunk1), {ok, {[{_, FooOff1, Chunk1, _}], []}} = machi_flu1_client:read_chunk(Host, PortBase+0, NSInfo, EpochID, - File1, FooOff1, Size1, []), + File1, FooOff1, Size1, undefined), {ok, {[{_, FooOff1, Chunk1, _}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff1, Size1, []), + machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff1, Size1, undefined), [?assertMatch({X,{ok, {[{_, FooOff1, Chunk1, _}], []}}}, {X,machi_flu1_client:read_chunk( Host, PortBase+X, NSInfo, EpochID, - File1, FooOff1, Size1, [])}) + File1, FooOff1, Size1, undefined)}) || X <- [0,1,2] ], %% Test read repair: Manually write to middle, then same checking. 
@@ -158,18 +158,18 @@ smoke_test2() -> ok = machi_flu1_client:write_chunk(Host, PortBase+1, NSInfo, EpochID, File1, FooOff2, Chunk2), {ok, {[{_, FooOff2, Chunk2, _}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff2, Size2, []), + machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff2, Size2, undefined), [{X,{ok, {[{_, FooOff2, Chunk2, _}], []}}} = {X,machi_flu1_client:read_chunk( Host, PortBase+X, NSInfo, EpochID, - File1, FooOff2, Size2, [])} || X <- [0,1,2] ], + File1, FooOff2, Size2, undefined)} || X <- [0,1,2] ], %% Misc API smoke & minor regression checks {error, bad_arg} = machi_cr_client:read_chunk(C1, NSInfo, <<"no">>, - 999999999, 1, []), + 999999999, 1, undefined), {ok, {[{_,Off1,Chunk1,_}, {_,FooOff1,Chunk1,_}, {_,FooOff2,Chunk2,_}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, 88888888, []), + machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, 88888888, undefined), %% Checksum list return value is a primitive binary(). {ok, KludgeBin} = machi_cr_client:checksum_list(C1, File1), true = is_binary(KludgeBin), @@ -189,7 +189,7 @@ smoke_test2() -> machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk10, NoCSum, Opts1), {ok, {[{_, Off10, Chunk10, _}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File10, Off10, Size10, []), + machi_cr_client:read_chunk(C1, NSInfo, File10, Off10, Size10, undefined), [begin Offx = Off10 + (Seq * Size10), %% TODO: uncomment written/not_written enforcement is available. 
@@ -198,7 +198,7 @@ smoke_test2() -> {ok, {Offx,Size10,File10}} = machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10), {ok, {[{_, Offx, Chunk10, _}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File10, Offx, Size10, []) + machi_cr_client:read_chunk(C1, NSInfo, File10, Offx, Size10, undefined) end || Seq <- lists:seq(1, Extra10)], {ok, {Off11,Size11,File11}} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk10, NoCSum), @@ -246,7 +246,7 @@ witness_smoke_test2() -> {error, bad_checksum} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), {ok, {[{_, Off1, Chunk1, _}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, []), + machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, undefined), %% Stop 'b' and let the chain reset. ok = machi_flu_psup:stop_flu_package(b), @@ -273,7 +273,7 @@ witness_smoke_test2() -> %% Chunk1 is still readable: not affected by wedged witness head. {ok, {[{_, Off1, Chunk1, _}], []}} = - machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, []), + machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, undefined), %% But because the head is wedged, an append will fail. 
{error, partition} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum, diff --git a/test/machi_file_proxy_test.erl b/test/machi_file_proxy_test.erl index a04d880..605abe7 100644 --- a/test/machi_file_proxy_test.erl +++ b/test/machi_file_proxy_test.erl @@ -119,7 +119,7 @@ multiple_chunks_read_test_() -> ?_assertEqual(ok, machi_file_proxy:trim(Pid, 0, 1, false)), ?_assertMatch({ok, {[], [{"test", 0, 1}]}}, machi_file_proxy:read(Pid, 0, 1, - [{needs_trimmed, true}])), + #read_opts{needs_trimmed=true})), ?_assertMatch({ok, "test", _}, machi_file_proxy:append(Pid, random_binary(0, 1024))), ?_assertEqual(ok, machi_file_proxy:write(Pid, 10000, <<"fail">>)), ?_assertEqual(ok, machi_file_proxy:write(Pid, 20000, <<"fail">>)), @@ -134,7 +134,7 @@ multiple_chunks_read_test_() -> machi_file_proxy:read(Pid, 1024, 530000)), ?_assertMatch({ok, {[{"test", 1, _, _}], [{"test", 0, 1}]}}, machi_file_proxy:read(Pid, 0, 1024, - [{needs_trimmed, true}])) + #read_opts{needs_trimmed=true})) ] end}. diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl index 8647901..556f63e 100644 --- a/test/machi_flu1_test.erl +++ b/test/machi_flu1_test.erl @@ -112,7 +112,8 @@ flu_smoke_test() -> Prefix, Chunk1, NoCSum), {ok, {[{_, Off1, Chunk1, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - File1, Off1, Len1, []), + File1, Off1, Len1, + noopt), {ok, KludgeBin} = ?FLU_C:checksum_list(Host, TcpPort, File1), true = is_binary(KludgeBin), {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, @@ -121,8 +122,9 @@ flu_smoke_test() -> {ok, [{_,File1}]} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH), Len1 = size(Chunk1), {error, not_written} = ?FLU_C:read_chunk(Host, TcpPort, - NSInfo, ?DUMMY_PV1_EPOCH, - File1, Off1*983829323, Len1, []), + NSInfo, ?DUMMY_PV1_EPOCH, + File1, Off1*983829323, Len1, + noopt), %% XXX FIXME %% %% This is failing because the read extends past the end of the file. 
@@ -163,13 +165,13 @@ flu_smoke_test() -> {error, bad_arg} = ?FLU_C:write_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, BadFile, Off2, Chunk2), {ok, {[{_, Off2, Chunk2, _}], _}} = - ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File2, Off2, Len2, []), + ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File2, Off2, Len2, noopt), {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - "no!!", Off2, Len2, []), + "no!!", Off2, Len2, noopt), {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - BadFile, Off2, Len2, []), + BadFile, Off2, Len2, noopt), %% We know that File1 still exists. Pretend that we've done a %% migration and exercise the delete_migration() API. @@ -273,7 +275,7 @@ witness_test() -> Prefix, Chunk1, NoCSum), File = <<"foofile">>, {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, EpochID1, - File, 9999, 9999, []), + File, 9999, 9999, noopt), {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, File), {error, bad_arg} = ?FLU_C:list_files(Host, TcpPort, EpochID1), {ok, {false, EpochID1}} = ?FLU_C:wedge_status(Host, TcpPort), diff --git a/test/machi_flu_psup_test.erl b/test/machi_flu_psup_test.erl index ff240de..5c068af 100644 --- a/test/machi_flu_psup_test.erl +++ b/test/machi_flu_psup_test.erl @@ -149,7 +149,7 @@ partial_stop_restart2() -> {_, #p_srvr{address=Addr_a, port=TcpPort_a}} = hd(Ps), {error, wedged} = machi_flu1_client:read_chunk( Addr_a, TcpPort_a, NSInfo, ?DUMMY_PV1_EPOCH, - <<>>, 99999999, 1, []), + <<>>, 99999999, 1, undefined), {error, bad_arg} = machi_flu1_client:checksum_list( Addr_a, TcpPort_a, <<>>), %% list_files() is permitted despite wedged status diff --git a/test/machi_pb_high_client_test.erl b/test/machi_pb_high_client_test.erl index ba2f10a..468b183 100644 --- a/test/machi_pb_high_client_test.erl +++ b/test/machi_pb_high_client_test.erl @@ -80,7 +80,7 @@ smoke_test2() -> [begin File = binary_to_list(Fl), ?assertMatch({ok, {[{File, Off, Ch, _}], 
[]}}, - ?C:read_chunk(Clnt, Fl, Off, Sz, [])) + ?C:read_chunk(Clnt, Fl, Off, Sz, undefined)) end || {Ch, Fl, Off, Sz} <- Reads], {ok, KludgeBin} = ?C:checksum_list(Clnt, File1), @@ -104,7 +104,7 @@ smoke_test2() -> end || {_Ch, Fl, Off, Sz} <- Reads], [begin {ok, {[], Trimmed}} = - ?C:read_chunk(Clnt, Fl, Off, Sz, [{needs_trimmed, true}]), + ?C:read_chunk(Clnt, Fl, Off, Sz, #read_opts{needs_trimmed=true}), Filename = binary_to_list(Fl), ?assertEqual([{Filename, Off, Sz}], Trimmed) end || {_Ch, Fl, Off, Sz} <- Reads], @@ -130,7 +130,7 @@ smoke_test2() -> [begin {error, trimmed} = - ?C:read_chunk(Clnt, Fl, Off, Sz, []) + ?C:read_chunk(Clnt, Fl, Off, Sz, undefined) end || {_Ch, Fl, Off, Sz} <- Reads], ok after diff --git a/test/machi_proxy_flu1_client_test.erl b/test/machi_proxy_flu1_client_test.erl index 9157fd8..66c52f4 100644 --- a/test/machi_proxy_flu1_client_test.erl +++ b/test/machi_proxy_flu1_client_test.erl @@ -62,14 +62,14 @@ api_smoke_test() -> ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk, NoCSum), {ok, {[{_, MyOff, MyChunk, _}], []}} = - ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile, MyOff, MySize, []), + ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile, MyOff, MySize, undefined), MyChunk2 = <<"my chunk data, yeah, again">>, Opts1 = #append_opts{chunk_extra=4242}, {ok, {MyOff2,MySize2,MyFile2}} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk2, NoCSum, Opts1, infinity), {ok, {[{_, MyOff2, MyChunk2, _}], []}} = - ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile2, MyOff2, MySize2, []), + ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile2, MyOff2, MySize2, undefined), BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, {error, bad_checksum} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk, BadCSum), @@ -255,11 +255,11 @@ flu_restart_test2() -> end, fun(run) -> {ok, {[{_, Off1, Data, _}], []}} = ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, - File1, Off1, Size1, []), + File1, Off1, Size1, undefined), ok; (line) -> 
io:format("line ~p, ", [?LINE]); (stop) -> ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, - File1, Off1, Size1, []) + File1, Off1, Size1, undefined) end, fun(run) -> {ok, KludgeBin} = ?MUT:checksum_list(Prox1, File1), -- 2.45.2 From c65424569dff4580f9fbb1c89b36b450912bb6b1 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 29 Dec 2015 19:17:18 +0900 Subject: [PATCH 10/53] Use 'bool' type in PB spec where feasible --- src/machi.proto | 20 +++++++++---------- src/machi_pb_translate.erl | 35 ++++++++++++++------------------- src/machi_proxy_flu1_client.erl | 2 -- test/machi_ap_repair_eqc.erl | 4 +++- 4 files changed, 28 insertions(+), 33 deletions(-) diff --git a/src/machi.proto b/src/machi.proto index 58e6175..7c55136 100644 --- a/src/machi.proto +++ b/src/machi.proto @@ -181,7 +181,7 @@ message Mpb_AppendChunkReq { optional uint32 chunk_extra = 20; optional string preferred_file_name = 21; /* Fail the operation if our preferred file name is not available */ - optional uint32 flag_fail_preferred = 22; + optional bool flag_fail_preferred = 22 [default=false]; } message Mpb_AppendChunkResp { @@ -210,15 +210,15 @@ message Mpb_ReadChunkReq { // Use flag_no_checksum=non-zero to skip returning the chunk's checksum. // TODO: not implemented yet. - optional uint32 flag_no_checksum = 20 [default=0]; + optional bool flag_no_checksum = 20 [default=false]; // Use flag_no_chunk=non-zero to skip returning the chunk (which // only makes sense if flag_no_checksum is not set). // TODO: not implemented yet. - optional uint32 flag_no_chunk = 21 [default=0]; + optional bool flag_no_chunk = 21 [default=false]; // TODO: not implemented yet. 
- optional uint32 flag_needs_trimmed = 22 [default=0]; + optional bool flag_needs_trimmed = 22 [default=false]; } message Mpb_ReadChunkResp { @@ -401,7 +401,7 @@ message Mpb_LL_AppendChunkReq { optional uint32 chunk_extra = 20; optional string preferred_file_name = 21; /* Fail the operation if our preferred file name is not available */ - optional uint32 flag_fail_preferred = 22; + optional bool flag_fail_preferred = 22 [default=false]; } message Mpb_LL_AppendChunkResp { @@ -437,14 +437,14 @@ message Mpb_LL_ReadChunkReq { // Use flag_no_checksum=non-zero to skip returning the chunk's checksum. // TODO: not implemented yet. - optional uint32 flag_no_checksum = 20 [default=0]; + optional bool flag_no_checksum = 20 [default=false]; // Use flag_no_chunk=non-zero to skip returning the chunk (which // only makes sense if flag_checksum is not set). // TODO: not implemented yet. - optional uint32 flag_no_chunk = 21 [default=0]; + optional bool flag_no_chunk = 21 [default=false]; - optional uint32 flag_needs_trimmed = 22 [default=0]; + optional bool flag_needs_trimmed = 22 [default=false]; } message Mpb_LL_ReadChunkResp { @@ -465,7 +465,7 @@ message Mpb_LL_TrimChunkReq { required uint64 offset = 12; required uint32 size = 13; - optional uint32 trigger_gc = 20 [default=0]; + optional bool trigger_gc = 20 [default=false]; } message Mpb_LL_TrimChunkResp { @@ -506,7 +506,7 @@ message Mpb_LL_WedgeStatusReq { message Mpb_LL_WedgeStatusResp { required Mpb_GeneralStatusCode status = 1; optional Mpb_EpochID epoch_id = 2; - optional uint32 wedged_flag = 3; + optional bool wedged_flag = 3; } // Low level API: delete_migration() diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 0ea1281..b1b1ecd 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -91,9 +91,9 @@ from_pb_request(#mpb_ll_request{ flag_no_chunk=PB_GetNoChunk, flag_needs_trimmed=PB_NeedsTrimmed}}) -> EpochID = conv_to_epoch_id(PB_EpochID), - Opts = 
#read_opts{no_checksum=conv_to_boolean(PB_GetNoChecksum), - no_chunk=conv_to_boolean(PB_GetNoChunk), - needs_trimmed=conv_to_boolean(PB_NeedsTrimmed)}, + Opts = #read_opts{no_checksum=PB_GetNoChecksum, + no_chunk=PB_GetNoChunk, + needs_trimmed=PB_NeedsTrimmed}, #mpb_chunkpos{file_name=File, offset=Offset, chunk_size=Size} = ChunkPos, @@ -107,9 +107,8 @@ from_pb_request(#mpb_ll_request{ file=File, offset=Offset, size=Size, - trigger_gc=PB_TriggerGC}}) -> + trigger_gc=TriggerGC}}) -> EpochID = conv_to_epoch_id(PB_EpochID), - TriggerGC = conv_to_boolean(PB_TriggerGC), {ReqID, {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, TriggerGC}}; from_pb_request(#mpb_ll_request{ req_id=ReqID, @@ -203,9 +202,9 @@ from_pb_request(#mpb_request{req_id=ReqID, flag_no_checksum=FlagNoChecksum, flag_no_chunk=FlagNoChunk, flag_needs_trimmed=NeedsTrimmed} = IR, - Opts = #read_opts{no_checksum=machi_util:int2bool(FlagNoChecksum), - no_chunk=machi_util:int2bool(FlagNoChunk), - needs_trimmed=machi_util:int2bool(NeedsTrimmed)}, + Opts = #read_opts{no_checksum=FlagNoChecksum, + no_chunk=FlagNoChunk, + needs_trimmed=NeedsTrimmed}, {ReqID, {high_read_chunk, File, Offset, Size, Opts}}; from_pb_request(#mpb_request{req_id=ReqID, trim_chunk=IR=#mpb_trimchunkreq{}}) -> @@ -311,11 +310,8 @@ from_pb_response(#mpb_ll_response{ from_pb_response(#mpb_ll_response{ req_id=ReqID, wedge_status=#mpb_ll_wedgestatusresp{ - epoch_id=PB_EpochID, wedged_flag=PB_Wedged}}) -> + epoch_id=PB_EpochID, wedged_flag=Wedged_p}}) -> EpochID = conv_to_epoch_id(PB_EpochID), - Wedged_p = if PB_Wedged == 1 -> true; - PB_Wedged == 0 -> false - end, {ReqID, {ok, {Wedged_p, EpochID}}}; from_pb_response(#mpb_ll_response{ req_id=ReqID, @@ -444,9 +440,9 @@ to_pb_request(ReqID, {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size file_name=File, offset=Offset, chunk_size=Size}, - flag_no_checksum=machi_util:bool2int(FNChecksum), - flag_no_chunk=machi_util:bool2int(FNChunk), - 
flag_needs_trimmed=machi_util:bool2int(NeedsTrimmed)}}; + flag_no_checksum=FNChecksum, + flag_no_chunk=FNChunk, + flag_needs_trimmed=NeedsTrimmed}}; to_pb_request(ReqID, {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, TriggerGC}) -> PB_EpochID = conv_from_epoch_id(EpochID), #mpb_ll_request{req_id=ReqID, do_not_alter=2, @@ -613,13 +609,12 @@ to_pb_response(ReqID, {low_skip_wedge, {low_wedge_status}}, Resp) -> #mpb_ll_response{req_id=ReqID, wedge_status=#mpb_ll_wedgestatusresp{status=Status}}; {Wedged_p, EpochID} -> - PB_Wedged = conv_from_boolean(Wedged_p), PB_EpochID = conv_from_epoch_id(EpochID), #mpb_ll_response{req_id=ReqID, wedge_status=#mpb_ll_wedgestatusresp{ status='OK', epoch_id=PB_EpochID, - wedged_flag=PB_Wedged}} + wedged_flag=Wedged_p}} end; to_pb_response(ReqID, {low_skip_wedge, {low_delete_migration, _EID, _Fl}}, Resp)-> Status = conv_from_status(Resp), @@ -992,7 +987,7 @@ conv_from_boolean(true) -> conv_from_append_opts(#append_opts{chunk_extra=ChunkExtra, preferred_file_name=Pref, flag_fail_preferred=FailPref}) -> - {ChunkExtra, Pref, conv_from_boolean(FailPref)}. + {ChunkExtra, Pref, FailPref}. conv_to_append_opts(#mpb_appendchunkreq{ @@ -1001,14 +996,14 @@ conv_to_append_opts(#mpb_appendchunkreq{ flag_fail_preferred=FailPref}) -> #append_opts{chunk_extra=ChunkExtra, preferred_file_name=Pref, - flag_fail_preferred=conv_to_boolean(FailPref)}; + flag_fail_preferred=FailPref}; conv_to_append_opts(#mpb_ll_appendchunkreq{ chunk_extra=ChunkExtra, preferred_file_name=Pref, flag_fail_preferred=FailPref}) -> #append_opts{chunk_extra=ChunkExtra, preferred_file_name=Pref, - flag_fail_preferred=conv_to_boolean(FailPref)}. + flag_fail_preferred=FailPref}. 
conv_from_projection_v1(#projection_v1{epoch_number=Epoch, epoch_csum=CSum, diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl index 2bb5b20..a72c654 100644 --- a/src/machi_proxy_flu1_client.erl +++ b/src/machi_proxy_flu1_client.erl @@ -287,8 +287,6 @@ write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, Timeout) -> Timeout) of {error, written}=Err -> Size = byte_size(Chunk), - NSInfo = undefined, - io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), case read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, undefined, Timeout) of {ok, {[{File, Offset, Chunk2, _}], []}} when Chunk2 == Chunk -> %% See equivalent comment inside write_projection(). diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl index 84e37ad..14b8005 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -559,8 +559,10 @@ wait_until_stable(ExpectedChainState, FLUNames, MgrNames, Retries, Verbose) -> FCList = fc_list(), wait_until_stable1(ExpectedChainState, TickFun, FCList, Retries, Verbose). -wait_until_stable1(_ExpectedChainState, _TickFun, FCList, 0, _Verbose) -> +wait_until_stable1(ExpectedChainState, _TickFun, FCList, 0, _Verbose) -> + ?V(" [ERROR] _ExpectedChainState ~p\n", [ExpectedChainState]), ?V(" [ERROR] wait_until_stable failed.... : ~p~n", [chain_state(FCList)]), + ?V(" [ERROR] norm.... 
: ~p~n", [normalize_chain_state(chain_state(FCList))]), false; wait_until_stable1(ExpectedChainState, TickFun, FCList, Reties, Verbose) -> [TickFun(3, 0, 100) || _ <- lists:seq(1, 3)], -- 2.45.2 From f09eef14ebcff419069eba6fa2681dba79f0f026 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 30 Dec 2015 15:54:19 +0900 Subject: [PATCH 11/53] Fix damn-syntactically-valid-not-found-by-dialyzer typo --- src/machi_pb_translate.erl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index b1b1ecd..92d1bee 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -378,7 +378,7 @@ from_pb_response(#mpb_ll_response{ 'OK' -> {ReqID, {ok, Epochs}}; _ -> - {ReqID< machi_pb_high_client:convert_general_status_code(Status)} + {ReqID, machi_pb_high_client:convert_general_status_code(Status)} end. %% No response for proj_kp/kick_projection_reaction -- 2.45.2 From a3fc1c3d689ab918b97cbd171a9190a3dca74a1c Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Thu, 31 Dec 2015 14:34:15 +0900 Subject: [PATCH 12/53] Add namespace info to wedge_status API call; add namespace enforcement @ machi_flu1_net_server --- include/machi.hrl | 2 +- src/machi.proto | 2 ++ src/machi_cr_client.erl | 2 +- src/machi_dt.erl | 2 +- src/machi_flu1_client.erl | 4 +-- src/machi_flu1_net_server.erl | 32 ++++++++++------- src/machi_pb_translate.erl | 51 ++++++++++++++------------- test/machi_chain_manager1_test.erl | 18 +++++----- test/machi_cr_client_test.erl | 8 ++--- test/machi_flu1_test.erl | 31 +++++++++++++--- test/machi_flu_psup_test.erl | 12 +++---- test/machi_proxy_flu1_client_test.erl | 2 +- 12 files changed, 99 insertions(+), 67 deletions(-) diff --git a/include/machi.hrl b/include/machi.hrl index 51eda4f..7974fd2 100644 --- a/include/machi.hrl +++ b/include/machi.hrl @@ -46,7 +46,7 @@ -record(ns_info, { version = 0 :: machi_dt:namespace_version(), - name = "" :: machi_dt:namespace(), + name = 
<<>> :: machi_dt:namespace(), locator = 0 :: machi_dt:locator() }). diff --git a/src/machi.proto b/src/machi.proto index 7c55136..a9ac513 100644 --- a/src/machi.proto +++ b/src/machi.proto @@ -507,6 +507,8 @@ message Mpb_LL_WedgeStatusResp { required Mpb_GeneralStatusCode status = 1; optional Mpb_EpochID epoch_id = 2; optional bool wedged_flag = 3; + optional uint32 namespace_version = 4; + optional string namespace = 5; } // Low level API: delete_migration() diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index 9edf30f..7f5d726 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -477,7 +477,7 @@ witnesses_use_our_epoch([FLU|RestFLUs], Proxy = orddict:fetch(FLU, PD), %% Check both that the EpochID is the same *and* not wedged! case ?FLU_PC:wedge_status(Proxy, ?TIMEOUT) of - {ok, {false, EID}} when EID == EpochID -> + {ok, {false, EID,_,_}} when EID == EpochID -> witnesses_use_our_epoch(RestFLUs, S); _Else -> false diff --git a/src/machi_dt.erl b/src/machi_dt.erl index fc38eaf..de34b64 100644 --- a/src/machi_dt.erl +++ b/src/machi_dt.erl @@ -44,7 +44,7 @@ -type inet_host() :: inet:ip_address() | inet:hostname(). -type inet_port() :: inet:port_number(). -type locator() :: number(). --type namespace() :: string(). +-type namespace() :: binary(). -type namespace_version() :: non_neg_integer(). -type ns_info() :: #ns_info{}. -type projection() :: #projection_v1{}. diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index 3c28ce7..26b87cb 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -243,7 +243,7 @@ list_files(Host, TcpPort, EpochID) when is_integer(TcpPort) -> %% @doc Fetch the wedge status from the remote FLU. -spec wedge_status(port_wrap()) -> - {ok, {boolean(), machi_dt:epoch_id()}} | {error, term()}. + {ok, {boolean(), machi_dt:epoch_id(), machi_dt:namespace_version(),machi_dt:namespace()}} | {error, term()}. wedge_status(Sock) -> wedge_status2(Sock). 
@@ -251,7 +251,7 @@ wedge_status(Sock) -> %% @doc Fetch the wedge status from the remote FLU. -spec wedge_status(machi_dt:inet_host(), machi_dt:inet_port()) -> - {ok, {boolean(), machi_dt:epoch_id()}} | {error, term()}. + {ok, {boolean(), machi_dt:epoch_id(), machi_dt:namespace_version(),machi_dt:namespace()}} | {error, term()}. wedge_status(Host, TcpPort) when is_integer(TcpPort) -> Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), try diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 8ec237e..8247589 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -66,6 +66,10 @@ flu_name :: pv1_server(), %% Used in server_wedge_status to lookup the table epoch_tab :: ets:tab(), + %% Clustering: cluster map version number + namespace_version = 0 :: machi_dt:namespace_version(), + %% Clustering: my (and my chain's) assignment to a specific namespace + namespace = <<"">> :: machi_dt:namespace(), %% High mode only high_clnt :: pid(), @@ -228,6 +232,8 @@ do_pb_ll_request(PB_request, S) -> end, {machi_pb_translate:to_pb_response(ReqID, Cmd, Result), S2}. +%% do_pb_ll_request2(): Verification of epoch details & namespace details. + do_pb_ll_request2(NSVersion, NS, EpochID, CMD, S) -> {Wedged_p, CurrentEpochID} = lookup_epoch(S), if not is_tuple(EpochID) orelse tuple_size(EpochID) /= 2 -> @@ -238,26 +244,26 @@ do_pb_ll_request2(NSVersion, NS, EpochID, CMD, S) -> {Epoch, _} = EpochID, {CurrentEpoch, _} = CurrentEpochID, if Epoch < CurrentEpoch -> - ok; + {{error, bad_epoch}, S}; true -> - %% We're at same epoch # but different checksum, or - %% we're at a newer/bigger epoch #. 
_ = machi_flu1:wedge_myself(S#state.flu_name, CurrentEpochID), - ok - end, - {{error, bad_epoch}, S#state{epoch_id=CurrentEpochID}}; + {{error, wedged}, S#state{epoch_id=CurrentEpochID}} + end; true -> - do_pb_ll_request2b(CMD, S#state{epoch_id=CurrentEpochID}) + #state{namespace_version=MyNSVersion, namespace=MyNS} = S, + if NSVersion /= MyNSVersion -> + {{error, bad_epoch}, S}; + NS /= MyNS -> + {{error, bad_arg}, S}; + true -> + do_pb_ll_request3(CMD, S) + end end. lookup_epoch(#state{epoch_tab=T}) -> %% TODO: race in shutdown to access ets table after owner dies ets:lookup_element(T, epoch, 2). -do_pb_ll_request2b(CMD, S) -> - io:format(user, "TODO: check NSVersion & NS\n", []), - do_pb_ll_request3(CMD, S). - %% Witness status does not matter below. do_pb_ll_request3({low_echo, Msg}, S) -> {Msg, S}; @@ -463,14 +469,14 @@ do_server_list_files(#state{data_dir=DataDir}=_S) -> {Size, File} end || File <- Files]}. -do_server_wedge_status(S) -> +do_server_wedge_status(#state{namespace_version=NSVersion, namespace=NS}=S) -> {Wedged_p, CurrentEpochID0} = lookup_epoch(S), CurrentEpochID = if CurrentEpochID0 == undefined -> ?DUMMY_PV1_EPOCH; true -> CurrentEpochID0 end, - {Wedged_p, CurrentEpochID}. + {Wedged_p, CurrentEpochID, NSVersion, NS}. 
do_server_delete_migration(File, #state{data_dir=DataDir}=_S) -> case sanitize_file_string(File) of diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 92d1bee..20aa897 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -54,12 +54,13 @@ from_pb_request(#mpb_ll_request{ req_id=ReqID, append_chunk=IR=#mpb_ll_appendchunkreq{ namespace_version=NSVersion, - namespace=NS, + namespace=NS_str, locator=NSLocator, epoch_id=PB_EpochID, prefix=Prefix, chunk=Chunk, csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}}}) -> + NS = list_to_binary(NS_str), EpochID = conv_to_epoch_id(PB_EpochID), CSum_tag = conv_to_csum_tag(CSum_type), Opts = conv_to_append_opts(IR), @@ -71,12 +72,13 @@ from_pb_request(#mpb_ll_request{ req_id=ReqID, write_chunk=#mpb_ll_writechunkreq{ namespace_version=NSVersion, - namespace=NS, + namespace=NS_str, epoch_id=PB_EpochID, chunk=#mpb_chunk{file_name=File, offset=Offset, chunk=Chunk, csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}}}}) -> + NS = list_to_binary(NS_str), EpochID = conv_to_epoch_id(PB_EpochID), CSum_tag = conv_to_csum_tag(CSum_type), {ReqID, {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}}; @@ -84,12 +86,13 @@ from_pb_request(#mpb_ll_request{ req_id=ReqID, read_chunk=#mpb_ll_readchunkreq{ namespace_version=NSVersion, - namespace=NS, + namespace=NS_str, epoch_id=PB_EpochID, chunk_pos=ChunkPos, flag_no_checksum=PB_GetNoChecksum, flag_no_chunk=PB_GetNoChunk, flag_needs_trimmed=PB_NeedsTrimmed}}) -> + NS = list_to_binary(NS_str), EpochID = conv_to_epoch_id(PB_EpochID), Opts = #read_opts{no_checksum=PB_GetNoChecksum, no_chunk=PB_GetNoChunk, @@ -102,12 +105,13 @@ from_pb_request(#mpb_ll_request{ req_id=ReqID, trim_chunk=#mpb_ll_trimchunkreq{ namespace_version=NSVersion, - namespace=NS, + namespace=NS_str, epoch_id=PB_EpochID, file=File, offset=Offset, size=Size, trigger_gc=TriggerGC}}) -> + NS = list_to_binary(NS_str), EpochID = conv_to_epoch_id(PB_EpochID), {ReqID, 
{low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, TriggerGC}}; from_pb_request(#mpb_ll_request{ @@ -179,10 +183,11 @@ from_pb_request(#mpb_request{req_id=ReqID, {ReqID, {high_auth, User, Pass}}; from_pb_request(#mpb_request{req_id=ReqID, append_chunk=IR=#mpb_appendchunkreq{}}) -> - #mpb_appendchunkreq{namespace=NS, + #mpb_appendchunkreq{namespace=NS_str, prefix=Prefix, chunk=Chunk, csum=CSum} = IR, + NS = list_to_binary(NS_str), TaggedCSum = make_tagged_csum(CSum, Chunk), Opts = conv_to_append_opts(IR), {ReqID, {high_append_chunk, NS, Prefix, Chunk, TaggedCSum, Opts}}; @@ -310,9 +315,16 @@ from_pb_response(#mpb_ll_response{ from_pb_response(#mpb_ll_response{ req_id=ReqID, wedge_status=#mpb_ll_wedgestatusresp{ - epoch_id=PB_EpochID, wedged_flag=Wedged_p}}) -> + status=Status, + epoch_id=PB_EpochID, wedged_flag=Wedged_p, + namespace_version=NSVersion, namespace=NS_str}}) -> + GeneralStatus = case machi_pb_high_client:convert_general_status_code(Status) of + ok -> ok; + _Else -> {yukky, _Else} + end, EpochID = conv_to_epoch_id(PB_EpochID), - {ReqID, {ok, {Wedged_p, EpochID}}}; + NS = list_to_binary(NS_str), + {ReqID, {GeneralStatus, {Wedged_p, EpochID, NSVersion, NS}}}; from_pb_response(#mpb_ll_response{ req_id=ReqID, delete_migration=#mpb_ll_deletemigrationresp{ @@ -511,7 +523,7 @@ to_pb_response(ReqID, {low_skip_wedge, {low_echo, _Msg}}, Resp) -> #mpb_ll_response{ req_id=ReqID, echo=#mpb_echoresp{message=Resp}}; -to_pb_response(ReqID, {low_skip_wedige, {low_auth, _, _}}, __TODO_Resp) -> +to_pb_response(ReqID, {low_skip_wedge, {low_auth, _, _}}, __TODO_Resp) -> #mpb_ll_response{req_id=ReqID, generic=#mpb_errorresp{code=1, msg="AUTH not implemented"}}; @@ -608,13 +620,16 @@ to_pb_response(ReqID, {low_skip_wedge, {low_wedge_status}}, Resp) -> Status = conv_from_status(Error), #mpb_ll_response{req_id=ReqID, wedge_status=#mpb_ll_wedgestatusresp{status=Status}}; - {Wedged_p, EpochID} -> + {Wedged_p, EpochID, NSVersion, NS} -> PB_EpochID = 
conv_from_epoch_id(EpochID), #mpb_ll_response{req_id=ReqID, wedge_status=#mpb_ll_wedgestatusresp{ status='OK', epoch_id=PB_EpochID, - wedged_flag=Wedged_p}} + wedged_flag=Wedged_p, + namespace_version=NSVersion, + namespace=NS + }} end; to_pb_response(ReqID, {low_skip_wedge, {low_delete_migration, _EID, _Fl}}, Resp)-> Status = conv_from_status(Resp), @@ -807,12 +822,12 @@ make_tagged_csum(#mpb_chunkcsum{type='CSUM_TAG_CLIENT_SHA', csum=CSum}, _CB) -> make_ll_error_resp(ReqID, Code, Msg) -> #mpb_ll_response{req_id=ReqID, generic=#mpb_errorresp{code=Code, - msg=Msg}}. + msg=Msg}}. make_error_resp(ReqID, Code, Msg) -> #mpb_response{req_id=ReqID, generic=#mpb_errorresp{code=Code, - msg=Msg}}. + msg=Msg}}. conv_from_epoch_id({Epoch, EpochCSum}) -> #mpb_epochid{epoch_number=Epoch, @@ -972,18 +987,6 @@ conv_from_status(_OOPS) -> io:format(user, "HEY, ~s:~w got ~p\n", [?MODULE, ?LINE, _OOPS]), 'BAD_JOSS'. -conv_to_boolean(undefined) -> - false; -conv_to_boolean(0) -> - false; -conv_to_boolean(N) when is_integer(N) -> - true. - -conv_from_boolean(false) -> - 0; -conv_from_boolean(true) -> - 1. 
- conv_from_append_opts(#append_opts{chunk_extra=ChunkExtra, preferred_file_name=Pref, flag_fail_preferred=FailPref}) -> diff --git a/test/machi_chain_manager1_test.erl b/test/machi_chain_manager1_test.erl index 02010ff..80296d2 100644 --- a/test/machi_chain_manager1_test.erl +++ b/test/machi_chain_manager1_test.erl @@ -401,7 +401,7 @@ nonunanimous_setup_and_fix_test2() -> Mb, ChainName, TheEpoch_3, ap_mode, MembersDict4, []), Advance(), - {ok, {true, _}} = ?FLU_PC:wedge_status(Proxy_a), + {ok, {true, _,_,_}} = ?FLU_PC:wedge_status(Proxy_a), {_, _, TheEpoch_4} = ?MGR:trigger_react_to_env(Mb), {_, _, TheEpoch_4} = ?MGR:trigger_react_to_env(Mc), [{ok, #projection_v1{upi=[b,c], repairing=[]}} = @@ -451,9 +451,9 @@ nonunanimous_setup_and_fix_test2() -> #p_srvr{name=NameA} = hd(Ps), {ok,_}=machi_flu_psup:start_flu_package(NameA, TcpPort+1, hd(Dirs), Opts), Advance(), - {ok, {true, _}} = ?FLU_PC:wedge_status(Proxy_a), - {ok, {false, EpochID_8}} = ?FLU_PC:wedge_status(Proxy_b), - {ok, {false, EpochID_8}} = ?FLU_PC:wedge_status(Proxy_c), + {ok, {true, _,_,_}} = ?FLU_PC:wedge_status(Proxy_a), + {ok, {false, EpochID_8,_,_}} = ?FLU_PC:wedge_status(Proxy_b), + {ok, {false, EpochID_8,_,_}} = ?FLU_PC:wedge_status(Proxy_c), [{ok, #projection_v1{upi=[b,c], repairing=[]}} = ?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- tl(Proxies)], @@ -463,8 +463,8 @@ nonunanimous_setup_and_fix_test2() -> ok = machi_flu_psup:stop_flu_package(a), Advance(), machi_flu1_test:clean_up_data_dir(hd(Dirs)), - {ok, {false, _}} = ?FLU_PC:wedge_status(Proxy_b), - {ok, {false, _}} = ?FLU_PC:wedge_status(Proxy_c), + {ok, {false, _,_,_}} = ?FLU_PC:wedge_status(Proxy_b), + {ok, {false, _,_,_}} = ?FLU_PC:wedge_status(Proxy_c), %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% io:format("STEP: Add a to the chain again (a is stopped).\n", []), @@ -482,9 +482,9 @@ nonunanimous_setup_and_fix_test2() -> {ok,_}=machi_flu_psup:start_flu_package(NameA, TcpPort+1, hd(Dirs), Opts), Advance(), - {ok, {false, 
{TheEpoch10,_}}} = ?FLU_PC:wedge_status(Proxy_a), - {ok, {false, {TheEpoch10,_}}} = ?FLU_PC:wedge_status(Proxy_b), - {ok, {false, {TheEpoch10,_}}} = ?FLU_PC:wedge_status(Proxy_c), + {ok, {false, {TheEpoch10,_},_,_}} = ?FLU_PC:wedge_status(Proxy_a), + {ok, {false, {TheEpoch10,_},_,_}} = ?FLU_PC:wedge_status(Proxy_b), + {ok, {false, {TheEpoch10,_},_,_}} = ?FLU_PC:wedge_status(Proxy_c), [{ok, #projection_v1{upi=[b,c], repairing=[a]}} = ?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- Proxies], ok diff --git a/test/machi_cr_client_test.erl b/test/machi_cr_client_test.erl index 4228913..885fb35 100644 --- a/test/machi_cr_client_test.erl +++ b/test/machi_cr_client_test.erl @@ -259,15 +259,15 @@ witness_smoke_test2() -> %% Let's wedge OurWitness and see what happens: timeout/partition. #p_srvr{name=WitName, address=WitA, port=WitP} = orddict:fetch(OurWitness, D), - {ok, {false, EpochID2}} = machi_flu1_client:wedge_status(WitA, WitP), + {ok, {false, EpochID2,_,_}} = machi_flu1_client:wedge_status(WitA, WitP), machi_flu1:wedge_myself(WitName, EpochID2), case machi_flu1_client:wedge_status(WitA, WitP) of - {ok, {true, EpochID2}} -> + {ok, {true, EpochID2,_,_}} -> ok; - {ok, {false, EpochID2}} -> + {ok, {false, EpochID2,_,_}} -> %% This is racy. Work around it by sleeping a while. 
timer:sleep(6*1000), - {ok, {true, EpochID2}} = + {ok, {true, EpochID2,_,_}} = machi_flu1_client:wedge_status(WitA, WitP) end, diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl index 556f63e..fc34a51 100644 --- a/test/machi_flu1_test.erl +++ b/test/machi_flu1_test.erl @@ -104,7 +104,7 @@ flu_smoke_test() -> {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, BadFile), {ok, []} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH), - {ok, {false, _}} = ?FLU_C:wedge_status(Host, TcpPort), + {ok, {false, _,_,_}} = ?FLU_C:wedge_status(Host, TcpPort), Chunk1 = <<"yo!">>, {ok, {Off1,Len1,File1}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, @@ -173,6 +173,28 @@ flu_smoke_test() -> NSInfo, ?DUMMY_PV1_EPOCH, BadFile, Off2, Len2, noopt), + %% Make a connected socket. + Sock1 = ?FLU_C:connect(#p_srvr{address=Host, port=TcpPort}), + + %% Let's test some cluster version enforcement. + Good_EpochNum = 0, + Good_NSVersion = 0, + Good_NS = <<>>, + {ok, {false, {Good_EpochNum,_}, Good_NSVersion, Good_NS}} = + ?FLU_C:wedge_status(Sock1), + NS_good = #ns_info{version=Good_NSVersion, name=Good_NS}, + {ok, {[{_, Off2, Chunk2, _}], _}} = + ?FLU_C:read_chunk(Sock1, NS_good, ?DUMMY_PV1_EPOCH, + File2, Off2, Len2, noopt), + NS_bad_version = #ns_info{version=1, name=Good_NS}, + NS_bad_name = #ns_info{version=Good_NSVersion, name= <<"foons">>}, + {error, bad_epoch} = + ?FLU_C:read_chunk(Sock1, NS_bad_version, ?DUMMY_PV1_EPOCH, + File2, Off2, Len2, noopt), + {error, bad_arg} = + ?FLU_C:read_chunk(Sock1, NS_bad_name, ?DUMMY_PV1_EPOCH, + File2, Off2, Len2, noopt), + %% We know that File1 still exists. Pretend that we've done a %% migration and exercise the delete_migration() API. 
ok = ?FLU_C:delete_migration(Host, TcpPort, ?DUMMY_PV1_EPOCH, File1), @@ -188,8 +210,7 @@ flu_smoke_test() -> {error, bad_arg} = ?FLU_C:trunc_hack(Host, TcpPort, ?DUMMY_PV1_EPOCH, BadFile), - ok = ?FLU_C:quit(?FLU_C:connect(#p_srvr{address=Host, - port=TcpPort})) + ok = ?FLU_C:quit(Sock1) after machi_test_util:stop_flu_package() end. @@ -202,7 +223,7 @@ flu_projection_smoke_test() -> try [ok = flu_projection_common(Host, TcpPort, T) || T <- [public, private] ] -%% , {ok, {false, EpochID1}} = ?FLU_C:wedge_status(Host, TcpPort), +%% , {ok, {false, EpochID1,_,_}} = ?FLU_C:wedge_status(Host, TcpPort), %% io:format(user, "EpochID1 ~p\n", [EpochID1]) after machi_test_util:stop_flu_package() @@ -278,7 +299,7 @@ witness_test() -> File, 9999, 9999, noopt), {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, File), {error, bad_arg} = ?FLU_C:list_files(Host, TcpPort, EpochID1), - {ok, {false, EpochID1}} = ?FLU_C:wedge_status(Host, TcpPort), + {ok, {false, EpochID1,_,_}} = ?FLU_C:wedge_status(Host, TcpPort), {ok, _} = ?FLU_C:get_latest_epochid(Host, TcpPort, public), {ok, _} = ?FLU_C:read_latest_projection(Host, TcpPort, public), {error, not_written} = ?FLU_C:read_projection(Host, TcpPort, diff --git a/test/machi_flu_psup_test.erl b/test/machi_flu_psup_test.erl index 5c068af..bc43437 100644 --- a/test/machi_flu_psup_test.erl +++ b/test/machi_flu_psup_test.erl @@ -94,13 +94,13 @@ partial_stop_restart2() -> end, try [Start(P) || P <- Ps], - [{ok, {true, _}} = WedgeStatus(P) || P <- Ps], % all are wedged + [{ok, {true, _,_,_}} = WedgeStatus(P) || P <- Ps], % all are wedged [{error,wedged} = Append(P, ?DUMMY_PV1_EPOCH) || P <- Ps], % all are wedged [machi_chain_manager1:set_chain_members(ChMgr, Dict) || ChMgr <- ChMgrs ], - {ok, {false, EpochID1}} = WedgeStatus(hd(Ps)), - [{ok, {false, EpochID1}} = WedgeStatus(P) || P <- Ps], % *not* wedged + {ok, {false, EpochID1,_,_}} = WedgeStatus(hd(Ps)), + [{ok, {false, EpochID1,_,_}} = WedgeStatus(P) || P <- Ps], % *not* wedged [{ok,_} = 
Append(P, EpochID1) || P <- Ps], % *not* wedged {ok, {_,_,File1}} = Append(hd(Ps), EpochID1), @@ -126,9 +126,9 @@ partial_stop_restart2() -> Epoch_m = Proj_m#projection_v1.epoch_number, %% Confirm that all FLUs are *not* wedged, with correct proj & epoch Proj_mCSum = Proj_m#projection_v1.epoch_csum, - [{ok, {false, {Epoch_m, Proj_mCSum}}} = WedgeStatus(P) || % *not* wedged + [{ok, {false, {Epoch_m, Proj_mCSum},_,_}} = WedgeStatus(P) || % *not* wedged P <- Ps], - {ok, {false, EpochID1}} = WedgeStatus(hd(Ps)), + {ok, {false, EpochID1,_,_}} = WedgeStatus(hd(Ps)), [{ok,_} = Append(P, EpochID1) || P <- Ps], % *not* wedged %% Stop all but 'a'. @@ -160,7 +160,7 @@ partial_stop_restart2() -> {now_using,_,Epoch_n} = machi_chain_manager1:trigger_react_to_env( hd(ChMgrs)), true = (Epoch_n > Epoch_m), - {ok, {false, EpochID3}} = WedgeStatus(hd(Ps)), + {ok, {false, EpochID3,_,_}} = WedgeStatus(hd(Ps)), %% The file we're assigned should be different with the epoch change. {ok, {_,_,File3}} = Append(hd(Ps), EpochID3), true = (File1 /= File3), diff --git a/test/machi_proxy_flu1_client_test.erl b/test/machi_proxy_flu1_client_test.erl index 66c52f4..617afff 100644 --- a/test/machi_proxy_flu1_client_test.erl +++ b/test/machi_proxy_flu1_client_test.erl @@ -89,7 +89,7 @@ io:format(user, "\nTODO: fix write_chunk() call below @ ~s LINE ~w\n", [?MODULE, BadFile = <<"no-such-file">>, {error, bad_arg} = ?MUT:checksum_list(Prox1, BadFile), {ok, [_|_]} = ?MUT:list_files(Prox1, FakeEpoch), - {ok, {false, _}} = ?MUT:wedge_status(Prox1), + {ok, {false, _,_,_}} = ?MUT:wedge_status(Prox1), {ok, {0, _SomeCSum}} = ?MUT:get_latest_epochid(Prox1, public), {ok, #projection_v1{epoch_number=0}} = ?MUT:read_latest_projection(Prox1, public), -- 2.45.2 From 3b594504fe82a565dbabe6862a472b4cf0976e11 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Thu, 31 Dec 2015 15:44:51 +0900 Subject: [PATCH 13/53] Client API module edoc added, see also http://www.snookles.com/scotttmp/IMG_7279-copy-copy.jpg --- 
src/machi_cr_client.erl | 63 ++------------------------------ src/machi_flu1_client.erl | 65 +++++++++++++++++++++++++++++++++ src/machi_pb_high_client.erl | 4 ++ src/machi_proxy_flu1_client.erl | 4 ++ 4 files changed, 76 insertions(+), 60 deletions(-) diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index 7f5d726..e77f9e2 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -21,8 +21,9 @@ %% @doc Erlang API for the Machi client-implemented Chain Replication %% (CORFU-style) protocol. %% -%% See also the docs for {@link machi_flu1_client} for additional -%% details on data types and operation descriptions. +%% Please see {@link machi_flu1_client} the "Client API implementation notes" +%% section for how this module relates to the rest of the client API +%% implementation. %% %% The API here is much simpler than the {@link machi_flu1_client} or %% {@link machi_proxy_flu1_client} APIs. This module's API is a @@ -43,64 +44,6 @@ %% %% Doc TODO: Once this API stabilizes, add all relevant data type details %% to the EDoc here. -%% -%% -%% === Missing API features === -%% -%% So far, there is one missing client API feature that ought to be -%% added to Machi in the near future: more flexible checksum -%% management. -%% -%% Add a `source' annotation to all checksums to indicate where the -%% checksum was calculated. For example, -%% -%%
    -%% -%%
  • Calculated by client that performed the original chunk append, -%%
  • -%% -%%
  • Calculated by the 1st Machi server to receive an -%% un-checksummed append request -%%
  • -%% -%%
  • Re-calculated by Machi to manage fewer checksums of blocks of -%% data larger than the original client-specified chunks. -%%
  • -%%
-%% -%% Client-side checksums would be the "strongest" type of -%% checksum, meaning that any data corruption (of the original -%% data and/or of the checksum itself) can be detected after the -%% client-side calculation. There are too many horror stories on -%% The Net about IP PDUs that are corrupted but unnoticed due to -%% weak TCP checksums, buggy hardware, buggy OS drivers, etc. -%% Checksum versioning is also desirable if/when the current checksum -%% implementation changes from SHA-1 to something else. -%% -%% -%% === Implementation notes === -%% -%% The major operation processing is implemented in a state machine-like -%% manner. Before attempting an operation `X', there's an initial -%% operation `pre-X' that takes care of updating the epoch id, -%% restarting client protocol proxies, and if there's any server -%% instability (e.g. some server is wedged), then insert some sleep -%% time. When the chain appears to have stabilized, then we try the `X' -%% operation again. -%% -%% Function name for the `pre-X' stuff is usually `X()', and the -%% function name for the `X' stuff is usually `X2()'. (I.e., the `X' -%% stuff follows after `pre-X' and therefore has a `2' suffix on the -%% function name.) -%% -%% In the case of read repair, there are two stages: find the value to -%% perform the repair, then perform the repair writes. In the case of -%% the repair writes, the `pre-X' function is named `read_repair3()', -%% and the `X' function is named `read_repair4()'. -%% -%% TODO: It would be nifty to lift the very-nearly-but-not-quite-boilerplate -%% of the `pre-X' functions into a single common function ... but I'm not -%% sure yet on how to do it without making the code uglier. -module(machi_cr_client). 
diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index 26b87cb..bf36fee 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -38,6 +38,71 @@ %% TODO This EDoc was written first, and the EDoc and also `-type' and %% `-spec' definitions for {@link machi_proxy_flu1_client} and {@link %% machi_cr_client} must be improved. +%% +%% == Client API implementation notes == +%% +%% At the moment, there are several modules that implement various +%% subsets of the Machi API. The table below attempts to show how and +%% why they differ. +%% +%% ``` +%% |--------------------------+-------+-----+------+------+-------+----------------| +%% | | PB | | # | | Conn | Epoch & NS | +%% | Module name | Level | CR? | FLUS | Impl | Life? | version aware? | +%% |--------------------------+-------+-----+------+------+-------+----------------| +%% | machi_pb_high_api_client | high | yes | many | proc | long | no | +%% | machi_cr_client | low | yes | many | proc | long | no | +%% | machi_proxy_flu1_client | low | no | 1 | proc | long | yes | +%% | machi_flu1_client | low | no | 1 | lib | short | yes | +%% |--------------------------+-------+-----+------+------+-------+----------------| +%% ''' +%% +%% In terms of use and API layering, the table rows are in highest`->'lowest +%% order: each level calls the layer immediately below it. +%% +%%
+%%
PB Level
+%%
The Protocol Buffers API is divided logically into two levels, +%% "low" and "high". The low-level protocol is used for intra-chain +%% communication. The high-level protocol is used for clients outside +%% of a Machi chain or Machi cluster of chains. +%%
+%%
CR?
+%%
Does this API support (directly or indirectly) Chain +%% Replication? If `no', then the API has no awareness of multiple +%% replicas of any file or file chunk; unaware clients can only +%% perform operations at a single Machi FLU's file service or +%% projection store service. +%%
+%%
# FLUs
+%%
How many FLUs does this API layer communicate with +%% simultaneously? Note that there is a one-to-one correspondence +%% between this value and the "CR?" column's value. +%%
+%%
Impl
+%%
Implementation: library-only or an Erlang process, +%% e.g., `gen_server'. +%%
+%%
Conn Life?
+%%
Expected TCP session connection life: short or long. At the +%% lowest level, the {@link machi_flu1_client} API implementation takes +%% no effort to reconnect to a remote FLU when its single TCP session +%% is broken. For long-lived connection life APIs, the server side will +%% automatically attempt to reconnect to remote FLUs when a TCP session +%% is broken. +%%
+%%
Epoch & NS version aware?
+%%
Are clients of this API responsible for knowing a chain's EpochID +%% and namespace version numbers? If `no', then the server side of the +%% API will automatically attempt to discover/re-discover the EpochID and +%% namespace version numbers whenever they change. +%%
+%%
+%% +%% The only protocol that we expect to be used by entities outside of +%% a single Machi chain or a multi-chain cluster is the "high" +%% Protocol Buffers API. The {@link machi_pb_high_client} module +%% is an Erlang reference implementation of this PB API. -module(machi_flu1_client). diff --git a/src/machi_pb_high_client.erl b/src/machi_pb_high_client.erl index 37c513e..85600bd 100644 --- a/src/machi_pb_high_client.erl +++ b/src/machi_pb_high_client.erl @@ -25,6 +25,10 @@ %% to a single socket connection, and there is no code to deal with %% multiple connections/load balancing/error handling to several/all %% Machi cluster servers. +%% +%% Please see {@link machi_flu1_client} the "Client API implementation notes" +%% section for how this module relates to the rest of the client API +%% implementation. -module(machi_pb_high_client). diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl index a72c654..5a85cd3 100644 --- a/src/machi_proxy_flu1_client.erl +++ b/src/machi_proxy_flu1_client.erl @@ -22,6 +22,10 @@ %% proxy-process style API for hiding messy details such as TCP %% connection/disconnection with the remote Machi server. %% +%% Please see {@link machi_flu1_client} the "Client API implementation notes" +%% section for how this module relates to the rest of the client API +%% implementation. +%% +%% Machi is intentionally avoiding using distributed Erlang for %% Machi's communication. 
This design decision makes Erlang-side code %% more difficult & complex, but it's the price to pay for some -- 2.45.2 From 3b82dc2e38351bdfcfd1cc00ae5c0bc5503238ad Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Thu, 31 Dec 2015 16:20:38 +0900 Subject: [PATCH 14/53] 'Thread through' FLU props to machi_flu1_net_server --- src/machi_flu1.erl | 8 +++++--- src/machi_flu1_net_server.erl | 12 ++++++++---- src/machi_flu1_subsup.erl | 17 ++++++++++------- 3 files changed, 23 insertions(+), 14 deletions(-) diff --git a/src/machi_flu1.erl b/src/machi_flu1.erl index b75d955..8a33a04 100644 --- a/src/machi_flu1.erl +++ b/src/machi_flu1.erl @@ -129,7 +129,8 @@ main2(FluName, TcpPort, DataDir, Props) -> ok end, {ok, ListenerPid} = start_listen_server(FluName, TcpPort, Witness_p, DataDir, - ets_table_name(FluName), ProjectionPid), + ets_table_name(FluName), ProjectionPid, + Props), %% io:format(user, "Listener started: ~w~n", [{FluName, ListenerPid}]), Config_e = machi_util:make_config_filename(DataDir, "unused"), @@ -154,9 +155,10 @@ main2(FluName, TcpPort, DataDir, Props) -> start_append_server(FluName, Witness_p, Wedged_p, EpochId) -> machi_flu1_subsup:start_append_server(FluName, Witness_p, Wedged_p, EpochId). -start_listen_server(FluName, TcpPort, Witness_p, DataDir, EtsTab, ProjectionPid) -> +start_listen_server(FluName, TcpPort, Witness_p, DataDir, EtsTab, ProjectionPid, + Props) -> machi_flu1_subsup:start_listener(FluName, TcpPort, Witness_p, DataDir, - EtsTab, ProjectionPid). + EtsTab, ProjectionPid, Props). %% This is the name of the projection store that is spawned by the %% *flu*, for use primarily in testing scenarios. 
In normal use, we diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 8247589..5bbabba 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -69,20 +69,22 @@ %% Clustering: cluster map version number namespace_version = 0 :: machi_dt:namespace_version(), %% Clustering: my (and my chain's) assignment to a specific namespace - namespace = <<"">> :: machi_dt:namespace(), + namespace = <<>> :: machi_dt:namespace(), %% High mode only high_clnt :: pid(), %% anything you want - props = [] :: list() % proplist + props = [] :: proplists:proplist() }). -type socket() :: any(). -type state() :: #state{}. -spec start_link(ranch:ref(), socket(), module(), [term()]) -> {ok, pid()}. -start_link(Ref, Socket, Transport, [FluName, Witness, DataDir, EpochTab, ProjStore]) -> +start_link(Ref, Socket, Transport, [FluName, Witness, DataDir, EpochTab, ProjStore, Props]) -> + NS = proplists:get_value(namespace, Props, <<>>), + true = is_binary(NS), proc_lib:start_link(?MODULE, init, [#state{ref=Ref, socket=Socket, transport=Transport, @@ -90,7 +92,9 @@ start_link(Ref, Socket, Transport, [FluName, Witness, DataDir, EpochTab, ProjSto witness=Witness, data_dir=DataDir, epoch_tab=EpochTab, - proj_store=ProjStore}]). + proj_store=ProjStore, + namespace=NS, + props=Props}]). -spec init(state()) -> no_return(). init(#state{ref=Ref, socket=Socket, transport=Transport}=State) -> diff --git a/src/machi_flu1_subsup.erl b/src/machi_flu1_subsup.erl index 21fd6f5..566c118 100644 --- a/src/machi_flu1_subsup.erl +++ b/src/machi_flu1_subsup.erl @@ -36,7 +36,7 @@ -export([start_link/1, start_append_server/4, stop_append_server/1, - start_listener/6, + start_listener/7, stop_listener/1, subsup_name/1, listener_name/1]). @@ -67,11 +67,13 @@ stop_append_server(FluName) -> ok = supervisor:delete_child(SubSup, FluName). -spec start_listener(pv1_server(), inet:port_number(), boolean(), - string(), ets:tab(), atom() | pid()) -> {ok, pid()}. 
-start_listener(FluName, TcpPort, Witness, DataDir, EpochTab, ProjStore) -> + string(), ets:tab(), atom() | pid(), + proplists:proplist()) -> {ok, pid()}. +start_listener(FluName, TcpPort, Witness, DataDir, EpochTab, ProjStore, + Props) -> supervisor:start_child(subsup_name(FluName), listener_spec(FluName, TcpPort, Witness, DataDir, - EpochTab, ProjStore)). + EpochTab, ProjStore, Props)). -spec stop_listener(pv1_server()) -> ok. stop_listener(FluName) -> @@ -97,12 +99,13 @@ init([]) -> %% private -spec listener_spec(pv1_server(), inet:port_number(), boolean(), - string(), ets:tab(), atom() | pid()) -> supervisor:child_spec(). -listener_spec(FluName, TcpPort, Witness, DataDir, EpochTab, ProjStore) -> + string(), ets:tab(), atom() | pid(), + proplists:proplist()) -> supervisor:child_spec(). +listener_spec(FluName, TcpPort, Witness, DataDir, EpochTab, ProjStore, Props) -> ListenerName = listener_name(FluName), NbAcceptors = 10, TcpOpts = [{port, TcpPort}, {backlog, ?BACKLOG}], - NetServerOpts = [FluName, Witness, DataDir, EpochTab, ProjStore], + NetServerOpts = [FluName, Witness, DataDir, EpochTab, ProjStore, Props], ranch:child_spec(ListenerName, NbAcceptors, ranch_tcp, TcpOpts, machi_flu1_net_server, NetServerOpts). -- 2.45.2 From 2fddf2ec2d3b14ef304bd5546beeced2eaebb1f6 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 29 Jan 2016 15:10:00 +0900 Subject: [PATCH 15/53] Tweak make-faq.pl --- priv/make-faq.pl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/priv/make-faq.pl b/priv/make-faq.pl index 7edee07..b7a3089 100755 --- a/priv/make-faq.pl +++ b/priv/make-faq.pl @@ -36,7 +36,7 @@ while () { $indent = " " x ($count * 4); s/^#*\s*[0-9. ]*//; $anchor = "n$label"; - printf T1 "%s+ [%s %s](#%s)\n", $indent, $label, $_, $anchor; + printf T1 "%s+ [%s. %s](#%s)\n", $indent, $label, $_, $anchor; printf T2 "
\n", $anchor; $line =~ s/(#+)\s*[0-9. ]*/$1 $label. /; print T2 $line; -- 2.45.2 From 202ace33d3792c30d5ae9b8140193af6d476e0cd Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 29 Jan 2016 16:40:34 +0900 Subject: [PATCH 16/53] Add doc/process-protocol-module-overview.jpg --- doc/process-protocol-module-overview.jpg | Bin 0 -> 118170 bytes 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 doc/process-protocol-module-overview.jpg diff --git a/doc/process-protocol-module-overview.jpg b/doc/process-protocol-module-overview.jpg new file mode 100644 index 0000000000000000000000000000000000000000..eb7accf23780281b550910c97339bd090876030b GIT binary patch literal 118170 zcmeFZcU%<9w=dc~!!R&p7;=<2M9Db}Il~Yn8;~3&CqWSeVI+g7s4$XI!9b9tk`W}M zsDMb4q#!6D0>bOT?cV#`&pG$qcmI9AYNz|#t5;R6S`~V&RgH)54yOTHBRxYs00cq6 z06Kuf8DyAlgqJ%2n3@720058yD3At#L5LLkC4#g^7zN=JkRE_TR}eY?kPSc_VF1Vl z+5g7*Am%R~s02{zC@+bUgfsw3A7M00h$2;zHWdQ&hwnV z8;^W2kpxLetnv3%T2e|*SxQ=2Mv6yDL0M8-SxN~2c;fQ@NC3me-T50EB=P;k*fG{|a1p-73KvKXWvS5FgADadLo1TbLOHswME#3S#{f|O!O&Cm-fwKX= zUcn)flIC>wxfV*{B%YLE#;gNz_6$PMy<$3T9FMghX1dphX10sI6gKpN;7 zFLVe%hY^O?_G@ zH!oD~9`tkv*ao&BbO~2dI@_pf+zG$+QD3Lgl^% znt?IkH82Npe1s@pp{GBOWcm)}nuYQV0|P)Sbj$!95a;rdwBLbmP&w03j$x?YPT&P} zOhI*jgmMryNj$zn(oH}zbpkJeE~utf=-dj`)(Vx=0F~Vj)%q4Hu?M2G0@cv{AjDA* zaSs6PP`!&#-;6-CCZHC01U(;u?%qQ8uYf+N)-i|=8b3z~MjXAw^ym8f4uDbq87;4& z<%I-6iX=PX?(6Ikd?L)*H`MKfTfjLNPx}xrKeu2PXJ0p3kO3Nr=dQdJ@TUdnRq}cP zGCF2XKJnv<%D)Rdn%G209I5|K8i$8JLLqS$pcw|u>-(XHhkNG%fYAkFI+|7gx*pA1 z=>F*XzpTSNXPsSqcwC(Q+=85WWF@5J{QP*_k4r-fI{?r?ts}lKU}n$>Ld5q1G;>J5 zLFXg7E|E_1hmIt!fkb`$)(Pnm9l}UnXa@e37byZz{?2Pm%uDq*j~4LzeFQTl(&_%D z8v)F}`H$!o*c>UAWC@b4?)2Xu9XepmmZi~hb++W)58 zv_lOfVq*UiV^kMnnLlbGHzlvu++<{%X8mn_lqpc)6`1$$=D@UmD9u=+(VIuiBFV7K6 zh>setohgn-`&^J4kDP?8gd{JNJIK{t*-}UEue{Km8t-31hKGksgv&^r3-UNFrKF^E zTvGbDw6r+HAs!s*AL1M#?jOwervx2n)CPG4gm|6v=OLEpeD+*uh#D_c^KX&-0{$-c zzis;;#sJTAA?JcU&;1YVe?LA_7%@|TvgtWjFZW0t=MXnFUZUE?rKH7Wq^$p#3cs1O 
zr{x@vLMuIDmC!afLzDy2wtSmIPDBAr1T-YnYy8=lY)*{%=sZ42{e}^UFDQ)=<_a~& z*Vfkpmmvb4z5@sBW23>9#mh(uPJ}_>I9maHRLf6n^=z0!3&mwu17|XD+R%YRL88zd zUxHWX6#&U(nM5_CM76fgX;79}T+#0M(5IEsm53E;>#+bkIVOTvAI{<7&a^M_Jc57H zam_rq*yxiH(ckz^DN)n*E~z}C|HK6rzz%kDmS=E3>J@QGu+DSbYkHpellQm}Hhj1K z==GS%tbR_=WALV!$G-$kbz=oq&W&G8I>j+$_IwWS<|)P8Gq|o zG-(NU&s3>2U{sLzF63gQJ3hYHmmLIvZ;aM$P$Nsmo}dW5a+@u|Xxh%_(Jw&b>?kWr(_gcA z&r%D!QEQrycT$kRY`zY|#-n}s=nZ4Fm^D_tT&n*f$NW51zkD5cr1;pi1{;#3d^fPTjAHXi3pDq3^(a{FP`(S)|9T8?Ltxs2U6J;VIG^E6dWEr0p~7eZeJ2rjHITng}J4N20)QDeeR(dR_GR&yVjtIa*K zqU{){oGq>@mfkt;Gk9o(%&hJtnu8oJ^2_Uok+~*nZ;L2qMfZ-t^u*l%0J=Q;NH@AsN94dEZ2J>tYAd)aAop1?iv8u5n6^@u{U$JLF$go$Rw zsT_lu*}xPqICj8-7Kn59odDgr#Yixo^gxXK1ZCwMu-PQ1ta3syEw+ZBEAu~&Uj4Y% z@=l(0HRs9eAwT#3fKfHzIPhGgJGiIm1SPNH%FNv1=Z8JF#|rqRSIb5D;_irsQ<=2^#ny=BXL9CDxwA;A$~i2TFDzy@-@i^C z1N$t^Nn~;-tcl3GIKVHU`q5wef#xu=47|=s)K{AhFfWIZN*KH`je)vweg{^f_jt{y zzQuhq4sNvkGxZ1g6%@T3bEi&8JFh)AmB&MsLel;NzLUH0>7x4SouP-(zEo^v|aV;wM2CP(h-+ zvMJ2H09mW!z}BsZRtM678E|z+W%l5Vl*OQs#SMN7#K0zt4-iC$!ux0&?q1P;rr;kB z?S823lPu*md1l8&2ex(2?oz)zJ`PaTT@mui7o6V2=pTcp;BTnzWax$s=EcN{u+Jj5 znM+sc9Po7jy?YKSBPxG-Qv!3HXO&D>rCe8ehP3L@Xx*xaBSogI4F+#^ISm7_%I1%T zY#|1r8B|Y;cPY}Lk+(9Da2;9ts9H6%kOnP)4CFP)D3O<+DT5{n5eVJkx&=K zGBn6eNw;)SF>NkW>M3H+j@x?YrNzt^%Eyh1lAIhTXHbbu-cpYk>8m4Su9i zKC6{3A3nnB&SoMv6p=+Qr=`8P*>a^nn8IH@D}8^MagCx-`lUy=7|A;yGIR`r=zmug zG&CxrD0FS&;Fe(e00`m?d7(LZen>YWe9yNm<-oj~SO(n>QoWF#xkbQV>p)BlKX1ps zF4HbHSymAFRrX`N5N&TU@ps^@Uml(*=l{X`5Twl0JjETES`pr_Gkpv{FnI3LAAO*0 zSZM7O<fN5wwE$%b`L6pzRkb`B*XsEI5^)n0`JhkdFnD@9Uw&asYAYj zuMTh*4hIUO)d1EEafX0x%W|p@FF_fQ>Rkm%-xa6bTDIPi`C;_vOSw=gOQ*fL%Z%(; z$+&l+&adAQP{$&N18HdJN9*2D+TOaoYpmduaByKV9MCvKABSlo7|u8WJ@5+u&kFn6 zh^WL_GHxthB%8_vYO-Lus7mYtbslQ>Si;l_8kCsgK1G%;BecgM5{OYC`G0? 
z&g_C&q#Qx_RvJ$>_3s##(?=>t`mP*8O@v}_{-G>dFb3!3-w!Hb&w`uJ^;N!Op zM-;~DJ{fgM%iiqM{{Rz;jjT{uRp8@h@ngj!!PK~WRH@U)&)%m?^6cC!uPC+Gx!=!p z(8|R9$hZn+AtS$QZ2raT`ZtjYpLEQgLblPEKw-V@>!a(WxxQQb#e4W(;wCMH1fEZ9&ejW)M021Ii(Icl+^LBB6Y3EG6dC2_PR9IXDTZPeYN7E-@)T};l&HBDnW z{3b5UCL$P{JcMf(vT_F7yE(7$=%%p@8C8mDM3*^lB#w3qf zSADi3{a)Ag4SORxi>cZ-)>T4IUorHG874;K_hQp=*@QUCtLu5|)u@+RUrDr{TQ(wr zgsK7a(e&!7{o-t;D(J!4$t}t4Iu6`en*nS~{a52yx6#EO2POsnDneVwUZ{OeNL$1XO=r(3ltJV@ z#s?)_Qi6|{wZ6tZYm0VSw)-6VTQn=n(pN5=PoE{3C(_V5Z?nc!WrK)8_vSW3LXjoW zLKR01%Go8OEU{ja`pAVFML|DYRLp|SHK+oKkV;VA%1EYbB;khtWL-8%PY;40GT%7` zm>P7WWNr>ZA@tcMMYcax4mA_=1(SzC9rrXi`IjSZqI3gT|Kx$WdJRmbYpD_!{T8cS zLB>uLP<#i$BP|r4mGgW}` zQd$D=$Z4|Rc!^)2@#?*m!1PYrdDnUjQx#k-Ng1ytn`d!C-uiG?yQNfKC&rNq4+R$- zG=~mvNKh6G@{$=xud0$~4NuhkedBTPq;e}V&B#pxu$RjQDWQW4Ti=P$A|B18H&+J< zX{!5A)Y`yc5eG`)%8BH>TjWGMT{beGp0Rf_OW-Ie@}fo2@jDv(6{djB6Yq7D4!%Rd z4`_y@?n_7p7Toi_g1R!t;OT*$bqt_`K?aOKX|_?iSdI=`6hCgK;>aMS3)$1UmXz9E zd(k{lnGq>FGDt`C+X+Ramg9UumvUG}QVc(pm|v-N3_p$l`09D<2pLe2nid+()5VDx zc@{x2A`NiPW8IX?lpAB6XsdbXFQg*f6tr}gCWfJ(C#SZ6;$lnU4>7c@JM7uwh>nl=8WP#=5uWxJz{B53RVUe#aN(a^+~ zM#yecT>f>K$|-Gr$0R%Pr|f#h8{4?ZXWr7~bQcU>vNChk9U}M16$9=QrZFX`J+#p3 zIFeNMP=C7(x{g_+{y5nklgY+T%o&rMzq%+$9UOGn(WiJyMK3OyFYkLF9@bvqC5pO2vjCI-0W+do_yD+{qp+d zGVxu1^UNi_TAf7qF1~~eSM7D!Dy`%sM`aU*?bBx}E{7qBgEU-~4%h`<_KOFmFLO5t z2%SE6P8<5}T}ojDq;WVBQRzWs{DUXkT%cV>f%mh8>vC|#t*Wa9s>wjQB{rBKaiee^to9- zO*-1QH03r3{w#{JP@*LdS(>3$P&t*2tXcMt>Dj|LmrPLbv6s5hV2SgML?N&?0W6u_ z))728h0}0Uv9p)#XSc&q#U-I`D(C>_V;;03dOiP7m$w6b>NnPL-s;aq{7TbB`o}WE zm&6WLx|Te$Y*CI&xHqyKrZ3XY{{wJ-s+Kt=edh^&>B&sQ!`oDp+P2U+;Pu-PT?`C+D=MtNE-vL#@q?bB$mupiR*Jd#C9AZbO?jJ%O^Z0cKeW`U zVeA^(g_X-7LHCq?+f>nYvI~9GCXrdcZIa`}Sa^6fWqig@InGoifO10^QkF~XE!q|# zXI`+3&M~ zEq*1AMPkw`^Rr(I zdeOJA{K@C_PMxzcivf`WT(1_d$slpemo0`t=9_wqgVo=sXGI7K>6=JOn>*l~F1xF4LG_fo*D!Etfv(nW%2ugQ?-$?v zOWo1)1Y4JLc2e!GVC&4aoNT@p8ZUX$eBK=nJED@{&Rq;Zbq$;MU(i+4iBosd`91Ht zdsn1Nfqv=*b=ny>d-pQh$;ee?m58RgmV$@Rmo=|cfdTH1D1OK)6WH_EfhgTEYHVw2wOBM6`v_69}T86u~R3Ye@#4&f$}T2M)JrsJt>o 
zLGMgB=bmENQ&yTO6~8AN%cw?Sa{0MvHBj#`PN0UInBlVC;F<=WVh&`fF^w$J?QCI8t02a`he0->OcFq&8@5 zvgM*xKz402e~e9YP2|y*;0rCzwr-vn#!gZ`-fZvWojWAg`<#1?xf*t2>c_UD$$OFd z_4Z~x{{Bk$Zp3QQU#y4h72Tq8y%G&3A=~kf*y$zB4=a7k)N#$Sr7x++$LOzT@{9H? zUCdN~bE2ASU2h0}P0>>DK<6g0ePXkL+~AB*@0&WfS zw3oTZiFQPa_2kb=Bslr>K|S~${#^sVZ>Pz#RB`4oMmDojd+HJG0&Ab;?an56+gb63 zOot>Vo6jx2(FV~p^oBYX3=iBUqTnQav*(H2@n>pip=;9^dsJgDL+IZhwFwDmzZheQ zU&G{>=?e`+Ve(^YI`Z8xiCtwiOKCbT>5M!U7M%ANMFXBnkm>Scjavi$HjY*6{Gp3a ztx=v_j3ZJZjLhOCI_Rl%nt*una%SrWyi4abSoZN4DuE&h^an)f6an!Ze~@xT_8|tj zG|{hT38JDh()sDL^f6&h3A8yLnt9lq$lDn&;P#~^O&6yO=#>j5#n*CgYy;@Lm5pzB zegQl2`n#OE&xBB=ozxJp>@^t6 zpWSG4qp`vyxVyyZTD!+EE7I^s(^sl!x>O88GFBhEU3?|HjtGJ(es+GuD`Ir`QMb%7 z#}RWYf_JrSp_OM|#-pD*WW|VfZN&7>N$d;x9Lt{?o=|o8HI6H9(VBVLs$jyiT3~Q2 zkPl`9cD)?I>Vw8F{j$g@G5u-Sl>7)yx7Bo7VHyuF{EG1@N1=6%WkDj>ntE(VG|sdd z)&P7Mteene>4oGt5JsXnjjco96;`1m(WjBt={yJl^?&_5uMh@wld#y{0bC_HeRV94!n+hEv5ZAAZ|&VRP7|XW*2>lL?)1u-yhx)#`d2$WcoZ(X=`u zX7HJ+DJ9AAb3rWcR{6rtg5uwLf~#Y13>oOD&t1fhom zs&kDDg!?Q=G3a1B-dZu2&AJzT@$_;>&ImAa6ecuhM6C;_YE$^&%9&a!IF~tI4bu&8 zb56&GQC9>roA=N49>bnXeEciQ!D6lY+GSi;yTi@_+PE-3iWhKwg3&>LhBtFzCWrXQ zhLFQn7>5Prh~_A3P^|^8k)N{gX1*XlJ^W6k>diH$Z{es(78eS}*H-m@@*8xtj&CZl(H@ z-XkyPf9}stJb$9)tCk<3>;d{lKrrPp?rzK9ufI?r=e2%ZkmG^EADfz4_1&K=;(o7l zcKG&n@*3;k`lYXbKP0!DsX_Hiri6LPm6DVvU#&F?p%Vvikh{lP$q(f1nGb?*)d^e) zUd0;#uPB1{y5F7ql)J|!^eXJY&Ry;@8?A17KM+5v@{p=2u5)=g$J^=puDjaq3#-3o z@K*89vEuCR2UWf7u0w{Vrj}-PDTgF+W}O(~GOSPHIB<`6r}eLEw*GcsXITIK<&FDy zj%UL#9~NgG7m~B;V(sx*sx|U;jCWPdbDA@D;$vibWp_&ugz>NOHzHPfyD@oBE!Kbf zoyyX4Gn2GXKpjSbZvw_){kNN&j~?Uh<6?i?%fX7AOUJ(FdIj_S^eNxK5+9(&;kB+FvMXx`=7Jsl1bm>6tuA}nR8C%Aa zf~?+h({_yvt<}Xw0r;S5lp8Tdst8ES|*ccaFIOXUym*t)`U?O6chJK z6oeM=1yiYE0}govAQpw`BSrorU5*kF_kz@L%W4Jv(^mo(qVf5Nh3cXXX#sws-kd*! 
z{_?*C2Wm?#>*k079u>e-brok#IsEt{OM8R5rZH@6-em|i5tMmde8uA){F&49eS^n_ z6sIc{7>rY|G%sIRc{z|sNkcE_+k;D&N4z{Uy|8FzYF6A-q>8JHEPJ2-46zH)=xsxj zCAFhOys|z21Ae>LZqI44`ZE`5^#YoMe zNYmM-6Oi(&5>4wLQ53FS9M~RX`qnA9$#FLVs4g^A`vh@~jxP#)>Ty{FU3K~l& z5wioxkbTuOodIlt65rM-{1xKGpj2?Tp#DCc_x;WX@n7UncK-#TxNo21U)f9RpeYOM zE7kYx?{07n%cIk+qMC-ycl}Jw-XN-m!ujn4ktupDZ^YDKEIW$i^ zHAz(X>9A1Tle3NybDq8d5-5Bwh8{;3J$vj|=;XWKsTyu~&*=B><*%k)Q+L{V`(F1+ z3wwsEaVz#|!a_DBZ`#nl$egM?_j#FtYw?{=F2v|;=%ML()$#1QDsv*K@d-|n@96MP$dQ>=x>PkYiR~v1?rDh+)s_m%i>}*Z0Ey zMEqfo60IFsPTz-3q(xc1nzr%fK!O=G?#ZboCNJ$vM#dev{1$HKJZx-rwQg-~{~X)* zV$dhiCY*X(!cwr$5>T|9f8pQTC(AwahHW8c!F2CWN4@!zik-AAhF+Yc;%5l^}uKBg<*-m=yoaxeUP0GEQN1?e=%PvEE?z)fP}tls%ZN1@)oBPwx@F{CG}d|>~dM@G}gAgKrM zaqp&b`E|tlu};!nmxr74HmQkP@*jccpTbJ3?EG=Y`WAePy+o6krZ-xe8=CCeJ`N{| zF9`4(&vR1f?bMj23_JWjy5s%iKrV1yq1pL)ruuiG&xc3NxG%#81+j)h>aK@T$`t!v zW@2weG54Jb@Eqr;`bXNhIU`@WGuNC5kA<^K)3)_L>$jHs0#G)3&2!qNb6l^es)oxLyWmJIk3UaZq?3z(S>DnvD=e?%Mw~>;B(CY8{F195wyM>dW4FE{@rMfYS;)F~5|HrQG?fCD!1>`c`}s z;6qhXQR?tP&&jl@$851r*4``RH_TJpQbOeW-ZG(mho*Bsx!giVs281)OVgLv3S=ti z!2$K((!lucI>CEHx1@TJe?a_?(yjJR?I&_uDK1Ms%m)_5Dp>2$;?^qG6i;VvdeW{gUoOkim{f zjzgU1q2{P|J#TY*nqWN9aSKIFz;;Qab~;r6rE+sglk}huE!U^-YPu!M zZo9T8be5gG$bC=_qqkGsk54B*DdZ7A?pD+FD=k*eEk{k2XSbbqBdOqz=tptdyx#Ld z)jSz{Cb{7F{Obu=z2=xW9`ZrQ3_Gan=y=4iJy&&N=-T#~;tLUfrbr!gwbby)-`!!@ z{)e>)Rw`@0EYmzUb*J0vqD2~%-b8XLW;A}=+oI&04*mK9L_RdoBk@0mFLZ}cUP#MC zPw>i|q5Q|?R5Pn599JRWZE5xY019dPoX0aXUR>N@iDUU6Bex=uxlX(>%`%q4b&KG+dhtNlxRiv{7)dpz}#e;bG>PMZ&`7lAXfCfu_-Lu_ZS(BOOf7IWz$J zOo|Gyv`|fgl7S0IC#8%qJilY~{nv@q3SV15@O}axM z3G!!aG_kO3k)99GHqTMK()>H&*fhI2FD~sbZY-5JRVE=UL_73Qa2wwg(ByO)mxuH% zAH-Tco|$PBaq3b?msWvu1H}|&#HKD(GB{8l9q@ZsL7-?C(B40YJV7elYLlkVkYX!% z`uS03oM-PhRy16bfbliAg5=a?(6kQ^{kL0*fMIH_YLJ#t`{bQAfG~R=PaW$@@54{2 zd_&ad(=YBcyOp!Gm_4x@UEIir%|-=WVMkgn2TkBF4r@Lf(Ph z)Mq7*dXvEP1Dx9Mrnh^jo9!!B@|ycgoIxy*{{X9$GKN=C{3L_Q_B016sjc{%Lh#}# z)-KW)fAw+w=^wYsiuatTjlZ5s9{mBXkELUN++#^;{o%3PcI< zCq*-)cQ)FEq%u0jmlN7Zkd||_4a9Mhdy2sS0EL+L7ZP}DS+HxZZYhKwRi-?dNkHMl 
z$Ut5?smTVNhS}?exD7YP(n+dY=zby7Cep61)yx~+C5`qBk+})lanCp$RM?jjy$3bH zU-<6THBqAYe$H)b~4JLha(`!nESo5JJE7^S?p(;WNwqjlSD%# ziWx}gRE&f8iUe|c{+)Gea=s_gl(pTu?YMz>`(3!sQ1;%L-5K@yasf5d#dm2muLKh$ zl1GBdtO;E7$Ono5gHOM-)a;?uY+OkN#A_sN>aEE7A5mPrik}a7noTc(W2<0U;jmGqBaM<=B8pDR>Elc7vNvDnJr&v4}Qm@@I%ObKzk<_SCPo`)cgI0;}yjC=u zU1w0!uA0zX2%=ccyCShtM)hx-INUy!OU0J=^Z3h0hf2Ep70tzs^lLuBCd_$k4%60v zu?D5Ac!$Dw))o`$aqACxIz+aQ0865*erF94+7$;pra0?anhuwD;7xK5?bzVIePUw> z)YQ4S5|!MsxN_J+7XYd4$e?yTlpaW?1B03gu=BvG*LPO-R-DVs*ICB z6?IKg!$z}f{YF6?OMo~fnU4WgM=Zqj1Cw0NhhaXoa-J*J<7APBY%O13TdL zXVWdCwoar*p>lmm9Fy*AG7lE`R_brF+v?VGNt|t2lRwUYIwS2-qokvBOs)??q#xx- z2{JgQNb3|$Gn^^{$Bgg>O#oHYbiF%QfZFPI@!ZCF+aiX+>_O@X>_;@q3+qi=Plr&l za`0TnB#nktm0`i`LH<<0=4Afb@Q?3M%c^MOmsch3?N|HCdd(Q_3El@n-6`+jxDmX| z9amJgld}uAn1A0hZ&UeD2VZM-ZGCMW?VO7&GbnU)IaX89gI-->r}(45x0B5t?Q2Q7 zEi9=DEOWs9T11R|ueYT$62wDQq!grTgo!5Jx5SM0AT73@g_}ekc%N>9v`upOQ;`EtWVI_f1YxBQv;mRJUTSb5Ni7Q zlsjsz3ECG1F%oUTIXTJapsJF1j^^vc>EZ1b5h_|0eMWctkt?a$8gr8&I46t&@6ZE) zve3L$d1+_i8SIU%rLLnT?bYmzj3fefmN62mXDkTGIK_6p7uN2z-D_MgaK1&2$h3;o zCfB;e03h8nv zxEup~Jh7Hbm7h_+Hn%qt+%=?gc~ItKxq;XN5t4efciV+4fy&)@FT=BGw)$i?=F%%i z^C32_VUZky3kO^rkVrYsYpCjSX@Tb63eg6IHo4)cAedV{!JY|ONm(J5HBa@O3jz*v z&vRX3Yxb?;H}K2HBF?tL=^T&U$eG&PPpX_}Gy#L*9}eGmaps=-=@uapKva-J`5!EE z7dRn|cnrAeI@OCC*!)GS>%JP+?O>h@31ITk({Om?K+0G)K_P*`AJTw12(_3rFBDs8 z9t^!*Qr`OMn(I$YNVcKLGE9>%2m>6IVtocH)@*d`J4}o0`fapU(fj(WSX<(QsH zGy#R5A6+)=X_gDwTh6*~%osrYjdoN4$cTJ+hjF^n=L7hBiJ~X|1?Ij#BVA$jpbkiC z-Y&IL*V-g=fsgp{x>s-e=4Ffj07A`m-Dm@qgT`7Rm2I^nE}1(JqC4o!Ib-}QArr-9cM-;jrb#4r=!l97GkXEko|I??wli2-M~JlTW=XAK62WzEgDt(n7^P5pHq+Hl zW*HRKz~6eRZmkyOY+7U&4u(aGaNbc!1%mY;5;AB4$YV)@DnNODIHps$256$-6i@+0 z02EbS0QJTxfZ~fF9NlqEsQ{vY3Mc>^eQ2NugPLI+&;*$k9RN#A4eLM;rf#4I(MST2 zoKOPN1DaqXIw0KSD99Nn11EvjyywMIH;Fa7KL%NHuuB#8p(jxsQMTyg&-dhE)36l4 z^oq;YVASq(>0r^W64L4L5@{363C|~PMou%2dXOq>-X_vKIU1cpLetB{Hqfb(c~7E7 zyq~W@RrC)D>ADlW;4gQ7H~Q<#gT7}csLA`Q=&OJ#fWtapiF{f&Ywp@knokxs?tJ-I zx1TcOZhNABCcO@j90!QC4GY4S7JeUwaiS&pc;LB-5Jv+b-MRPi^L^alo_`vr@dC?H 
zy1vlvY_yZBM`=9LPY5UhR*z;5^NvZ#`cMSAKBMADuHj4lZ%4ne8Q&rU9PTiA7?7Yj z$81&{9um{MLuGwsKZLFHLnXT`HxJ~qiC7Q{i1J3>xu6c)$5(eYUJKQ4tfpvXOJ`{1 zjNyiRs}6_Iiu0r34-uaZ$sVh6_m}qu-poY|gh6j)Bt>;qJ5`%tB(WLv=aK>J_6A3> zw|NOH(IF)FB}RW*`S(QEHHiFoYkR1DzBs44n$~|XNS3jy0+#V2k+kDzEJitDgVvB( z`Y?RNfrE|@v9F!=e+78sTGl*5EnZtICY;?{s>l%CP8C)>5ONd%PDub9amNn9W9Xb3 z@Jj>YTTh7^W!H;DvB`ht$88)}A)-}C`WVMhKZ`%aLC+Li4{yKJEHt%%t6WN)`kf6D{Vtgy@ETNJBX%= zRw)~!hE`=H5TqVRAmk59@GT$09xTx{pA1`S(W~j&eZ;FAc7>$~%&z81PSxeyfDSMK z#~I|XA4G5n4d0;kuM5y@bt|oRR`EWyq+Q(2WMsM1=AD`;G6MyHAr4a*ILP3UliG^~ zk9Ea(zr)M#5NrC2Yg6hsPXr;Ny^46%E!t)o*>1c#c|91o$)*-Pq;Xy$q4=l8Hy;sB zlY4s&?V%F`4S`u9gbeIyTW;ke7$C7ZJbF~vRz0}qwIwhQtuJo00NiJ#FK#FU8^jtN zuZXm3trGMD9n58OfOfZ4VV>%6MQnm|KpabT(h)zQJl0zB6L0Kz^lyw|U_D7CokpuJRuzRd~qgb@}fp_zzQ`Hon1Jn_=K zHiDBl-4{u>@RM5TnoW$-+uZq*Twb#)Fq902n>ZxDJdOuUV!J8=#}%x@wpNy?YtWD}|3D@Xv*AbiGGu@YSWN%B+&!eYLI}D=7zHbuG_KWLK}RN-h>W z^WndQCGeJqC7t4+xt4c#Wgvp9^2f;k0Ay$CO?nA9Ii@Qd2C3oi6WVImnwNyHZS97q zu-v7@Y~>_H+vdhW#z@9N7|6|cxgGk@7CgFNhOk4Rh^(X+I<$97d~R+=eEHX^zg#0J zD~@rT@@v$n5#^Q^I)0}e-i4<1ZE3 z*!VQTbEMl@M3L&3Og1#eFc)Mj4hpt2k@)a1-s7HfPf7*MV(6Y4(EK?Gi$=7L-*5rg z47=^=z0b_udz#*$EOOp5vzl!ROWP&gcc(Jm+vBNtc`6V4X}M3auAqQH1Y{nfft75o zpwukjzqVk|&l<>tbSmHwe_G}I7{<=p_rt{J`zGz})Bd=)62v}=ju-HtIjnW<)~f0k z_L^0U+K!;&7$H*b$DRPs1F1fMa(hrc4?6LNl@z`%&@QChBfb;q_Zu0JnHdYH+@vtU zUdFpkI`>5JHJz@rVp8rEVxZ8b?%`(x=kE^WL03IO}m4F!>-Ul_O7ykJf`A( zQ&H3{JWJx}{{XTytvY2k)~ENgNj#^^NhR6{P&aS~O!8~h8wY|iC;`+Qftmp0d{zGd z6D#d2!#O@(?8bYM>u+m^W%R>2MI*Q-g~iX0WF%+oT4Du0_cFHc(jh?3%cV0U@jweA zEU~msGOh}d>IlK7W{{2&9W%r7YSVa1>q@ixHMOn0w%gHIy0K+p^9+)uM>!{+YKMtD zKc;K`HPc|!V`kf7aN(nNjnt&E1tE}vF}RP$m>W;vFAD+wjb?0)bGp(=EzW=Kn+(?z ztiz)COYM_EsA|u5r^6M*cF%PqM=zTDBU~Kr1^8|{PzS2nStggK8%3H~krlpshcZ4e zcr1A!d)Fq{J}J`l4NFn+Eyb+fRh*M5Tg1__FhWX_Sb{qalmTbq*dxIjIPdml{${z| zCspwTHgF?TJl4Qju-%;v6DO{6%mFVp|CQpn`f;uQo3ocw+j(;@V3`luKi!$CzMqMVLOF+|(Grpm`{_*pbo;v+h#qr5 z5@!dBX6gXtw7&}4X+9y;{6_@RG`gPM@wPTD+;rnTYp0+LjWEc<*d<{{R|mciM%-cJRkzU>ZqSB8;@S$v7E3tC{hSg|9De 
zFEyL1WxIka+X>Pn9%H?<18mAu9G{uE=Od*sJwr~`be&=spGUif^<&|@msi2YKsf22 zdf;>jQ&RD~-XYO!B-8ZRZ4I+Lk+cTtQo)pnh#Vl}aT)DJfa)~A7U_5198VKpAME>m z%^ad~KX&ToGNh08hZW@>Ebvc?Be1-N(pcrvEv}^0Be#q*Pce~$G8ModAT759=hHn4 z0q?-)r#0nTpNf~pGC|?52fU0f3Y+Votr)g|5Lq%nB1ZeT9FNohdMuh%oVu(!%PIRz z5WEV&@v=gsf;;lORqg-)cAyN4JsLYba{mBZSqXdi;Ebj?Um!Oi_s;LqwgY<51gdk^ znLrX`bg1b8E_+l|0ZUcYZM1DR<5auPmu`i~9a|j~dX_u^>T8$Qv#j`&PtZp&S%IkC zj>l=+ZinjmfA|NzFfVvxOS-bPlTy`r)U_u2MbZA@KXM`d%u9tHrzfedn{m%H0h_IA zE2QZcx})TaSR|L_UB~5I{oIWHW9wG0AxoP%?c^CrVsv5b%0&QiR`wn}o+$3q#1@RQ zqPmZ;{J3F}{{Srku4BPI9oOXWrki)A_=f7)WQABmBymj|`l#GDZS)!G(t(|s9}umM z;Wn3jkKr-PY)||MG=uc0Ej20hNS^n^{volsvMkW6h?W);DZyRBMg*=1W!sVMiUep{ zc#iq}M{nWl`!#{()6L^D%fD=kAV!RQpb>+R-F>SdU0UCG{{T+Cv5{tmT_WL@L?Ej# zogN4xfIS+b$)p6dzM_vQWW+8i#@}H4hGIcHVxMr^>Snn{Y!x1<~B3 z#7QP3ZNQLnxao=l#;=O?rPF+Kq{(puTtRCPQ-T1A?<5f{$34b2@y-dXABfUhopKo0 z_lH)qh6{4M?~yK%l&Kzpi)3`55!!gA;Dg0p9hT99YCmasT@?Jr(c@zwa6g4uDFpZF zTCiz;7VvGI&Z!(pXo|U>;kQX41CN>8bGPR}9M6w5n@j%y5O{7|;kMpOiEWxl*u^@# z##%D6t{4o1$3kms#t7PugJ*lEDY)5vWpNus3uQdj%M5ndNhi{PJo*bgd&530Rq+O= zsK=tT>MW%bdC;>l`D3_d%Px9yF`o6;c>7ndzSAyzBd6`Y)b$iGM{gP2V+4=h$VNgc zb;%!+oIVJU%|aNj7)a6S006nsDM?4TV}?K+e7KWw;?RVT09ED&?(C)c*h@ zT(sT-@Kob$nk|_B0CFWy{Rb2Q*=;z_QCu1Lf8q4Vvy%GLKl8%&bOY!&kSGJX!LB3z z5nW;>BE!U*lzxIy1i*eT9#7(E1GnT?6g2N1!^B$MoY?8-Sdia|NkIJt0Cu@2t#GD4 zi7Mam^Q1^W>xAq70JRMOb^3+W)_Q%--MlSvEHONZ68-08QbLb%Ym#U9mk=9c(dGXD zflnXwY5?YZ8REYM==vbI)OCmsqk2uv^E)3f%>1&IBOoW;0~`TcyF7E|`SNJ`ERK4x zox>l*!$1yiiCQhl14OjcHCAJV`!%zKKKta_xcALi{?zz^5F4~BCLioy6(6ir0n+IA z*Sd|LpMPTddzK(WE33ZI(T>7E&!%gc{@U^3{{WeK)O-H`Xb=7~Kpn}*;MWpo_=$SJ zBpPO%o!Q=7ru-BLhwDHcz|Ve_#4d&6*}iMP63H3vwziO>{{ZAcBYw02+;r(&LGZSy z{{ZWJNvb!l-?O0oA)pT98Llw+K_%jCS5$oU#nUE#?|`%c+7)i#*9jkjv`Nr6+jSY# z{{RuxZWtVW9+({ly``?JVfI(NpHF3xWLfRpM(R-Ha&ibg!5B5iYJU&3>Aa-VVzdAf zUQI$dZ=~n@zHGQ3j%fi;PK&`lAJ!z*FLlS&wP1w)-`?=@dwypdS zf*&IDRlIo5NbSm6BIC083ayW;8UXCAZ*A`_qq~T*#~|FkYP3y{b#NGdBDl}^R~`>@ zo0~_vKDRSVG5-L^*w6=bvqZ-WBK{0N#<(~5XW%zozu7Rax|UWSLJnzxt>Rx3zlNDN 
zSY2JD(MZv~v~4f%mm6atLB{OxPaIZA_<5zbtCF20dK-y z9&1ks#ieVp2`0M|O?w;QgC)ZkbvWSTIP2P|n0_K?WV91o_=4GTc1i`c*RT%1yJOd= zA6fwH^xZ2=(e3`prCUX75a(n_5NFimtzu|?Fw!-XgZ%r~p>(Bi5|9vR0S`T?K*azpMtbIf zT$4VO#XYDhr7T1Q4^lDt`*6-d@M2 z6bC8VmZswe2AIUpIjjpUdK=P5KjZ6tbHn6ngw%BGFt<^ zLp{xR;m;A+_?J%meVAlq zc8X}98{(BXWNZ`odX8(0_-`G8OX3!=c;-jdBz2ZH+^&v_>w*pl8;Ih7JvQdz+S1-j ztH~A^Vg;Stg;i11l6V5Sj~i(hz9I0{&XHj09FrLL7bhDP!#Lm`pkjfYT9suCgb;ZI zatN+7z!zU@@LrK7a#qa%YV>{8ieQrO7p zoOhs<&Wk3z&rJB0X(ovJjdfDz_z9>P{#T)O(uX?EFLGcxHm@#8y@| znq=UXoi4)|c*xzj0NFfzv@&9I5}aTLC;DDV`v;!4X25`JE&@M==L}8&3OqS;#SGZ{op$s zcgIQs=K7qsHBarfLi_tO&yVe1U!COc8~nWO-M6kY#%Lcg#b}z8GkD*`aY7t<5Zaj$ zmud45NC_G08Ft|G#wv%3ET_?YQDNg~r(%*#2A^>d04Rb@$_e!%Mh84#3JKiJ@a~Mh zJ&inH;twE=U<-MNfojrff^(U zfsMi0xzBviO^iVYkKyGV#KpeJ zoMdDZgNm_sr%my_z0>~ysnx9wsXc_&>|~66SsDKT@0to{VL%3gOf7FnywyZMW7O_6 z)`+nt0Ec*NaKXXpjw^Qr^`K{)pNCOg-u=5()%7dM3JA_+5oJNnGZ<5kY~r;%Rxa-^ zG|dhR$*f|!X{4TL2w4npN4sLS4&0InBe0+fn$N@SA5)J@j@-4s_Ef6L)=ly64#>rh zPUZmaB#?fz?+ohK33!@C-QmBsy0jO%edB zrE4#3X>}ymK2&jpnWd0qE6<#i2Z4YHTnzDATJMi+^n2UQ8VhH*)S*az&jOK~i6_n# zM^FcU+z zO+p(*Sni`_V=Q}>h5$aJY46*ipbp|+1nE*~_qu$cq>ke1OW1}=G!n=;*y)A?c2Cfe z(yQOwd`5R`TMOMeZQmeVG%hAZJw8m7EIoK0)B!icn60AlR+si@!e$-L0LSlgcalAK z4r_D6+AJO((XF&=cWD;gNZ#F9n3iTgQc0u+4}`I0scYIGCvCm|0G}G3(cL1+$F}{{ zwvMoTMARVLgL9@^OZ73yBA?KTV0P8V_={b=wbX2UJENGQzSIM;p$ul%%= zo)eE!X#vn`dbXRRTbsRB;yY;i0_O#fqK>RjpcRv*_)AXHu0OOc+3)oO_pa`Uc?NqR zz*HW}1u!TLbHuvI+u`jvTEN{d^&Frh)h*ZOKky>8+B&R?%KU}|A8>e}44odwQL(pW z)inmxCy|SnD6yjd0J4g`6nm0=>zPl69t}XN6{N@%05)9CI3Gu#4*viR=xc=^4?H-B zgkU6a9zd2wOJ3A~Fw4t4N>! 
z07dy`fIDj9yq}10XJyuxaytfwSV{atxALjx1Gdd^AKD)fVjn)0r_6upo#U_mg%wa8 z7Nf3dIy{g?s$8wHxni+2fP^eD!F+8}bHF4M&N>?8E;X+iOA}k)_)%^na6>9x#Og=5 zkS#Dh5pcjVkVA3^Q z2|QIBLVP<7rNmog6Q#qnZb<+p#sGnibDwJH#&gd~0P{}`-1x&l(QLG4@cES5MzQ&e zcNW&iQ-jC{@vlwbPzQqe4@|n${9St5#j*%=X)bo58_{lV<=%F&05Me<&Ie=ZUh)-G zk^mVwKIVYAm4l zN#ilb(6EL>7?LL-v~P~8NC%8`^q>w~UGb)|;e97a)UM>V)P>yAhkHfZvq$BV5KOVb zEsT-Zpstff@QtOOlWC*szE%2Zllhj6S8GNA%5E7sWzSxRjwl0>)4Xxwn?<_Pb(?5s z)>`TqZJ|TuLrY9wi+HW>H!v#5A z#m0RzngE00*lGL|qswxko;z?5k&>Vi6oPtuwbW?(bb3~mX{JT@En!H9`~g7zbOEJ+ z$+dtW@ImNnp4Plir|47KSV|t^^7d7Z8)!gjNGF#gvMA(}Kn~Y@F{1c~%Kp}c}L#Q-uWqJ}kwC0OG@g-`;h9e^Dx%RF=+ z#M+y9Lfp$2*(_$%UPU`p$`@_8$T=W3Sd4W86anj51A|x|}rJFs}oVQP9L8ny19b#pYTnslu= zg-8h`gkekZpOdJ;9eJP)`fhpaKniP_*1SUYSMcaMMY`VVw+LleT2Cz_h#VOtY!Vkd z00AWXWYJ(cVO&Rsd|KMA_2!GK{hHfQwVk0`X&4aGBvUHy(pDSPrX^UVC@rfu!n?T+Lwg!wFCr9zzF^aBzPr0MGFSmHz;NH0AL=ov7X2 zNdv@}P+dx7itEjAg!7*$g4qQI2TWqLJ|{^9_kypjt!7CiS?v*mFk-CJ$AEB74>XV- zlBX&UO!G`!g#h*dev~jIW>rG$KvfweoB#)NT*r+x>y2B&xB7j&{{Ut(L`%LmW;OYp zJALpr=eB9s4eKlWElSyMH0k5Hfr$$wOdSbRgMv8V@m!C?yJ#TrUY5{Cv60ERkHqkXdv_+*f*9jCeKiU=- z7U#7w?Qz6+jPB{%5j#PvAR^XHtNC?EIL zYr;)(?htNj0Ai>Dsiz5?xzIBZ1R% zFF5>ZYzFuv)|d_rARomusp5bioOLyqW2OTk z{b&JeBHv%pZna7DTWh$U_S^kz8N;st^gTex$sH&ITpU+1{{RVommi5g;TD=Zh_0Li z6SPg_$V!$WU#H3kB=MdzngHqP2_2~bv`_&|$vr86xvp`wePMh<{gwTbD_E5TUkMXn zI3$KnqXz(YC)$8IQP+yEF`LPg$SNh;c5lwz*yA|o8S6k6bKHvLyep`9uId@B{8wQe zw9t?icw`L};OCWGmRxbytpIia@tWjx%}2$JuUyz_+H@6FFF$LG?Sp4B2ml-!JUptLO+ z6cb@-r=3YwFG@WL3y>Bww?FY*}x!Azu5X4}s6(fSc;()p4J}|MD!`BU?>CybU z?wt;s9>Ktt7ZLN1`ez|OMK#}V@s5nE_K2g`U`~$^SZ;@IQID3#9Y`zmpbj4OHO&gk zT=1`i?c{kRx`m?h3adkJaS#m3Nf;&3fIW{j+G+Z?hcC7E)K=x~Z{FNn$!l;tre#Ll zr(=LgLP!Np4tdWM0p=}lZDHX%oigIa-oc>NEo}8iw=7ldgDioW&&+VGh9|Ely?eY` zeUuYiT|o-0mxWqp13qCVL~jq- zC6+ebj{8ax&s^fV&BlPa<{B4-JU?sV4PL_UN4I73W4g4F;8sx+pam6(z$)ra-r28D z-g;(%-0*9wS+t)XUTZe-IkVI6;F5Ul)UjBnw(|G+X~_YHI2|k9fKbfp0A}h(1mo6# zx#hZV#HgUuFQ3Hd+I^FV<QqGQ?ASa!B$Gihg21Ju%m&Ytb@KS_x;6 
z_@!a^j|Q7Cz0!3`9;Tb$oxB@T-@Ht7DFd%5m(T_D@q=+}1%c)w*f*EB5Gr=N^HsjMQNCv&& z@+ck8Jn#;KtZDFE>mDY@_MPdCU2+@CiX{q(kVxy2aqC^p<>j86Z1=i^$!`Q>EhL8u z1u|@L-x9uW2V6#@_maDo-`d#2r5FPI}-BFH!Sk}D7n6omw5q4lD` zdic+Jqj#v=+FU_rCgVNGec>z*Ied_!)Q`hu`%n|xBl#U+XR zGC1TKt)}>+PQGhPYmoLgw~HZ5TYSFa!30xvjR|!L;?(SA`zEDs2!`Mcf~S)li^lld zxb4&saB9$8=&PBq@enr)rR$M>g8cBR$1zNCoWUmnx*wF`hjET*5sTw_KFNQg-xk`Q zW!w$)lG;K)RTO+q(Qmc?00_%trpD4-31W3s>y6lKXB`xi^r4iDFAw+#m`52{gPbyyu_=8Z>Z9F$|VW;U< zHu6Ozvs*cmNg5^%n8wuDT#i_hMtj$-f$K%U^Qe3$cK-kpE__vVtM+8KMYiW8$W$RX z?XU*=?%?&WN2Ue}=-YfnF1}nJHqtw7gmMgS&Kp1QEOf2hW`G$=MkxTG1Ja_FXf2_8 zo0k#A5|(#kfT}u@Xak8qy!f61XZJce-|73w#^3z}(>xkx)wK;v#xjDDY6T#(Rv15J zgCSr32E>u=gG>&yR?&2=a@*{hwYpqJ4&@FxR_VD&Qg@Tix;9xq} zL!L1}9$_`Mx1@NQ$3e2zZj4v9^GJr|M>to5Z_CQ75JBXY&20FYLU@bAH;00^lEO&z zmIveWQ9uuuUb^t}Bt9m&g62uHE-9C#aof;X4k zHjIurHJ2rw)L#=XlTVlIlIl8IOtTonfbqJce58O*0`=)Y9VU&d>hS9__-n*oQ{4Gq zX|}k5N+i&N6|#h#tfY*L@(o+jJTGme-OZ}pPc8nVaG@l+xFEj3bFlycQ8UgIbKZbD zYAFCwKp6VXzPEL5u;`ai*}D_ISz~a;SJx*u z7jvD`$bB=>RCZCou7sKZ;`Cs2?-gqrOJ{A-k5Pxy4x%{}eHn6p-lur2TThb5#WC^y zn;~0_V*Z%;3(gU%eV1UX)0x}5bLFi}zsyeBVX;+3GB>OzOaqF<+m>iV; zJhM@8cV}aDcc)pgTZ`Dj2$N~Ul?=h83*4wa*{*9&)3sj`cq>HEwBZ%Ll{KtCZMfci z<_rF^B#aCV#B$rao;^T4eF%{Rk@Bno1au(sE5f{2B)Uess$Nfjb*EgU5I&A0E^X%6 zn@mMOD$ZD7spkY5Rtp}@7Okeudv~WSz)xkf&r%56<}T$UBj(8T&pdUm1bABeUhz(o z;_X5)1R~BEZY{~=T*MflE8Ho3=cpv>#?`DllGQ^GpQPtUP1lh`+LR zC~dT@K3*(qz2RwYHa8 zz1sH|;GKAE$d8felD9Y*7$pcKbI|eliU9F=JaqcLucB#QDS+B}SAJ=7 z4$0UlNEpTUsA8#@;WE7E(Ne-l`CpafP zMFTtDr_!;sO>)m&wKo>wqmnbScpr%&kab5Li0pb1$l`$;K>E{q^q>wV+zI?=u1gx^ z#iv_a{apF3N7oFs(b;J>_BXfJ@fI>_60EXr#sNsyFc%%Lx#v942b_E<(ELH+t848u zShw<8REFXv2XgKxP@yA@%d`>(de^N|xE@>JVd5VRYN_F05dEFaa5`OqQ z9BwCYJpcoW^)8fL4%fvR zMZoZRyba>3$@PC0Y7xYm(4HCWQU}WBExUj`aOeVa+!N5(ywPyE$Y}SP)V?UzZ8aU@ z)e_cQMJfU)bRs|)4o>iLGHb7<2Qzbj;_H1~t>Mz(y0B(nE_ApNx~>N8=*oG{agS>1 z7w-xH`Of;^UDf_4TwLgLt*ygJb8j8QpFfohF+`442k{1SS0@=Jw>%Cdy!eBy?-)%w zOQE0dHQcI4(*i%nfs{OB;QPHke-CK>9W#WG&u=x#{L&L}@tI_I#|4*hCytmE@44(K 
z<`vHx@v5hX^=&TON3v;3>9=;aF+_x?pD9Ec#sC{~SduH*vx-*(#PtsXc!N{Y(@2(I zCf7o>RnydC{U-4c6EluLKys?25_$Z6v@smfaJl9BzNIFsp?Gh`7Pj(EEv{p@nmACr zijNyc;Z<@;3_v2fZxref_-De`w|aDI6q{LKNY?<|Ts9QocP`mKoiH%GDdQbOT(s4E zOQ^&4Z7$V{Tf&i+n-L7iNgsIgi~)dh262NkPg7$NHfzYaKQT!OLO7B4&PlR?dF|rEb)-bBSgD_2O#Ge;LryV zec^8nYTpoA+UCLrNVM-hW!M0DC6{WmV;Nu+f%UIMO)k5w z*DPoQlkqN#XAYBL;oTb1E#kV=ZMQ55wN)a2IoRzSaGg(l3hxQU7X!p?{6pfO327JF zbQ)CGM@ydO;%keRD;L>}g_FyEOR-a)KJR{buWFh&Zee?Q)I4%2MpMwM3=#N>R|AOg zZ;GDF!jWpaeU!J>+KemmmO(OASxjS3;cTz>P#Svg|%0Vr+__6I>j-#RJ8m^!H zm49rHY%#R}+pvy>3gD?7h`|&AljDXqehSqiw_Szpnh2GCQMzBpSK12{ zHxgdl`A(8XC%cwFjEoo+3`iW0PMlB$4-$B$=fWC=jif4%Wd@ybaWs2=a$pLvfO~RB z>s^ihg`{e-$v&ZOvpldpvlh?#r#WRFnLOvM7X#<}+dqo>4xHX9lKI{*EM78s{{Xv( zdEW6fP8)dnurS;XhP|rW!5SnQb>5gw%&}fYRg#4s|NecJ1(rq- zkp^ibq&pUs?oJh?Te?IRknWTc7HJ9TMv+FOcIlEvx*PF1-uHLz-(SxRvxBg6c86Kd z=N;Gey1obkK7IOWlU+B|DpzMxsubJ_?sdFf%8OAUNr8B^pL_xYX9~ujusRIMFF}L` zk2-!Qet9lCP1Fv0G$v6VzXfCYr&Jn7QXNR2H*Y^Bu&&%dFqa(L0ZZ(lzl<`CiY~rz zL}>D(4Pjd&8-JVagX~QiK3ZR~3aQR?qN05y-ZNFS+|_tb-P7I5E>YQy2U3j|qAL~t zqtHX~YaLt2%xO_b6kHOmw1FUk3_U`^oQv?V!&e38GcjRb$TXoXcupA2EDCVbq;%p6 ztQYYk6h+Bf@E36<$F?Laf>eMfs~qn`)+iu>X50gjmeCpo1Sm9t8l9w!)=a#nYh{Zl z+MLrA%#6Qd7un}U)<{9xM77Vd$O1G!`$OJ3O`1HFD7@;AfyH*S1@gbN8spY+dS}Q) z)^k#QKeR(EV(IQXFj^ko<++>5vJG-?b_O&kCBbeA49s&lgqhtyI&9CZ7uMc;;}o9_ z6_H~Qe(+73=AEj;7m6MgE2iY=K00~=#&4;oSNM)IzSOd5r&tKsHtqxT0F7H|oRtyT zxf#xFtVD!|$UW1d8(&g6sj-m2#~YQB50-rgl^axCn8Ufwe~yaD82-rQ;n67xe8-sx zs#w%k)LPzEmzr20PxQ?%uYRnJc}UO(PcgSBL_LC-P2T?W&Y7Xey;&-JC~NU$qv^y7 zjJyadE3TqhFQ0cki2U5K%j@uSU#Q)0EZFoi`f2RIEL6T@6HVud#q1aAyI!#wlHld@diiuYbv~M#Wl*4Ev8LY%gMFt(gAp+=F}K0S+m2 zScg%eCmz2mDgCHp(e%SlZArXNdcKlZ1@au|*2j(-bRt zQe2*f^-)4;Y!-vkHC9#FSOm*B_vHFCSh1hWoTFYu1VUfBF9$}vV+j*ApK@gWcH@XD zCx*y*CgPqv>T=Sl3#oG+=9{AY4E=gz5MM74?O>ut$25EsCL#aN5fbP*x8@(9%t%enhskaT*4PV_&B{evugc z07+HPk@E7SR*C8lBDTUdMI~*bc5Lx!{sLiw3mPgY(qBf+H+pkz$;wMfc?GX~?t4oZ zh|>3aRNR7}c;oGrVHA^h0P~x^d&~ozSbDT~)>~S?>{^^lspLwHyF3ybPK^>xIsFhF 
zOE;i;ATBFvw%4Wk+x^!;LMjbC*pcmC0aQ_zomYd6jd8jPhjK&yL5!RusB_WTI?VUo1e&TN>@Rl5+t-rYz@UotpgTQX$ohxB08M34(Q5NgVyeqm@pZqr^PHo- zDx80%Q6q6OGa@$YYiiWcw(xur2bzS}MI7XL$8qp)l|d2ue1 z_RFy@M-2I0+R~BojrcLYD3YMm!zPeSc70jwp4ERb-S0WH#((Px}MjGM#ymxY|;mQEiKxY9GRRZ6cu+wQP$o`w7F<=qrlY^VWuUKfL z@f7Dfb}pQTC(-{&I1jpWb}$*2*=|bRCr$d(LuVAjlFb482%FHPy|`0Yh4ndA3a2D# zz?y4f-?LrWHM&oTCuByD$UXubnWucF%EmsqUC5%3Z0AD}TUcG8EwKugDYrGm@Js0m zou2}n%NBLwgp*MnA%(x@O$h!19>U90jY3}vLA=G-{LkEDg0|M=x1$RjL5pyJ+4<^K z(1a2;!p0UZ<*&^_B*OMnN`dM0^Sz@<1d$&Vpk-v-&|F}v-+N%t9Jc_2_ib4G{w&6P zj)4$#=HcGsUTbYld>!`S2~%~49!-+NFx8aq(fsyx0kJu<_xt3mI$JAKf))LD`Wv?7 zMFi{U>#QEXn#Wn}8%=mN0Vs86D;r{Q- z4>xQb2EQZevav4(Z_0!`9(Y82y6;E zGaEi{gV6ZLbuWcdB5kaS9A?%2e2nh#$E?9k3ID_ z!b^WmaJsiAKKU*%jz#iL7t#<@6VOX|$^eF(uZ0Im(!H+}cjUW2-jhCZCl2y3WhuO5 zQe9;ZgMU9uxBMC|xgs$a3Y89d2Wt5*rBi$2ane7K`}LL*|7r6DJp_#*8l%xg@H(U7HTd%MU*4suWBo}La z;o{CaMznb*mtCf>NL+U?OA|epEw=mn^*9C7*BRh~)w-nkzK@Hx>V*z8=t?P*nq+2* zS<;vvs=YJJ#5DevRXF$%jYPbn(~sE7z1op<_zNI6 zN)!xCb7yZY9#L~SeuA~mOIRG@!fCDC3MH-B#hR)S=iM5tGW=kWn@6SUmNZjtB=D4b95QiY`sJ3fT9wbg_@Y2KDyky~8yw z`dvHD_?7ewlM^>vh|N)8c>e3O3wry6K&+a&p}#;}h{>odR?Tj(P^5w-L}3d~GiQIt z^o;^lH&UDT7f5UT(ipErWfecV(o!A;tSN+Uj0+E}n9Mxs-yBfw0?WjA| z@ejG8H;xOMPiD-3@5+dZ7Fh>7M!IW-z;_2YAVWYORE-QLEm;hSMW%@vQphVTiqxmG zd%qD}9{3k^Vb3XOIDf6p0q7~FLAS0@YdWl*sL>bi*~oguil9mxV8+s8W5&)g%`q=B zdSh52qpyPN)M3680Bx6mwoBpRNoXNyh~vy28c?wm-teL}zaH<8lj{YHHKtS=T&f&} zd}lTQnWbsQ!?`?JC2Sc4`QRuu?iq#{mIedp6|ev$O3S2&{k0#|)r4n7t@Pz}BXyU< zh;S%J+FzlZTq?Mj>c@GcC%BAWNl>el$Z(kd*=w3tz&z$dg>@_o4&9e^$XotJvUQHK zC-3sCAB)thG}lS>G!NNe!+?v^xdwAKO$s`w*^ir;rpe2nJjxVkFtntlrLLw=q`4=X zpRO;6*faf_RlZ>Sk~MNgk_GLG)lY5S(s$u|W!c9ko~2~+Y*tPRR>F#6kq+z1T9oqBroG-Q?T7!^y_Fndg0p2=iPHdpRCH`{z2SMqEj ztfUQj39YwvHZfwm1c-N%2JVanQQC3@#U%{)0-Pb2@sG9WS&3aVU7PW+ltdWPuO(YrwVd-2T$_~Wmmt{`sl4X*l~S4 zd)^4$T|UNCJH9PrdOHtA*Zb*Jq<)N|D9?+-9QNwXlMi?UVHA;=IekoiUN>w)(l5C#^Wo8|kLXpJv%nfVaJ~ zJDliKaRx$59!ZA&W9Nq0Kka;DyAJ71cc1)-2wNGRJZ8dLd8y#P|9y1d& zk|bXKb>$JJ!sS()W8!)x-|;SWl1L74<6hdadBuH3t5|&TZVEgC`+^9sr(PwZGNGC1 
zwnxhg{I32^wscD#Io5WCe}S)STn>ewfqbzoIF33;>|_E#DpQVdl%;_@g_Ri&(`bk~ z5-%-HSlmON8sDd4G&?o&0fg-<8D66*&H!slyfRSC9h!2%`%(kbP0guwATfnJ{!N#Y zRE_vb#VGVf2Yf$N;GC4!-!NEBdqogcZe_IG5H7Ki3V)2|>&3`kHV7HdRsVs&Q+3Y- zz__@7(k41yAF>~SkK_{C=KxNsIpjCp1CR6EWeADiRiqTcimcmaP4U-X44 zT7fMyIMiZG(N!0rH5_19hj8sbP(xxC_V5`h6O&;DBiuFl>^Mwe+`jUG&`<8U}wTjjGN^E$@YAGKmD}w9(knb*4U$l#^{WaCrbzFv< zNJlx!MD^il63u;3ZMJS9A*G;EGf}?Sr5U3_@KNC@&KiH>k19!!nUZLfmN`FI1Tuy9 zvbLh_DGP`O8Q4QyJ^7LoFQI!1oFuvA`QFQO@2jVn1&kEGb01`VuBE}*?0c5;;9BHa zSnqx?hoexe6dSyDcA=ZXnGy`xiwT(ySuEs)AgOaWqZ~5YSeKnuR0{YyQX{SFPKo@| z+bNFQFW2;Xy_Sx&n4`8S6&YfJ#YxmVSy+?F${|s_?CTJ|A>OlQrGpum`XCV~I@53nQeLL!M7=8b246u0{fxYbrvUNr)Ap zf#TN)YB@!$2Z~q&1%=?#qLg_$e9fccONyXf5G^3wi*a1F)ir zXN%*283U%96*AC54C+4|(@4w(RnY?Ig$|gKlc-t*J$@_>q-Bb`E5daC-B}R?Cm=HX zY6Lq2ET@DKbfSO`JBDhEbv_c;4jH5E2-AjU-iO#0f!r`2pB@lQpXv1PI)VtVWmPb9 z1PO-N;({TzNRR)fvxBpV{|qSp_kbN7h529V7>E$l;ps&EOKyKU%2BX_7&SH>ebJea zz|Oj|k*-k)alQHc7-KcEpoh;Liv=?}n@Slj@kQAQECZpXb-=b0Af5%@dcbC%l*kTN z&tJ(S0cMuc_8BWDg%g2M7Z}Ddr0VP#%FjaJ^nb@M7+jUKHCKoPlm+70A*l>>br4!5 zge}J7cVkKom^Ou?V0b`>ndnL|2as{Q#SU?^17Z7TMWfJ0?bnqZm>V%#W)EG90yqqb z0(<-_7UPlg+rnv`J3q87U(9*+gtTA5BosssUO<3gVulGr;_SL75ADV2B8&b7H^c?$ zu|}zg;m?#(vZ{(Fw_aNz<@4#)W8e+GW zE#wy)_b0v!*8bd0h|I;opOIF!;4?x6q!Y1mhYD(5{-$Pi}ISRx#< zCHE+%m}C*b>**wJX9}zn+cL4V)P&{k9NWKS9yP!F9;?th_D?afk6_S#~74F?7X_6SXaqy#=fb<&qFzF}8rZtxh zx)weYs1uG*-&C8C%hWmH)tW7{3nV*@9t{5Q$ShKDy*&wj(=gC`3GZNziLof{N8-({ zY+uuM*OIq1*0qlpHD{_kIAO6=)Osz8@Pcs;Fg3EaP;1a{eW0SvsT%#s{55qcY(}`Y zE-?vbUJ))jtb~z3b6#BTlixj+cC8;%1JZNqQ$$o;>&iM?OaXSCwrOv&*c3w!qu{1=Jt5MKK|_bms7JsZl5_Nml%7ozC$(Obzg~71(?acI zY)?v3?BB3egL;lf@p=v8Um5L%y>@IDJ&UrOhZq*cm9MC?<9@MY6m`#^>AwD5+2wE# z_tmqlOKk;aclqR}-b{!e0kUgUT%5^x*c!@KhPITIkI#2BkBEw;qYF%>3R5$V51jM= z0>;Pph?8;eXrL$Gw6cl(Fpoc0Y5RFqQoVxM{>VBVsNCJ3N@+HUG`jG_>f$3380YO3 zEi^x&oy5lH>wqTt8)Qx!UPkxWCi>J%4_Vo(y20OnnX`H-n|Rb8j?j~dI9sL6B3*MC zS`CEB3dr(QI8qW(6E|^e1$QyHgBpuq6lxNyzCKJePb-LuT@Q)|UVltfi>q(0Zx78UMX^MT>f)D6OvS%ije07j 
zz}oDe#qF<5=v%ljP}0k5JiKXVtu_9P9q-u%-qL>^cL?-HDAT*TUFlYh~CqFRY0p7|n4L(RnoEppj z_3|k|o%ROCqs$ayWiv98C_woc9MWg(#7o~Rpt{h2z_?54umg7M<*6qT%mp+js#r)@ zI7qig$WEpqXw-y1v|%LJvZ2!Ph+u->r!s-G{1JN~DZLn^lK;2x1*_i&ekf1y?1Eq; z$YVzT(Judky!`h79v?)eqr_yuc04P1d0KXe?XWG7U--YLC@O;K)7Jm$XK}n|*++0Z zLcz^;UV62cDZu_I%8pDt6PFc11~v2lb`b!v~2lSavc`Rq+v z@DMPi8Y=Z0sgK8wkHxC-OtqD$Xf0?6bJ`g0#sMsY&4hR0!u6Vam-bM|TJsh{R5 znH%B<2cV8^`d10_;qAez_S zwQ#ny=H@}qAjw@-8y0U_66{fRys5j^W&U@n!00FC`<|2qh85&;d~DPgOYvDH-+`ma zaP!RQmG=8L5~|;tQaW0yV@#N0s_n4h5^mVy_tBzfm83FZwI;N}{;emztRE#MRN}dr z>cc-qAQM{|_}D5Rzd)R!vsVj!QbYIRzWA2gw-nFT+n^RIoj^Jtc`Oxae0QQ1<ZfSv>V+x$w-(6`O=jjsiVaO98K1f0?=6J6;WA|B zbj5`L7E#}uf;T%v;y+Jw4k72o{98Q-h4QO6Z2NR5?^tNXqskzSH1AOS;<~qf_-)9#i4!XzshI(nXv85K%q+ zi-Oc|VM6;L{=QPA^D7g&x3v7#W#$T)1Zvep;>$lQ(nrP)$hO+eQgh)f*JcVn&&uz- z{xfd9s&#}l`;sR0hFuXEa|W?s*r)yaTIswpc%&P&`8mtwUXms2 z;%iAt6iP?tn{1)Q2lz42ip||g&0Gf~Vz1ro1K9LxCLQnQUK!Nh0Js65miLwPM>w9>wD+7eDNC zFC|`d@y`SIIW9Po@Fl#*4W@Wx*!a-Zf%|;dnYT?Vi+=%arp55G<0`8)`?Lu5!pNBJ zvrxN~i{v*}E9)p9rGY5h;VIP%&{5P2GP+T#QL5Myiix#PA#aLpB{8N~rDuw@*P)4r zo1tN$LcRG?gYz$60aFTiOm-sVT=~?9O)o+I?ozC|Ns(r*I+;Hs@wpWy9()r`K z8=0bYD<=8K@wg8G>0>tV13ii-QX}P*+8=g)+tM0hV*G*ax?VkPF+wqi zMz!-qV~q%-Zq-&m5MrN80i%)Rv%#0OUfXUk?KM(TZ6Vah~r zj0mXZCgw>mJ9Mn!t3b$=0l6f}C;ChymmH0&?WsoUlbx=hYu!4SMGmh+LREYFSiNGK zOxceq%>`^>5sldO4{2fAy!b&s8jk??{sBA4bTF|pTwN4AX;c8a^vz5lokm*+Lv6a` zH>BsYI9w@ND>({y=a13>+Yg)zB+ME-p(u(*F;jkK0?FSVP}ln){X7j{>zu@2hTPhKkFm}8J!*2f!0%unBHh&rc6kJafje`hI*=YfI#YuT*o}%qFw>yLkT6$p(zJ$W1xR>;9VgCjjufCM2|**SV+-v3 zJFqd<&I{6`oCwjq8#)o>OP#$o+}em(N!xX&KA<0+`|qW%R!%o zgNt$&GE#O?LFh9%!vKaY5$}-dU7t2-_a~bIlJ)whj{!7J?}p7P`pJouAV_CR2soRgO2D^{=~zyYdHN= zmzBudg5n^1ilg+Bvuk$mFK{0Ah3vzVUM9O&Kd9v)I5=cv8Ebdk@m($WeV)sE_E_ZZ z(Iuy&-^bMwG}1b9xp;?lk47lIxS`u*^6Z$Lvn*U(a@-P2m~XPJ5fkJdL+Zcu&`-6WT1#w)pec!2dcJFl^0t7lDsT$fQe~`l_ijO;%-|4}p=?|cnQ?kqL zB$86#QE0zm9Yda`KWv#gyy>2#!KH@r*k?@RmTBJ~cQ51Hg3)L$s>Elit7d!FbK)#^ z;^#v?6HyZRzd%bmt0^awm7B&9Kn)rsV)cad8=;D}C>LSL4BuCrJ|}z!jiMKoY?>27 
zJFkFwhhMXrgS=4^CDu{&>||H!os`+6^D#%03FL20EHmY^xcAdhvfTT{He5Y4u|0)~ z!%`4Ueu;~ccdRefoR%NClaDbj7>g3{^Ve_^iSCP?7vY==Bq&~dd2;Anx)R-P)>T?R zKDpm9oJ#CH(?D})o@h12O+Hv^D}Ug=mAu$21)@&g zXRf_tL5J4wRzGJlF&Xu$6tCkaxN@9XOBFju5;_U=Pzl$%pQ}l2cs({0F!_;K^|J6q z3di|o&ct_zMTwSi@;UjjPc3!LHpw?~OmhBZwt0_f7>{u!f(Qgsw}v^4mn|kUhR3YUsK z(tXD!g#-JvG!?Vux6m=+_~H5Z8Lj%ZZ>r1g0pl!VZ20int`>=Um#CweCb`5f3cC-p zMPY+QjO}}M)uwaU9oUMd!u+nWT}&lArr)BtJk2YVd@E$VOKq*>`**EvGl{Zd5+CU& zOS$28fM~{d=q{?1qv#=jy;lDVmiQSfTZV|7h{P`h4~tAm{ikH}g>$au^6mHq8)7MV zTLmP@Bx62Y)t;?sVn3&fJ&*M#&sf35Je=oMF`364%t=&*{Qw1X_yn9D9iqegg++7r z=lAUz2U|GCOV8+Hh+h*koC`wUwcW@c6@F}w@q1Z$Y`T5E$I@0MORd70Zeo`5IX=~r z!V#Q{psMJ?Uz(TsOlb#eIsP&q!v|)@i-<*aa?@qW6+~B`$OZQmtLx4M2m;@@D61+y- zv#Ah@8|%}7Lutk>KR-%(`T)Kj2@sb`YC?C0ZZ%f_unx2D3cipZUC>~UEwVmgYLLAs zc+;-*k-wD;i?BBaiOUMnP5quMLEF0KVGTX@wM*2TQCIs2k03q-L9lU``59b8Gp6 zaMTpeTW0!`SlZhhxg^2pCe9TZT&RIQK@xq_8>?$~n<%e$@Mwx`6ieWUeujFJVqx-qLE`VS{_RWNajy~S0lRPvka=0%}ICD#QFr^}sakf_)iaiQl| z*6QZ*Td@+mNi)GI;bZxrIEXa`4fx-BEKKugJ?4EROC6bIVj{R)KvqVSSr( zbe);A9mHC?I0`tG&8=|FbgnNm+#$L2?k=NN5f_+!`BYdzdf12(SArRujUh`ZsR^^n zHGbND0w*c}hZL&kU=4&V90hoi;TVIdo;@retY(M>knky-Q^X`x1i4f^#J$(`{)1r?h!aZF@_WWH%R(_lM*s; z2m=4l&nSzCRW+d}QP%S(nZS$AO!p~4ps=~%`}V)AYShW7CI*dHA#@RO->#Tc!5(`gHQZmKDPLgPwAE5YTa*^#ZtV$Fw7Q7652z$`JE4DomBLh?pOx-P2eFu*Pa ztM7hpAsa;jM5$u~@lc3dsfaUf-mVU-E0dkJr$^om1RxS7D!R$rMXlN*v!Z+0MGFj5 z_M3YE5z1mqq7-1>yRd&#>NliL=RxO(3JA#O2a82a>!e_3Q45e!U&+m;jP<5)nEmmJ zRwgd{wy{-Fh3%xsU^6x5y*EF)QmxE?fc+UQ;Y(z47V3<8P^ECReq+Hs%&$TN zNDY7(p{G5K5uS5miucvrX;DwSLYJQtNQE#FcUGMR*mT}8c*G44?-*v1K2)?y>R)?-J*zAMA!P{ zb<&2c;0ovgRz>Z*$uf9v%_6F|uJ#O#SsNWHX1ISUSPmX*;JY$w$G63`%fb#p6EKE| ztLc8(N;|cJs|$0{%HFq$*|~UO8|feIu;vajNUd*xRk4ZC`t8_UXZiwj%maCKY^BxY zmsS-%4lj?wa(z831M?db8LF!EuU;p7Ar>H_dWMFEg2*;vz5b=RvsmPI)sDDaLmrWV zAMo49c1o;E{~AqtosAsf~pgRr<#eTnI*=swTn&f1Q*3^S>{4^%U(l{nI(NRsZo zW7dROkM$0$*OaPDYh#OU^(lw-9?xmkhFuRec8POEbhT9V7hr-R6&7VP?P0}urm}2{ z3Y3()3VE~6 zxtd>RUsUYrN_>JO4J+a0quJQ%d4)Y4BnEHH2}5_&NW%zNW3@(TWeqUz1%9MbFSDrF 
zs%HBy4mNySEo%G~11#&cWc2Id!ZxxDvztkm#HMKQMaPK|3 zDoSk+x1yasWQRWOQ$?Q+;?%X`=yMt-zwQ;I_TLm^lNwmND0h0;U98jgL22BH8C{Q~ zt90(0Vc4c{IhK;pK}RI6iRraSf)b3CzMgs->SWa(o6tX3z_2%q*cY2_Pq`<>`58&% zc+^{jr>lY(>`Uok%x8Ye3^OR1qJy|mG=zhB{K=07-~PhkNvD)%&DrGk*Y!6h{Y zM(JGcp~7+Q7S7a+wNxoxN)?Romc+O}_QRp8oZS980n+%s<~qc=PF(r6P0?PH9y-I2 zt(^Q!06n&Q*7}Oq{yO+oDIotGgNisi3E_smX(z)0CZ};RgHhTzcoc-+UqB%UMX{gvQIq-uakPJ zQe(ZIJY2JQf*wkyRiajaOC%6I>C!Rux%;zugz@slQvU!ss>M;3-d2wxh>2v_<>JiJNs2k{oKdP#2l|*&cV+uk&403T9k* zL_RtD>w6wUg;x?fE`o&X;qE*Q?QGEvI*69ls*xsS-%K<^GIui$pSE zrazp&7K3@v2H0CB5*38}wv1V+^%zqAWq@|7S5dt(;NyzULGlh0H*LYG#Go-KNDesm z8X+^o10WkgeXK#Ca2(?UICUaLxT%3NBWPbC3!q2@s`Z8*5zQ$Q0!nfK`zg|s3^)Xe zAz;3*2Urwv09})*)o{Z9)Cd2+5C7y|pw8g^N-|S%`#)tBZjS)Nu^r;BZek0L=RTVS zrYd6~+>_OJ;MnX)Og6IjcPRim=8v7&5THm*S%fVHdfZk)TY@K9_^(3X(X1cZj{tmw z^{n9T>AlVs5*r~W3f5Y@cs@u#8$}x&h0!@CYy3!|xPlBj#;1Uc=%A!&h#TS&2;z+- zb3VmfA1Ru4xyzm*#SkcAe{H$yqF~XYXZL}Vr`MW%gkX3PKE1Ls%+*Z-Wu~dt#Xx}8 zz)s&4JhtngtHXL<*eiaDbaWykO)EFg?Ex0-{(jN8`e&P{EIW4@2i-zlXadGNQKC|5pUeW?IieA0ucgKo;bn}}Of{@krTbaVi1&^YYYQx41*V_v4b4zJ5$B|eC zNFwKxF8z}iN`b|urgl@>e}VF(8H9oK7wWLstcJbT>Rt!%IbyeLQcK5YP8OQ&5r-Cw z36HL?*kc04cdPi|l(X+XnitGg#AUhA)^Uz8f!qy^X6d3jMI_N>G^zAK8F8ns?2gl1?b`qZMYVjIiRYE#>%}kK3GKgGzL}^@VO-inAPlfoszR z?`Lwl?xkz}^ADwRZ?IC5=LUTe%7@ct_};rceZU6h@1ByCGyeIQn)A`QA?;qly<)t? z5~Tii#~Jg841S>f zj254r`JLy-UNApT#A|Iyk_L=2BIIq9fvXJ9Gew=rK^pcLFHo)QK z6{Y*7e6oOh?sH)pl<-30v(kAz^A&!Fi8U>Rf(zsNMDD$5@^|5rqLP}Su=ZAyIXg1* zy&_#eJl8blCXUc+h}kxlE9SvH(w`atk@~yfbee;3A*Fi%`>dZLGDEZ4D@L_CQu&e! 
zPU?Fp{HcB%3dRSzlemqiEt%AT08@tm!l_Q02ewIg#J|F%>3eNsuDwIse)9 zr%Oarf`pFGf_;$aMH=$pA&HGtudI@5R zl@{qzak;8L&SRD1phB64i{G!xy2ik~yA{!kcTE~l0xdnX&TuX8F;TB-#j;kE^%T7_ zx_jN`+Uxp4=A|0IF1n?NZ3~%e9-?+#MBF3E0fo8=+F(_N+b<^AE;)9Y z78^r`-Bn>kwn(fZZ4AA$0vOpp?FE?l#3L{yNUsc3wYw@7A?W|DiJ&H^1n@4d%nlUH z|I>1?1IT-Rni!TgWF(+m8vH2%E*QB$258%yfLFVp2q!M70ab#w%tW>j1z;mxNf1Dy zZ6Ql^by-bFAlL7o((pePOShcyY)>R?|DRoBnbM2Ldh~ljDdIi z93hJc40jC%5;!5Sk&O)sq!-7-gW&k81n%H6?Vk=3#(n^20PX1hW2&-XH3E)k3M6nu zrR(aj!kt$Ivq_R=?V^*%PYc8(@oX2%Q*I5 zxg?IAEvSnD-hfN1Gf?jd%01;Zlsk^(^QAu^O*QP|>-x#@--7NSyL9=CYXQqq7>B!s z_sWp=rYXu&(F&zvVu#|zlq~u6)dthXg7>%&i5XYl$QCS^R$bc|R9M$s9r-$Jd*O#_ z)7sjpnFcd2BLeG44t?pRW^=>u?rM9(Qvo9i#W_*F-yNjD8Astgq;ofV6d!)4rjNdH z9#gn|YYa4kj9^g>g1~vnYfYGgjjvOn9pPSn&FkA&QtjwG`^-!l|+;)LPuS}($kcpVdCxUUWbjeE;;j?R!dfi=!C#1eG1EGGuXKM z{+if@o6)BcJXso5q}v8|U9y%rb~<-+hmoLJ+g$$%mnF(WA*Ab-75XRC?<(LaG0MQy%kf9S60sIa9~nIfc(}fy z%pbMfL*mLSNP@joYO&8q&y-8#E%%$UnItR9TNd1?-PcD%@zTlCjhM@O+!KoicrzMY zR^PQnu%RX3DSFR7N4#l+mUMESF4wkg@ExNd)0xoZ%)+w(@DI|=$}wj7R92@xOkQ@) z70g^=uD+E^U*AMkC|u}^S<7Tt-szzTjr+Iv^1>`Us_?_8)yJrX$I`ZvWLGNy_i(kN zIUDhz%)nMljgz!iNmLV!&ACS0H*iJ|;-O^zG_;EIz#jfdUHx`VYz(^Bi2nkPm9yzI zhGAHv!(Kssl$#Ig-Sd7=+3RojE}AVo>yFL+sfpF^?6l82d7{8p)JJ$5 z>xRCBeg6Rc_!v_P`uwX7bQ|xU>;)2Y(&N?r|BtA*jEbs%!-e!cAF#4xB{06s3cqx*l*cc>DB_xSGEEdoN`laFmW~B=|J=SiEMQD};U41_~ z#MgLGj;i-{?!|uQ9EK0PYmM^w@RL{|K{SQ0hiMZ$|J$mk0FfStiwmsq;h=_3<#hcKRBd=m5vMHX2%(DTLxt|@;lUs)688E z0gvw?v_YT{!(DD7FE)@lB5nY{23?UeVTHG#EUoU?uE3V?4_DEU+!kV$aKMsJIIvhY zs;)QrNyqm|?SPRe3!}%CvPqM?`U#exT?EL@*L=|Sioj6-g5yJ=jN|_* zBPYWFgXd4*fwdlTP(}aF+9VuscUFA$PtPGhIlTa+@s~?CLFCY1mH^WbLBS#}KWjMK za45MBn7`sGFM$TmAmqw=9&%E^i&`C|j+p2wZ0dkB8Jq9GMt`aV`jZWJ|NoPPfY<*2 z7w>{kf}nJhI|ZDT2zSN*rqutvIUk+}LR|nSix2Dw;P;HVN}wu$;wa>BRf)i;x_&kR zV-*!Nrz|bJL}-<&L?^kLiW=Pz@tC}s6xfKEL`Qw62v@aW|C&N^ z6kyEXm(W4&dz}U&Bvl6S0`Lq`lw~3f(BlKn3n_B@k~EX>5`B3Hl%P)a9~pv_E)n>T zA_Kt_WOYS23dJmBkW9e`(zHI8({BkGE-kp6w#d<0Ygg>U#pPaRVPe&H-zxO+@y9OO 
zXlutCd}!f5Nm*XwXhi&Op{Y~g^e;r&`tY5jhaGRjj99IP{ABLk3Nq^J~84Mjjv3f=4G^Uy4$ce%<4A zfMWRZXN8T(N97xoP&|>tVaD4lyD!M#K|N)KK5&ImJI}gA96(f_P6KIXBCe1zq~(< ztavP*3$kV~M3O8l?aoo!{l+9bPBZpgPcOQ&R~GOYJ)rOvqOWMTF8r!xLm&Ew9i<=R z#Jpd$Q3Y)MMtv!7l!Yf&2N^z~`rhUSP9EI-Fu*Oz__CV?_9B&-E-j^Nc zM8R8AViZHJ+rz9dR(_|f@!#RqG3-x0-c9{-;!NafRXR7ntr~+IGnu8^jt!597+_Y# zm5k>z&q1Om$?V1W{DgRmiVPVO$q~Oh<&0^+dN8=Jk5hiv!B$jcQT(Rrr^#ljKSX3U zhJ#Atxz?x-`>eT3N?5;vi8rML;|={1zqIc%`f+SvjJq%K!hjMLMLVMn)DvF`x4X(r zotV}9>}YJvF!&fHPoP`l^cr=Xcf*JPRv#(Vy+bDOX>t5mWBFCKFky!1CRW84PLGHa z^h*fHM*mc1Re5Wf9GDZ>`v8fLHWFp&7ojy-Kjr&ytKiUkUlH$Q9L7QSdrKOXQe7rO zq6Z-J7rZV&R(O?fQU><4r$i+)k2{LypA39(tNioaGPBf^dEH*I z+lXgQYmYu4Pnn}6Gk#}iNh8pC4y@Efmzxu>XTO%y!+u2}w0pFn{xrPOy1DO=s{F&=fTFFKtq55C?x#Wxvy5l)T z!<;v8O~+c@yOQVRE9?){)5nFnU%LD7SOkX3CztkLT=#PcxUGk{7Q=A3EZobkPLaPr#jNn8a? z<%<*KC0A#v&(3Ijy0A^-u8`1sz)+;n^ zDXfEaEH-*y;lJIM`|OST2mh(lPZ=Pf(`pNQG}}act@E2HW3sCm7b>P}NkM7(OtAIJ zb2sI5E1W5QIGJFMdOcW1V~d<}?Lu@9KJ~03-3_sBp`nDc$ky>DtZR2zP22-TEVcr9 z?o~?_=)KKwn$R#^%8_P<=vSTkf!Pqd?1T9eOVa{4g}bLkgtKj{ zWwEj?FiMH;+p-YFmuV3`6Ew&O;zP!5o{7$G68JMsOc2zm?riEv#7}eM4H5r$jNl7L z`fr&2TSh~|f%oJ)5ioGJ{$~e-5C1pw;Yaa719R#ulzy;iKo5xLS-J$*WH5{kjR`ay z2B^Z|Ye8{?yrLf9#-Jy`#sixEz#4Cxa940`Ka=Hw&{OjO@CyH?uNuZ5aG&4#cRs;~ z?m!9ukLAz-JVgfOvjO-^peHdk5^T0wv4&iw76GHg|Hd}>5Qw5L|5=}n5CL}7LO4tS zE?$5KH~^ox(N!IAbQzA!^9vIG-WdGL@)_KctWK15@O_AH-@AtUdn1r^W(9g~1Z4i5 zx@n7I;sm9G2M*TQsy2D&eO$;Ix;(&!3)8=+i3h45@^}yjKyP%&lxrV=03rpFVz*S;k$_2#TvBB#bXCb301%Wb;*C~MCYcSbvKQB!MB8!{?xbcIM z5dKY)f}rk{yxB|MwlI5I)^IrsYIEZ#*#c?3bNGcTX9rQdv;yo?Tm|K zaCdGPV}vitf#Ob>%S1_#=-_l-porkuT%oL+O^V2`=i_?X%2j?ua@RHFlnr(3RR=zQ zkTw~pv^^<;k5net(XR^W<(m_F`z4@dof}*HYC&{R@S65c+o((TodB3}KuZln&}ww| zfR*<}42gW?Dknom@D618(R`*ITo|(?Wp^VVMmJ=A?9iL3PC^1ULS_dKv7J^pCd{X1 zQ*H??U>HY4JqC?WHaoBTX)RLyex~k_Chk_EO*3{MX5YL`gCX7g%x7=S@5r$5abG1b zW(IUA8L;x;P(-4nji0^^IHgG!&yNUuHIx-nZks=Uhmf2l7JBj>++RO+Q+34NIk8~B zF;nTJGE;jh9Dv1I)q6$-ZN+R$Q;!lMDlJy2Hz zQ_s`xBYyc$pq6*e+9I0D@E0R|=oyLMivdzF=Pm8|M8@t%hy$6s^Yc--=()Z^(Jhxh 
zq3p&tKUA?wwox+Sg zYds@=e=T=uUhIo_{L&`8<}Kqb%I^@n-osY?vqg^kt|C5czYeLyM-~o+w!lqoL)TuR z{)~&R7gCOn_S6@-E1x}BcU^@6WMd>*QET{$pq_hzh61;5KE!5iA9pMy$k@6>JVZy8 zTE?QlZsVJ!Cwk@cEX4Lno^+YFsVcRB>zfaethU>qzl8s=C3VG4+<5`w4MdODa3i+W z)KTniE`az>;V zkn33g{(z+ZE-0wQ+bo=#tXVaJnUdxGY*0OA_KyS%6odsTwx?;sv^LB&;v{Dhxa+74 z%xCF3IL{*NLuQVq@4befc~B;2$}20&o<~hQZ`cQRVsQ-{Vgr#*LSQ##0h|7)0#7rK zF5sIBA5wgILqH3wCGsB(2VnpD!mP_0ntWRKJ;NsmsuL*S|L(e3aX!h@QO%19fDZlkgNed7D_zqB zsoY{jU1jP(0q}p(v^fmUG#s#*ht`qM^T<&4*jF8XuRI@QyLwQ+1Ja|yD1hInM za(aLf!Sx+>IRFRp^3ziXj1c)NFsv1z(nG=om~wf5kRaK1K;TmWp5kkwN}gPN0BV^Y z3sZyGH1C4SMn(W!?|~&(1qna^1fo&OJd2Jj8Z*@<+Jvo~+)&t#7w#`)#hl!YF<8geE)0?kaAc5> z7~u=gvKI4yWiGX~ATRzIvEiY;@l(*6T10PHGoXp;FgU>SKdtJHiOM3i2LxWE*n@E| z;DHFDbYKms`}q#gEA;`TdJ+zfyHP7IXSR*Q#b1OQ>s8o2<5^q%1!6Q4mj4JW#ojqT zNsXw={V{+c%NxwMoLlrzB)Yp7|ISO__V2?5LD7LC<^ctR;`;Mk^2j$`-1B;J&bP^Z z2=fQG1VdyW$go?BY>uz%V*Hu*;42X}1MR-sh}fV-R`1Ut|2(A+xmu(L+@vKpfhW5} zMXA2EXny%RmZbr2`A7IS;^~0nXyLqpvuK-|e_ObJzDY-Yg>?D?`Dygst_U)i8*7n; zKDs^%iBih6&%EFIxvo3Bq42n2h^``eAQD1#dsG1Bkqr?MD~uGI_g|#@3!s&Q0dM4X zBMn6Re*uT>pI3O+aQm=NZ#Zwr{2#8s-!I=7h7A06!+wNI`wP%#p)0RfVvDa%*(*QQ zd^judKTFz&KF%&c|2$w0J%ezUFVQ-j%Yt-<@)PHWvw}?Gy(dzZZRFnuj+DLfMt6zcXQ$aMZu8uGj_DtFf7tc3X zki|#I_UE7Hc}Oi%<(GOfm>rCtpeExy#+(%9U|M~4lItOON89i{4dYr3`M?@JX+q?) 
zo3OrC>y!g^+}2A(N`@~X&uo2NVC9cV#{8@1a{5PPit;oev3SbrVp)9sj#0n@?<)xh z%LcC?SThIX{(VsdDzEvWcARZ`Q?vlg9O}d=o)#(lTE-ezjk6G zVWp1`mlT4TtK1QR#Qj``?3&hdXM9gm4f>&v)eDP81n9M_1RG98qPW8ixD`-%3a7QC-fpiY)gTr53~4B}h_ z>vpWHyJ5(7<%{|{w>7gT7mT|#vm1mj255p+SxqutbsfoUi+|ObyC$JONBWkRr;^9r z+!S?S9g)>$CZ^#BC;XDZO9bmXL|D)8R`vZ2l0rH)hP>rxj|Qbv$iqm`ZLVPFLponZ zj|T`TQPed|=}U{I(!SXv<%=eJha00!ZXA{H>sJ2>>rfMhZ;Fndz&HFfZzV~sdQ_$Z znRi}*3Nz;y*%GE`f|T*Yos)ic>|zCQbK-|w4w&73Q*UXM`;^Y`{6;9i{A+bA8su`U z=b=c*yBfY-a{Ym&(W_0egnf8%jg!)UTEF-eW5|P!aM~YJABY5*kY%!6?tEY20vv6G zn@DZ7pq1J3kFEHnnde-FdO&!Jf~gsljjOrNf;L3-LkVLyH zh|og(3xr)!aDAvTG-NSwyLgFY)4cg)V3G<#dQkE?auLZ&qqV(z@@#Ax{e7+9az}62 z@5JG9SD?b|c99SKQ4Vt4AHapUUk!!Q621wWwuoWXab1EU$yb|Pgtzn;r|SlV5Aq(SMhpFgnfFIOAEg1ez$TWbSN5c%rzgJh#nPmpM#r2k9$M0&rSlIn0M&it z0+no2m5&N+e{O7TYotDD493_t$dSG!-lindG!_m>q}YR>VTshQq*x53tC^|<_9n*Z z5WwOMU2^P)Js7gF5=)F2?VRyo+u8Lh#qoBaruh4o%t@#)WYl@Hzs1Gb%c^Mc^#SFW z*1$$Uz#}H$PI)~TP3ie|P2qFZ3zz|Ix93f)7F6iW+2Fb{yp7X+L)czzOqNzy%Y1x0 z4TdbGD$Bza(MKcj7qAF;mVwe2B7SsYgk)_b*-%6RA~?2Ul3A$CoH5Q4cxcK$U~oZT z(;>LPKAxwTfFB?E_XyzF1@jpRQ`5jDwls6DO%mxFIcJ%@tB?D$VR@OkW5NJKC0dD~ zr3mD-5BNk;Yv6Qt++X}Hn!tc8PWI8Odn7R|bDi3F_~b*-Z)s{F@ngl;;)xDft52(( zu6LGA$5QJjkwqb4`Elr0>-JeuC>xT;p?|3QB1M7IfN`3(jn$rK#vmzCgAIu*>pz#m&E5BpFj;`ejZr1W zjydP`hudQb4D)4lxzeXiyynls0rAHgTjva+_4W#?6Fe*eCO-1Gm|K$$k!o+Tn`Dr7 zd?}HY)isTIE89MpSx}U3EG{k08_MpWw}Q-8(Qy;If&Z#N1&863ujKk|9VT)&Q8Zbr zflS|^riTR=k$*z#3AGszD~J7V^7`q89tKb8V8uJ^WYys9zkru_bgwpKE!!^V;KJHF zGN`L`>CM#8rz!MIQ4Ofld9Y-BdcFkcdASw>O(nVY0K&h(N)T!JpT$Igat3j9#67WQ zS-!hBVs|eq1t|0@+rR%bvG9KTgdSl4;hm8deDQ)1$Ac{VN{tBZ$6fhv+O1@jZuCUl z&4)=O8psbtC@SLRfWZg6h4L#>x9#@6w8pn(lrETYuSu9m6o7dOEYxG23^wN?+Z2wF zW0VK^)~LHdFfG(cKdqKZaR5csRZ7AmFXfGk>$}eXNGird`S-}NtGoltp0oYGE))dI zcxkserI>XG&(l1#R0scLVKq^DRhZ88cM!KOiOt=(8|K=NI6C|`3kaY#*t)k2m_KMj zN`?CEp=0&2!QsCE&w12>dGOUTAjg>TJz1>oS5shOpo9P4nHv??_oHx47uUq{H(42Vne3CV{J@Ks}PY%!+{|p z|9+bEQ6U1=7&wKM_@8)y@Bc_8tZy{OP9%QVv==-F#>V^4BojRCGJfjHjTctULrG?@ zWyZvz6zwvy-n+s+D21Y-dljB0U3V=|_c}O~ 
z!7Wlr;~Z#JJ4Cef`?+tNSUOhx3j%w!9pU#8fwUj%nrO2xW~$$#$mExJh9&bL6E;A? zQ*Ts`*#*R{r2578bIrY($meu1UN82>NPzt~Z%U#7)lPWOWcSNgdR@VXLP5U0uA^PQ zH8tDT(hDIEN!HzoPOD4U`RhW*%=GQdB$*-K{;%S1YW0gv*DDWXln**7Nbo%=YY!c? zZD3cfw8QF{(2*P{@fBEXzQ#t=^=rAgvDZ;_Mu5=ktuJMclQZ13%6HbJcP~m#b$%|As2F)f~2A z>=S~Zg80K+H(P${WH38qBG@kJOmrlPlA!8E{LW%_L%gU*=X)#hKcj#f_~r%8HfNVE zD{LZ7#G^YRhNkxma>+etBf}6yd|=25QCEIotJa){djsLAmmjkv23Sq_g*h5DLD-QH z{Oa3CuE-2iSS|bZ3^cwQyv_f7@9|ad1W$9LrO^V2QJPzhh|cH4 zitl;mxt!Ivx8;!*oZn~7dZ3teVQCBe)clp5cH3Oj`N|{(ZkMLTDl(V&n?{w25Be+k ztHf3zZTPWz3Zf0s3GKQ$doU-YfyF)`x|r3xQhMWCz?RX6z9;@&JiUXL-lbd?&?7*u zO8H^t@O!`0%?iJQ_4nIU<<`l3_+87ed?=MFs}Gd&7@X(hbOZ;&;lM_!?3AyP-VU3k z&oZC(%53bfNO;^C1;o`xQH=)SPaBmVZe_m%r1jUn2WOrUoC*n z9QXdcpO^Z9kP5O56vFO=4Cb^qt>Ub_N-ftlw2k|5m741FO+M_SOwCaJL}tc8S&5$# zh~O)uymJ#S*YCAOY5duw*}%%#LX0CrjxA%~>!bBKv%+BER%U5GF^2^FZXbu!R8K6X zbvF%xY<1oThLkWSBE11I=laMKMzH&Q)abbso2eO!v-_0O5!?)K0#@_ncv>mP-Jg2< z6^O46*u!r{i>^pgrRk3dQd^y31qNhv>E1N*+nLUJFRAMCn4~Dy+{Knu3V%ybT6Pi0 zk_jD`VCnXi#Qt{b>*t2`^;0Ov70iBXNsHUQ^QErreXpgZ)>g#BBuH2eRg{3z^g`Q#&@$s^hBBG%5q%;5imO?(}! zO7(NiuOaai-<6BUouXNt7i##-qEGpDj0+J0pb9Tw+9|r8ruvqdor$#zhM)I2Oq)H! z>}&+06m$3xiDO8xj;;^iTQzhsk7Ia1ND#s+>-$MOFxH}aJqq;t=-ca56j_wcb4uUc zVr6ZB4lMbyZ?EG#SJmxeFsk+R z%1EAx)0$m=o(lB%Ap+EsyvMHC8!Wjt4yzZN=G!u&^i9O>_piDZWA#(Ek7v_!4G#$A z=u;M6C8z{?e7p@VQn*xSBrAwJ;p2QONoXB25>6u9!}V5oS>HbtBj(#-XYFR0-$%F-|A(RLrjOKk3y8jC$G4-Df+ zi{IT0O5P<}&$^CF4t=zWnug3T*rI#5x_+!0rTT5~GVbj{93J;W?W2STFY~Hy{h>{_ z=EJ+UufVcuoX9Yh)i=_B{zg1C+;Ml5yeL2cPEd2pwsT%XvtA?VzQkP0Xd6!*oyzYu z1_oacPK9qZsm&S5s>eW>>WcKztIgRpM@N~m+ScLpKX?roXsf@!l?Bi-fkY8T^Yu>P*xb19|o$pt9cMaSxzTZwKxgb1ss*>^TIFpL3KmldU}1? 
zoZ6v_1mZg8b3XeGGCS*D;$~6fnu9`m#C5D6*v-`%y!P$_YiV=k2eD8K==1ceVtKV&+tUIDRDpe!-7 z7dR%2C%UgsQi|!y9!ucf{;+WuqCY8gE0lC^nBWO!czbs~A6H0~pmt8M zG`E5{Q`l6WI24(;*2k%~iRSeAjSM!({3#=Q3f-erV8z-|UTc%lqR2>yYE!Ui_3_e2 zb;afon$Gyn82gSs$V`@}*wR?nNu0YihD(f85_B7N>>~Q>sdm>>h0`yI zFuL54mV+Hzh$Fr-G$;a=@@uV#ksSkGMN+TC)HtmAPktjtlN=h#YDOHz;f27p;q4pUI5|ism_m5U0D??SE|PwS|PyjK5_7(Iaa3S(b>| zN68|ui+v;F2D{cG_q&TCwJKy10NFx2CQ5=15EKmn4kiy46v9|1HEgoc%p+d22XyS4 zOv6WU8+-`isB0I~n%b#vuWN6zzZmi&u}U95C&^tVxlxe-TW284JjM9-tRc2JiF7xB z`CQEHrTUEdB*)3sz%*?=PMr%&AHIt5WBGFx>V<2UYIHZ0b$~Ft$a*eA#4fqK!a^o- z=}mQAm^t)&m~Uq%iP!@UmlCq1q08QV!g#RR#EHY~q$88+cAW}7HSq#MY7=eHy(v_1 z6?Nr<(eh&ZaerOK#9g59k9EE=wB09j`3ToeI?@9X;T&5bR+^PvjkssJNVBh$0uyHF zozz)Fo6{-4Uf8JGl)VF+{p`xY_9@*`?=VeCe4HGy3KFoU2KR@i`jfr;oWX;czBCP6 z`gCGm$LnK4c&@d~BoUR)tw)cKJRWuDeMu)LtaNXin6Xdo&4mq#F{~V1gf{9rXj-BX zZC{>=49@cYu`(_Z=hg_IP&_Vg$hTli=speH;gh8-*+jnL?cV!|^Qm<}m8T(5%Q~M> zAvI5aIevREq$Umv_2YvYynpvIadeN?)WA(K2VGN)umU^32@VYhWq3}|rZNirShdCN z`;dDWSP?{4VHJGg){-IXSwRIhFDjcP~b5;V& zznMgVcoOwREHK0qI7$+hIl=aS4|4@Ymq!!z(KJxBcYxG&eii~U4;XiVSPZAvA_|K!1`AS<_h$^26i>?Wj@ZnHjYb8T;VOu6IByS&v)b3UOh z!pw12xjEZJg|m1_NqgQE{EOI;?TS87rl!dyjQc$m$3(Dp1fD=2p{b}*kH!?QovT~H zva}modj5vfFYIjEY%na)S_pHj#m$kx*4>er<2eT`q3@S8VypFxd7P*?wEe3V44 zMD-PQm&$<)_8TG6b04djN1}B@N8Ko-shkwe=ke-cZxUG}LG&BhxO|SmQ za=R7wM3!x0)QeZg^P&wNQB z_NE4+GNJ5K?M6C!PaaYfzn{#Od2}Kjl}@`xq>-?*>?h9>0_sfDF-K+r(@?`~3Ysq>-1Q*b(*A#EZ&-I?$R}TQPe>Sm{zG-{#(#vCOA1_1F%$|`d5F^r6vW0#=xrq8k8HL> z?68+JzNZ#{{YuM|`W0fLOp_~Chi$U8rS9SlBZJbVyr^KrWk605y3WeOPgkA{kkeyC z2>@fnKtYO7xCChI0Her2O6>pTQiJA9d?{x~M>qI|Wtt)KaLoy*g|T-aHUep4pc26` zuQ=+C-%F#1cdCiC+XYap0wjiAP}nRoPJ62?X&AYDoc{&W&5uhG@7jx)Il6a>VhnG8 zJ9%p0Gk$Ev#0ILX{rG8iL}n8<3#*lghaiEcXFCUKNOuF5t%WBy`^$Ruzks3ssedOV zi(R_d*4?|*(@v*4M&6F?S+kwLiy7*Nb%G*#Tjtruw1Heb^Y+vcerY7BA+ajadc7qy z1lb30IubKRRaM zwIU}$1{?Tt`0C%&xWnW{aoy_^ofZ~bI9>>(sMY-g4?n|R1cLz4f`+;OqM(xh)k{SU zuV>n};=%gK?w_+ZF1Z&{5)>2E7#Ym%Bg)QpJZSE9u`R_{ta)g_&efp!I(4#$l;341 
zak|)xz%VLFLGmEFpcwzJa_|Ugi)ZCq+TNJi=-X|aq4d#hF1>t0RDX29Pc0ExoFG^( z!z)HVNktxd68yOYtjAnooqZt%rphoOtvg!y(G)m$yzebazcY1~7`r6csq&oCs%dPk znk%REGeTvXBsQiuOFjN@aHda(D2U;;p8{Ojh0zBu=w^qG(=b^?${`MtprISXdAs(x zHgqsoJ&(7hJnIndDn;hC1urOzyv5JBHAusf5fA|tgEf~@lO@gR@baQ$WRWRn zcD%=XOBG*eri|%(G=)(&W^m&4evu=|V_+cGSdi!x98=ChE;h{#*GQpDq5bZLQ8Xmxv6uLD#5WleA!oL=4i>cQO|5E%Dj49sbQz@kfn#J(y8UZLEhOQ2-xu zD8RcdRQRoa5j|a2w#i!Ud1qzrK}ev(zSdxeuLt+?3+-q3t9(#QxMe(-Y)#1(tq7vM zho@|_xx3@mFTaAhQZPY;HRJRWh$Rmwj;#>;(>D=%r2V!wNv&!6B|omjS2Gi|faV{~ zowNc222=1=$OOLzHpM}oi)l~Al9GLPa)hZM{#}P{8jlZ>P7^HxU=3eF=dGm_TW*uG z-DGw71&mbAmjghIQL5utzDUQnAE<$2&4O zJN5Kx|C?aYcQ7%>gFl-BOWtban@nOu$9I>Nk&!r%G}t1Ke%IgJ-ZUpC53XTp zJz^TRe*wE@WNN|%ZVcI?A*|(_wLnO1j=l{}kF#UJlIUa@hO1X?`xrR}0sZglsI|{L zQcrm?^cC@D1PN_swy2?Z68oV!n^b2nCCF?rR1@aITZwGfAoYI8SvO^_1qbRej=Ut^ zE)9ECmAk?IkKRxoj=DumwfThu_w%JUa`f~n!?CLw*C`!jlS*&T+sC}`E`~Vvj-&yx zNo>qp?^jVHdO9+@toV7xnV)48Ni8Eytl)OgwPFbG*1wt=n6E(+_g-U*WO#!1;-{bC zr&g_TE2FG#65$5rk@J@ck|%M%&!}@lg69yk;uiGIaWwcJki{4APuIjNqG2q@DZoxd zPalrdh9jn9h|1K8+#4l6)?C`rfI)g)d2*pRcr4*LH*)75(mu2=(!`poP~<9nNMx>E z%ICJ4MMh48o(qEog^OWa+L+-b$`LMQ=dHNnjJY}-OIls6JZ>2Q5Xc99c(=B|KV)!YG2wd#0KNL z3|RaV8yI<;b$<_)cV4$5g_~bc?s!J++wxGmw#!=l*0B=4R1ANMg~)uHsre1vA7mj0 z4RzbQMfwhysad8_NwTMa~jolPgb| z3{*d(PbER7u9BJ!n3GO3c09QJJqt_+S5gs9|~r1^LSB zc7^q6hKG6y6PyB?TxAOBh0k8h@wX{QQvGrZ8c!%Zkj=)#>JOgS-BV$d+!(V=9Sn?X zaw16!!-FC><-;84wpGhgx$jJ#2Wr~w_6+zej9rFM-y0*uHad)h@eL#see0NLEOK?X z*(Uh>G?%Bvj))n#4>wiXJUnMlf?T)efc3+%zHh!XwER2BYAB&*=*_OPvpqhn>)T*6D zhkO)~&0g5o3KTSN@o7e3b?<{q>|B!5xFm7PjkqcIyf&n}ZFtaVZwqAnXg@jmDo$EO z_b=eX!mdnv5)riqT9NmN-pvwe*maeT*r)$_O3n!_qFTnRdrcK(i+o=Bg_er1((3ko z;hkjwAXtWzIHJWbnMf}F&Nu`2BEyY(U?B;sQx#jR)5=Q+?@=t1=;O&Wk4R_?3d?+22J z){>~ z{}oX)K$@HpnYbeU`NcP?yR#YWr)I@xe2BLG&*?|$^<$+0r6~th`g*s>T{ymkMF1fp zz0*vPt#QwG{PESjzm!RF^n*(($a#ai)!OV4S?{dYA$_l-9r^5=_ceE*AVCIIV6AA4 
z^14jKvA}GnZu+~=@BclbM24?zEN<&8MOQyKjGh-p1|McOc(@sN(K=t6x5^6H%j3-&F%8Z!#qXU%w8U)`|u75d8CrO|r73S*OW_xDW(Ig40!=YSheCrO0<)J^OB}j>F1>ekb|NvxRH}CO^E|}5p8~T#YK41 zOwBL}$?Gm2jGkFXhVw^;H?Y{GckWcS({ySj2a)NC>mBEGekXzLoD!NSS%~le7C<=- z88!6O>ertt)>$&x|MR}2lD-b=6@WCe5=BA`<&10Y`*el!f;kIM2_O-DI;h`R7R?(n zJ&6Bq@(4*uG zX_eNkNDo3ZA@hheHF)j~`i(BaXfrx-o8XIUGar{7-qfv^xbs%1ZN;LV6qq`*b^6gS zqOolWM{-znb6R7ncO5!=C=01IA^r1}k-nm(zpZhHX_>irINyFf{=zNyzUboh)|=dI zK?*3r3BqnU%hz#^GsTAH62DrlbNZ2sAv@NF2J?0IFLXTF_=r|?Yk2LY(;>wJmQLqrLPgeJ`eS`Ou{C z7Z}$4;WtAOqP54^gkL|&yNWY^>%x&j_DcV7YTjPFS?Xl;(rm)cs^=bUuYm(Up0{tg zhlIUC8Jjhv)CV#lYa-n_HuXqgk+-TKnig)q6Eiq_DMj|xQqVq1!U;Q@f#BC|i1M5- zYl_z-Z|f%J%$55^*odo_^Ps#X#-1|qiOoE=7U68k>#F|T-$airffEkCiW^L3KN~yv zS~S4sUQU^*mC!L#S%HP{+f8c{@(7K2tX9qEvAXKSi^HR;(3ph8b=k!7NBY)Hf&Edh z)94dk_zgq2F~+>M2CF#t+Hcle@<@Df`VpAFw>F~<)FmR7v32$q9yfJbXONoR9J)fN zCE?u=Uy?IXB-gB5+eyK$b?bM}jq1q>lQ1ETQ46c-MZKI{L;W|K5db0v)&nj6@{%d^ z^{I$(;RjorlBM>6bjszD_`%r65L(7E{L48}pV@WxT4dZK9%O^zxTl zJA6(nKbU)ncg4pVZ)u>+Hk&Gh&XKA)K53uNI!VkJo??14XUk%^05C#FM$x6CtoL!pVr^EMcW}y@`i{ z>u?yA=i`Mz?O@*vk*%#-_;FFYj+`|s8zGB|$b!XvErkZMPq>xE1Fk3DDH8+U>gQzBO;! 
zf?MCHHx+MZK?*Kx3uE%1) zs=^u`A4rDJCzG*(9(SsvBAkOAP;)m}uHj(!o*JVH7;w1>svLeW#LotP#GK3cp6zG- zs)y+j3zKunnj{xouQZ1M;yxKAYaB323Cyby*F!~PWE8r}#YYTL_hP_D>~t#(2!GQQ znWjDj7g#~u2QRw+O(F*hWBeCPjvn@(e)KPdLb>WxXlVFbsz)IRicS-GTE0eE^d{uG{P8AzLAe zBkkjgTNjJQHYbtWJd(gY#x?mU%@8Z{`#_e`yU%q5uHlrd)v7lf1dDtLv`(L4@cKU= zYO!Ok*7&e}wXKKni~ehiG>N?B}w(8!d4k+@7<0a%2w6)w3FI| zY=n$${3FL+QlPzjpK77H^K8VvqMZIR>j$v-0Z~cg?3YH2?3QHN>d{X^-RLVeLkAhx zh8sk)Xo(5@C&?@e*Y#2NhwYi`#bBvfVY%7g$_=N<{eCHBnCg8uZOi+w4T|>c*~MJ{ z0ziTW_QOE{nV!tI0_((4yMk^SZPN=p1>x$0vC+8R`;;L{_WCbhr;i-VbauteF=-;a z+w`YDe?t6Gnswl9S|>)<7QTXP+KnC>F+}MlW<-G&@fVaZQb0=+KfgMA&V3m?r4q13 zq3($#R^nuh{u=h_Wp3H!IA=?Hn=}1r4`zW3Jb<`sTeL1~qF>~}E=xP@DiLy>N3Mb_ z=$CC!erS77&{^N=Br?D3A?X~IVS0^$44FfoqlOq#z&8&^D)*)O=H_<4dj1P648&iL zjegoH!JVh~hMGh~r!aDv1deV6=%dzErt~ zLP6&QO9JJVYH|3$ww^Da~yD zcDC-2H17KHjS6io`V_;-t0EBkQeRlQ)$aN$Voz3N{sRAZ-`HDg}&Tc}vjhcRPek)Hr(m|)OD*IovOfyR$|s%g_Ytj~ zT@fVqW$Q3{G`Ev~^5vf&u7@c%$1mQ8yq&cFRBHN8ueUl_RG&irKEs+|3|m{lA5F(b zC?l0^1XlkSNcCVoiBtHR$bguKCOEV}6oJLL&@S~VfIRvu&*CJzh*a52g(vAo;>CCb z^ydg5 zL3RRT1zXN*c7@zCe8`R7iKh`vUuVS$keZ>9Ow)apxmIsNI)QW!{^xdE? 
zm07hW^CIEzT)ql2&Ld;)Ia$NOvaCS(B0fh)vcRVL_u?X33X;bK?w=AO(_W^{h_p?p z5ZdQhs~a)|Xct8-p@(OVD-E(c>%1k97M~FWM)G}9f&X0h#ajgc3hk*?;9#H+&=}N1 zA%WEZ7*|5-bgF-VQx5Do0`Tt*0=N(Vg{|Eka@5Qx+YvtO~H6zzv7hNvdB3W5+FOj_p7oqH# zGHWM)La)93s6{eJ)Wi-&XWdEfJXy2m&NqutXIj9&-$J2$B_N1fu+& zD1vQi6iaG`l7Olbtn$waC{5k{F2^TA#?Q-g zWyt9Y?flfQ7w%J}(%d2S^@IZC0?ti=^l!#3f-m`s>CHz2hu>1x$|gVA!XV-I_Y=Yd zKi?Cr)|Y(YzvGlJaKs)AQMdWf6MN*%%|pFnmw%97zG@nxob>uSSh5ei#;7640Jz-@ ze*Vtd<0wdB7+qc>!zXWfEO;yB=k?qEN#a+vay(1q`;B}M?s;cWqEcDQw(f&78^!O~ zI5Pwm4TrILvWez+pb8f9|i9ex%R8rx9y8O>^c@O!1a{ z${spxF&THBoR3b_=lHp#>OA_^fwAz6xrM5&+M4qq$p0C9b~ezB4rZe%&GA zan->m*1tY~XyG}a%6*i;Tyuw3ensKWTmMV_Z9YThAfo3x)!i>~hu6D9V+OP2+H)HR z8=u<^2)kRg+F(;v$A@N;bZ3xWsG)R!!GNk3NO>L(S zo5_ejXLi>FRv)j4d)g!H3Kk4unY;0}kt30Gc>!g~%?NOOq?!k0eMD)+-=ajYTtE>2 ze^ExXG!`6}T}+nt!F0qYlFg8zVW=5Q?C=)oUnUR|OqLKP=P8jUG9V0{BDsug0^Sbi z0-8k%hj>DIda{zqkhz6!zhs)&9YLU9p3yFPwaa#!tOstrqan5k?VFc40>(bewpB+|adeJyuGUL=U z>zL2DUL|5#sV0Zm(pSi}{46~9Mdry@+lVXO0bfCu0P^KwkET!lX!*&(sOYI#YhF57 zLXichMk3-MM>S0m?Z|{b;~CesJ=fhbUwyqRVkkp89|5(({B26vmj zb=NXqS9|Gk>H4qyrntG2B=>pkuxH&qJPR}W#Md>PmmYB_()R09F3YZiDLgQncD$}0 zF!)NUl+|IQDa|S2FCcJl;y7ojo{Uyg-ri@B|3F9hVOF)xuVwK-7#W(1V)lBcH|pOY zT5bcj8|$Y$KDDX*v-%5!(Rg72iuvR{U{w=?v>j&P_+aww#m76SM2pJPFyco zJHME}VDs@3Eq?bXb@0RdA+aC(TA1gt1Jf2~lTjRn}Lwa)MHd?lSa8+NzWG~FAnrl$(|FV=wxi=|oZ%ez!W`yxpmRYZw zBC9`%qQHh=6dK}(Xjdk~0zVU^T(34)Kl-Mt$Ch9qzn)kl%`Gf%(*z;WA!NlITIez& zGZHhV)-aSRoFL)hS$6igbx7nT5v!a>(-kzTK=!f$v5y_zu#rBaLa!*Dkm&7X~oc-+1F zgu>TODVa^_V=d(aJ|YP|V{bk2?jQj~p#K)z^owjsMz|bj#DCgmf5P7K^d>E9BdtR$=CL4%L!$aN^*Ts2%lt2_ z0a;B>67-+#9pwAcVWAvNga(Mn=3&JyhQ$li$or^n8<6M+EJrO`*$cl0X-)OF=v}1L z+!f%O;nPk$9={WxBR5)f{k6PGO?5{qm2jc-y~0Zx3ffcSMCIyz9T#|ol|NJa)}x3_3W8fkUceS9~!f}!Ey3yY@ky{vw|%;!GMZ2a5g zyujz7VcnlUKN|?SJ)Zu^+sTz4Lyu9iS9gGbqyhb6&|!AXA0mR)VHAZ(O7o&1d(ox; zvux0+5iVl^6(yw=9JpL}R}4V62kf_x!9|c^Z9FjAD6ofy{2kt3KiJUMMUpuX^tAT* zhosy{t~J9Yr8S&{M!BXAW({LS;95-0TYS7O9}n%sLK(+?(SSCu@M(V%)keAVXen?x 
zvl3~d@=zawX7dr%%E9n=s*ZO36R|;C!K4JKD9Q_A!oeO-bx{~}h*IyD3gm0(ZTUV3{lS*VdI}P?*QJ9;2A9DsetQ3JQ?M$8 zYSPiINH>eu9YqhYey|LxY7MW7Hf|SwCR8h97#-!9u~~bts)nLtr+PL2@VdV;jWMCd z8EmLXj%6EnD9B53hiRvJdUVUwtoXAmjfhuFoN-5evDf6mU(`;!$+B7z5iz%c;X61_ z1s>p|S|=&}(s?Dlte12?3-(lj=)q%HV<3zkJkJMjze$f6pInuRXuqYgXKoj6<%3Dq zJ!2iB7k9c*7MF75;i>Ap>jG^(fFtLLN-o<&(RoBU8`HH`Ofz_<;fkd(5>Db?CH)2T zsa!he3+Zi}dgZPvs7Lv0jF*6Qa5bc>o&L9BlK_G#^vs18Ajcnnh}h>~);OstKmMYs zDlq%8fmOG3qBp76B}qDhjpSx2-9ziQSA#x;brthspW7^)1sh7-ERsgB1m+rfYt|^7 zoeSg9-K>$L9y(E|{|jt`gZ1(AUpjZR{M@QnQ}1%Ue|s!v4fZsIxsJZZ2i#3~J|Hsa z5fJ$Th{xGw##$UjA6@N(dvkf^hpZ$=&Ij)vM4aoL-Q$oa_iFGwrTMcA^Zjl=yy?`` z9QD&)rO!ZGK{4%%aLhtl=TfPoeBpw>%=~YIW*MHoMQdd?#U4r*m0-nQ1Z3ba`-5^% zoT}z&!H+(QdQD&M{@$yXX^mR!!*#fS^FBGgj+1&Fa6YzBf3$|1f=_CNoEn-^AOv<+ ze%@}Xzr$Ex5p&x8?bh%&?&J-G?eSPtu@ep%0h0zXp9X^j2wpY8{ z?{i2?<(nE@`A3apW53EWEqC@@$n$*e{VU8VPY#NbDia;GHic8FjjMxGEZTPY?`I8sVs6RbqRf0-ad=L6G z2y47h0KLZ*cNx z{!vPjieR={p_$9IWBd6gDX+bIaM;yF>gM1yjRYM}2c)nR@O)-}_+gW4DgLS*`y@68 zor!p~c|}R*)@q&T57ls za)$#dB3Dx(h8{Lv_-{rYibVjs4WPh1xcf3uQH|>C%RBO_HE+cntImw-ZOr1JLBrjM z86|^&+0f5hEWsMa+j2Ac1JfxSUvpW+{0k&vcfOzIcZ4@UZgyI}V%7GW&&SdSS+^a< zCylGznn!FUt2=LcG%0fas{kP&4Db4Qs)jheGq>EVFi#mT17s#cc^QDeLN|y-cmhyX=mNT%+QK@pdl{g(~^wq4xYH=;;Hn%*I{uPd!OPl`J{8=m7goLs+IUYMlw z@7q@_n)T&nkGu9|@g|2y`T*~Nvzsf=BUI7J(QOD+42#4>n)Y6n$q;y=#9mC=U*fS zI<=d|W3Kor+i9_soNH!I$296I>K%DO)p8Jw1pr6`oKT{w@1O=x@NGNWD`j?c<<5f9ig+aC=jNm`Fdnz&YQZp`Ho(Vg2E4wYzdj{YvfXWvj^^#KB^RSV=D?C@sH} zsdKWf#+YJ%Wi6TrO@vH%qhJ}$)9>x2Jb$~PH%~bz+gX&HQ zXk#>LJz4%`9gwQItoi(i#8jwy=gzaM^B&a~jTwZl^i{uCaz3dzK)b`0Bw9H4SL|2) zKQe9bZTPU9zUbdk{XW05+jeIZO*w>=0Y|St*QkaRX-Y|@tve*ve6f1{w8^PTQ)rNL zqeo4As-@JsW!_qZYJWAE5Hke% zJiATLd-?U)Y}EXM@yI@PjlNiYeMs-KZzesT;7Xlci9YrxU*e@WRC7C1st9Kn-Xz!> zUudjI`Xl4E`Le}Q`)oq1h3_RR$ zLI_Fash4HQUtaKrjxx_*Zw*U&`@m`R&sCq();o(|*}qO3Q*GYwazSU-1;G+x*shkEurhO1bu z0`d+K^>|&;Ww;ySh$l&CtF^cHQ*S!oxOTaj5eD@8mcyHc_z&cy{4iDqyC8!^QUvV( zOm3hC<55!DNCtSYYnjh6LP1S*l*ho>ZO$~g0dg^vlK=m5M~jF&$%Np4=Ri>+X?7T- 
zo#cIyT;My$M}HW)bS)7k4OI*Z`z|wxD9B?hpD?REe;nIZOt`{ec0gzL-QQ5~sRqW# zV?c;K;#Z2b&GLg=gvf{(S`>_kFL$M zh4$xWR`yWTiYMb?ueyEDxbLvpMpmqi*pw(RZ`t7H1?Gf*YGwxw#GHJ;WSD077ORg<3^1~*cL|C8EnGJ`VYgtS*C)WFtrxM0#Usw(am z<|vL8-APw4eobHVOnx8qE3Xp(f$OSy^rr+SO*{!ri#lhP21fjMeMR>me0Be5=s^w>xj!6BLW~(rs+{CL&u7~ zqAD4h=!Qt@C0;ts!fzP?PFOMTYub@tSG&Hk)kX<4Ru{4FUec<^?({OfnGJ8p^9)v6 zm}>FUk^m41#@JzYilR%IiRId_39_#nufaBs#{@pwwXT;*wva@0W^>;pP&ikpRd8j@ zwO{_Qq}MhgM;E*lOLx+^^TVVcra$Y2=bmpVpl-$aq^40anEXFfsoIO|{bh(pptx3AyhD<3Ei_8p{4Y#1po zlu`Tn_$WC-79;QBo-yE2Ix{_$b;Bu2SL>ZO{WF}fi%Yni-2U6iDk;hAsyb!Avyn6m z;#mF*2M(8|>a|QE1@AV2j z(PSsToxWn8%2`8KQ1ZJ2@M3tJ!PL zWg)-yyCa*EIpk+6gcof+Hxi>0xAi;xyXw$e%!HiAy>Vp^xt^~*TOwGIbJO>tjAB}r zWs~=;7$$qi!JIkcu$Py%3T_P{qjPzu_j>?`BA4Q(H(aDp9OLe+O8bLVf7`y{jo0%E z`E0JAo>f})Ouv@`R8mf!6|dOBxh{$*}q+Ib%*BfX9ua~%E27TuwrSp-;JGHiUZ zoixC**)v8z7OMMjL(XGt^lV72Ve8I8i5^)Le>Kx`HqWhwi9eu>D)f-)T^--vt(I3j zkLW5FOw_V&!hTh)eSerE9GbBbYQVB8d!=gR2rFbU;oZSSyD2n1A8#vcdYbQ*?;>EWSg7d66Gugq-uy#TK10u-d+q)#Q~HiJ*;&yk zq^8(ltXeLCb6*@fK+!TWSV{-@q;kTdCw?O!eb$FI@0AV2sZw6(FPEfd4@=B0W$a3Q zIW-I-nZW7q89rb3dbq$X96$5GGy;EPmSXADow-#yECjh<^9?tGpA;Qa#?ij4?d9L6 z04sV`>8SL}T%1r&WE7JYgZCNtchZ`IJ!59wt7|9r1vVGY)jQ4)YW&RbU)oH=@O=J9 z)PiuL^T<$Eu;c|ZL_l)O@sr`%YCpTvJ@2l(K1%k>7D9$HxsUOHxtD(g1aKO|G|$w5 z*666H1JrAQd-H!y6Pe~OfugKbIxL{@0%V@K%IF)rOTbWqSwwvl(4$1!#y~zW2q>zK zLO{IOJu>t^1{$yilFDROu43qLp(ZK<1QMBDi*px*S$zgSs1{-8*m#cCMYO8ub95kd0|BA^OIt8ep}|G zi9u=OC2L_8JS?U5SHcPhEjr9{bG%n?F)Zb`?-ex*x=I)GWu|Z$yCfhaxqeQ~*>d|~ z;SWO*AHs#sX$2MQ>{}^bOq34>@6zs54uV6s4o=Y+4f+ejvBi4_VKkA<^?ZES^quuJz3)lAklz{3v3ADgr2Rw|e_u*1_|>U>YDj-DSZ| zJ1z;fuV3Dky0^^#%5h25tPR<_5wmm=rh5Gr26-)s z^XLJbBphj1js_ac0Fa+ySN51LqbUo_d6~?OJod}El77uG1xj4h0I8~C1cy*_C+A89 z`uY3Ube52`L0o0#+{x0SdK;QTm?daQsS~zj&FFm$qS~njVHzCMJ)b*T%6_ha#H!!H zUrxz}r%E(iKwj&mR2}gwb%idb#!GYl(V~X#$ z!>((!$aADS4^Rem$}5M|{qRdx5M|hhv5lI5W#6F(Rc{P!a0J~y`*mq9wxOB6$?C-O zYZb6;kS+d;V#eqoSW&}09tUV#H({FfD08_!*WtJ+3no`GM*=*q&V-&_Rhlgr$9aCe z05V8jxId3M8~}VD8eXW%&8_XW?iw{cK5o#*4ZT4@zr84Du+eK9Rq*66d94!XIh%OK 
z7XOX@*I`cva{O>va#Q3%IBwR2Qsol!YM{L7jn!7{TwBZ)xApCZf~gfBeuPbcwYbJ8 zv7JMERIFqpgO&Ot>u(b#&aCoaHh z%K!Z7FAx+o4(Bs+9ssmjjUV{m@M3*8KtXizcAIZt9oh@F!Rp}L&=l889U$&c8 zBj9;)!8gyBg{PE{h!=hx#QOU;H@9YQD13Pt@=A7Ns-;z|Vu`kxDE7?eaGo_IsC-aO z^`X$kZ5D+c662c|*1HS1HwXE*E+=nKx4t^v8D7{Y(bOFGeB&Bu4{o_IwFy8ByvJxPo-t=VO4;qj>oc8^}W<| z-SeJ|?LmT3#)@lG;}2e&mQQrDRex?G<+7P@G^Xt~I17^miFKA!@8p%s;s_cKs^JHj z^8M*XB|XykvC8)kuFWp2L|gOydVFYGzO}`|CX}Y!V9RGRYu22imVDQkRg7Bk+hFTuk&D6O5*&N2 z5>V=%;}#P>Le`$AHof#S(XqhlD*8T)PqqZ4SBZD}di`psSH?V@rpMi;YxFM#k_~H; z%LURhf8zELn<{xKf8-RT!wSG0fJD>5mZ!&$uG-r=J;yWUET?2@Me`xH<%f>9Z)?=w zV(@_&OWyxHq%Gk|tjPQ#D z#jW`D4RC;X@Z*X6_#~>j_F7j3^xTZ5t;5pWYBU@^Tb?dyv}8oywBE%uJ&^FlL1^T+ z_Ptvp+o7lPeuGMD_Nfn%IH4E|wqV*~fXP8Dl`Vir{+(r`honA5b7ZYP?($`2zKZ9p z@7O*uc}mZKQ+lzuUUe|O*-45pJnJctyx4jD72)|bQ~HF_NLT7#fV*FK=4S;`lvAaa zGE^t)=}M^Z9o{EiwMTJM?=DOq*CXrsDl6#<_{vwpS!|Rf8YPXw*yr(G-zTR-?iOIA zH(y+q5l9X(kG_dfHSN_*gE5ctBf|p}=n4WVgU?TyepXcpaRd=dZYyc)F1}aM9Pk*c^05qS2L zpL&5w{NUi6kEL$o;tKg+kb!uV%b@sF} z^_$Qeg7aUX(B}3#Cfm6ODpTK6nMwiIAs^ni3q0zEmoBjIbMgUrAqUU|h@1-NmW2a1)_{mAD}s*t4e)ay$e?p;;RSkn z8^h=%`w#rRYXGXub1NV-gqFnw;^T6&W}2%;9g>pl5Yzw%P>i;vJRZitc`E+l#EnpB zAfIqKlnfcI26LAGPF)Zt-n~SuH1{~jGHof_Cs7Vu)CXj-02fyo8Cs~X51l_T$BWHj z1-0S!v0}6=O5$kbsc?|IB#EOaMnJf`krsq4V`a14Snrfdm58OcaG2U68=7fVG~Fw`6cc>3#!uAt>)Y=08p zw}y@FITdXw_tyRrk5??3pMI3vZ0bGXQ&bRDYsMO>6X}RAdruW@(&61Lp8&ZaGKcMF z&uvYps~em7)?k;eqEzNpG|yViRWyq*!yuhAwM5 z0v9JK&qM|CL7E_&cpm!BJ4|#*t)e#8Q?2srMvepw$4!tuBy+GIr6+FdzsK^FHRP!RtDp>i4{4JcUN9eA)+`a8o{I%@hN?~ZE3OUvM5 zXAIA?yQuSXeZT6aEyStLA```Dq; zx<;3B3h<4$MuEk6y9QA5!7s>QEwZ=k8Tsd+wkEd}BGIo_Ja(<^+AvF zq7E=+GjTq%BzM4dqw|dOPjkferMVCHDZ{odX_Sn^TLzi zqn8Q9;}j4}?*;x{3s@umuQ^9FF$@KHKVLiqdSMrmi_#xc<4LeJEUcL~bNU#3+K7M> zf&cv?sN7TZF<<~uzF@HZujp(BK?yI#W38<9X=)qHUy60=qN^U(n!yV2bd-{mR)K^D z7MwIeX60XG7KmWQ%qLJj3>|@iE52l@5s>oH!**C>P|ZxG=!x#463}9&q&j+TlL^sJ z`;!AXUlYiSxmqf(1YB`Io)BEc_rOGozwK-GnoAjF9P$_ub4#}Oi{ zr=p1peUD;!7FDE`YjBUd51d4zf@Bz-ytM`b6&Hzm2+C;T9Z_&{3$qU5;Pz|pnunS< 
zzwp7sOT}Z>)pMKu%YCMU6bA$p%YCaf0(2M*`^Xuws^<7+cO0CD5LsvZ*iyI*>Zh0G zIA#Wv2p^E(qYAVX${E!Z$-m23JIgF$hDvpFXj?=G(UCbKYKt*aD&Y?hNg&Q@md(!RLOPss$z#MnyYXJerWjfy6fzkPE*bCz z{NH1?7M&rQh-F};;MVr-D2Ox_Rt52xn&Bcm9-|_d0VT33mOEAQv&Wk`Ak4|ocsZab zpG=9cBj(0P4eNJE?-2H^s8`<8<^q$-IZ_a#ZgvPTph$0C%K$!^`FtXtv)rtfInOme zc|O^MtHDh$KXN1k@_5Vm|6PFR|8jBwsuuy62hjon6dLP7^#YPPLWpn<`^qc z(U@r7OcQp%2P@#NA*4pgq$>VRdkb3YmN$&Cy4j}5o5|+LsA4uxtou8@pWMhk4;o3o zieT1E-4HBBjMytQ{CO^BbQ!KsLBgS7PNG8fh zip~K@C2GESkP6SBC!3RGm7{&ZvWJZqFUoZCA>)O<%S0e`nb#iaIju|qp{;%-D#U6> zsmzr~4%5YKs>FLjoD9ZvlN%;(q=QW%a>8HJ$NWF&84ThX9T%SzjM)b9<+%D_yB3l} z2=0*Vp6aNJe3W$aM;8M zYZ|sAOgBf0B470WVtz+@@epUJw{S&8GmD==U-4u3?teeWTow}|FX%Q8dkdGPyJujO zb91;c%Ec9Jkdw+xJK}mOp30aF@AC`d^-!s~!UvA}~W0~izpiO8^K@FPYPfMPa%ux?@41=@B&G>g*+ z*yz>Jh1)W4e%LWz+=m9;yE$H1iUjb(p%huwbfhUOz>9&f4aVLwf09z1YlafZw##J% z#JC!+8H6(Rma$+V{c+~7P&462IW*4QfO2Y~{w=HErR1?rP5FyuKhkza9h4s%FPWL5 zDxw3?^YJZGcg+b6?7{=G=oai#u{<9GI669uWhw8Pe_!pdckO5?JRQp@mLClGM45O* z8|YJ!{$;*HbXgK=Qc07>tm}Z#(k^W)+&z^!{HTZ^t>bv@kKaQ)v7J@)L ugp}PIyMNUb>^4>WcgwZ~nVDdYNrI9cazx>0@C--et_+0n3zOl0zy2R;vbui& literal 0 HcmV?d00001 -- 2.45.2 From 6e17988ac7bce78c7bf111650a159c678573efeb Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 2 Feb 2016 16:54:31 +0900 Subject: [PATCH 17/53] Comment & old TODO cleanup --- src/machi_admin_util.erl | 1 - src/machi_flu1_append_server.erl | 9 --------- 2 files changed, 10 deletions(-) diff --git a/src/machi_admin_util.erl b/src/machi_admin_util.erl index 35eeb7b..41a4b5f 100644 --- a/src/machi_admin_util.erl +++ b/src/machi_admin_util.erl @@ -91,7 +91,6 @@ verify_file_checksums_local2(Sock1, EpochID, Path0) -> verify_file_checksums_remote2(Sock1, EpochID, File) -> NSInfo = undefined, - io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), ReadChunk = fun(File_name, Offset, Size) -> ?FLU_C:read_chunk(Sock1, NSInfo, EpochID, File_name, Offset, Size, undefined) diff --git a/src/machi_flu1_append_server.erl 
b/src/machi_flu1_append_server.erl index 9a41776..a484410 100644 --- a/src/machi_flu1_append_server.erl +++ b/src/machi_flu1_append_server.erl @@ -94,15 +94,6 @@ handle_call({seq_append, _From2, _NSInfo, _EpochID, _Prefix, _Chunk, _TCSum, _Op handle_call({seq_append, _From2, NSInfo, EpochID, Prefix, Chunk, TCSum, Opts}, From, #state{flu_name=FluName, epoch_id=OldEpochId}=S) -> - %% io:format(user, " - %% HANDLE_CALL append_chunk - %% NSInfo=~p - %% epoch_id=~p - %% prefix=~p - %% chunk=~p - %% tcsum=~p - %% opts=~p\n", - %% [NSInfo, EpochID, Prefix, Chunk, TCSum, Opts]), %% Old is the one from our state, plain old 'EpochID' comes %% from the client. _ = case OldEpochId of -- 2.45.2 From fbb0203f67b9fd3847605af1623e2f78481d177e Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Mon, 8 Feb 2016 22:04:09 +0900 Subject: [PATCH 18/53] WIP: most eunit tests fixed, chain repair intermittently broken --- src/machi_chain_repair.erl | 20 +++- src/machi_cr_client.erl | 143 +++++++++++++------------- src/machi_dt.erl | 15 +-- src/machi_flu1_client.erl | 36 +++---- src/machi_flu1_net_server.erl | 5 +- src/machi_pb_high_client.erl | 8 +- src/machi_pb_translate.erl | 8 +- src/machi_proxy_flu1_client.erl | 14 +-- src/machi_util.erl | 2 + test/machi_ap_repair_eqc.erl | 2 +- test/machi_cr_client_test.erl | 8 +- test/machi_file_proxy_eqc.erl | 8 +- test/machi_flu1_test.erl | 6 +- test/machi_proxy_flu1_client_test.erl | 34 +++--- 14 files changed, 165 insertions(+), 144 deletions(-) diff --git a/src/machi_chain_repair.erl b/src/machi_chain_repair.erl index cb34da1..146fe65 100644 --- a/src/machi_chain_repair.erl +++ b/src/machi_chain_repair.erl @@ -105,6 +105,8 @@ repair(ap_mode=ConsistencyMode, Src, Repairing, UPI, MembersDict, ETS, Opts) -> RepairMode = proplists:get_value(repair_mode, Opts, repair), Verb = proplists:get_value(verbose, Opts, false), RepairId = proplists:get_value(repair_id, Opts, id1), +erlang:display(wtf), + %% io:format(user, "TODO: ~p\n", [{error, {What, 
Why, Stack}}]), Res = try _ = [begin {ok, Proxy} = machi_proxy_flu1_client:start_link(P), @@ -127,6 +129,7 @@ repair(ap_mode=ConsistencyMode, Src, Repairing, UPI, MembersDict, ETS, Opts) -> {ok, EpochID} = machi_proxy_flu1_client:get_epoch_id( SrcProxy, ?SHORT_TIMEOUT), %% ?VERB("Make repair directives: "), +erlang:display(yo1), Ds = [{File, make_repair_directives( ConsistencyMode, RepairMode, File, Size, EpochID, @@ -146,16 +149,21 @@ repair(ap_mode=ConsistencyMode, Src, Repairing, UPI, MembersDict, ETS, Opts) -> end || FLU <- OurFLUs], %% ?VERB("Execute repair directives: "), +erlang:display(yo1), ok = execute_repair_directives(ConsistencyMode, Ds, Src, EpochID, Verb, OurFLUs, ProxiesDict, ETS), +erlang:display(yo2), %% ?VERB(" done\n"), lager:info("Repair ~w repair directives finished\n", [RepairId]), ok catch What:Why -> +io:format(user, "yo3 ~p ~p\n", [What,Why]), Stack = erlang:get_stacktrace(), +io:format(user, "yo3 ~p\n", [Stack]), {error, {What, Why, Stack}} after +erlang:display(yo4), [(catch machi_proxy_flu1_client:quit(Pid)) || Pid <- orddict:to_list(get(proxies_dict))] end, @@ -236,7 +244,7 @@ make_repair_directives(ConsistencyMode, RepairMode, File, Size, _EpochID, make_repair_directives2(C2, ConsistencyMode, RepairMode, File, Verb, Src, FLUs, ProxiesDict, ETS) -> - ?VERB("."), + ?VERB(".1"), make_repair_directives3(C2, ConsistencyMode, RepairMode, File, Verb, Src, FLUs, ProxiesDict, ETS, []). 
@@ -327,17 +335,17 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) -> F = fun({copy, {Offset, Size, TaggedCSum, MySrc}, MyDsts}, Acc2) -> SrcP = orddict:fetch(MySrc, ProxiesDict), case ets:lookup_element(ETS, in_chunks, 2) rem 100 of - 0 -> ?VERB(".", []); + 0 -> ?VERB(".2", []); _ -> ok end, _T1 = os:timestamp(), %% TODO: support case multiple written or trimmed chunks returned NSInfo = undefined, - io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), - {ok, {[{_, Offset, Chunk, _}], _}} = + {ok, {[{_, Offset, Chunk, _ReadCSum}|OtherChunks], []=_TrimmedList}} = machi_proxy_flu1_client:read_chunk( SrcP, NSInfo, EpochID, File, Offset, Size, undefined, ?SHORT_TIMEOUT), + [] = OtherChunks, _T2 = os:timestamp(), <<_Tag:1/binary, CSum/binary>> = TaggedCSum, case machi_util:checksum_chunk(Chunk) of @@ -346,7 +354,7 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) -> DstP = orddict:fetch(DstFLU, ProxiesDict), _T3 = os:timestamp(), ok = machi_proxy_flu1_client:write_chunk( - DstP, NSInfo, EpochID, File, Offset, Chunk, + DstP, NSInfo, EpochID, File, Offset, Chunk, TaggedCSum, ?SHORT_TIMEOUT), _T4 = os:timestamp() end || DstFLU <- MyDsts], @@ -371,7 +379,9 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) -> Acc2 end end, +erlang:display({yo,?LINE}), ok = lists:foldl(F, ok, Cmds), +erlang:display({yo,?LINE}), %% Copy this file's stats to the total counts. 
_ = [ets:update_counter(ETS, T_K, ets:lookup_element(ETS, L_K, 2)) || {L_K, T_K} <- EtsKeys], diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index e77f9e2..cc4a508 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -63,7 +63,7 @@ %% File API append_chunk/5, append_chunk/6, append_chunk/7, - write_chunk/5, write_chunk/6, + write_chunk/6, write_chunk/7, read_chunk/6, read_chunk/7, trim_chunk/5, trim_chunk/6, checksum_list/2, checksum_list/3, @@ -129,14 +129,14 @@ append_chunk(PidSpec, NSInfo, Prefix, Chunk, CSum, #append_opts{}=Opts, Timeout0 %% allocated/sequenced by an earlier append_chunk() call) to %% `File' at `Offset'. -write_chunk(PidSpec, NSInfo, File, Offset, Chunk) -> - write_chunk(PidSpec, NSInfo, File, Offset, Chunk, ?DEFAULT_TIMEOUT). +write_chunk(PidSpec, NSInfo, File, Offset, Chunk, CSum) -> + write_chunk(PidSpec, NSInfo, File, Offset, Chunk, CSum, ?DEFAULT_TIMEOUT). %% @doc Read a chunk of data of size `Size' from `File' at `Offset'. -write_chunk(PidSpec, NSInfo, File, Offset, Chunk, Timeout0) -> +write_chunk(PidSpec, NSInfo, File, Offset, Chunk, CSum, Timeout0) -> {TO, Timeout} = timeout(Timeout0), - gen_server:call(PidSpec, {req, {write_chunk, NSInfo, File, Offset, Chunk, TO}}, + gen_server:call(PidSpec, {req, {write_chunk, NSInfo, File, Offset, Chunk, CSum, TO}}, Timeout). %% @doc Read a chunk of data of size `Size' from `File' at `Offset'. 
@@ -229,8 +229,8 @@ handle_call2({append_chunk, NSInfo, Prefix, Chunk, CSum, Opts, TO}, _From, S) -> do_append_head(NSInfo, Prefix, Chunk, CSum, Opts, 0, os:timestamp(), TO, S); -handle_call2({write_chunk, NSInfo, File, Offset, Chunk, TO}, _From, S) -> - do_write_head(NSInfo, File, Offset, Chunk, 0, os:timestamp(), TO, S); +handle_call2({write_chunk, NSInfo, File, Offset, Chunk, CSum, TO}, _From, S) -> + do_write_head(NSInfo, File, Offset, Chunk, CSum, 0, os:timestamp(), TO, S); handle_call2({read_chunk, NSInfo, File, Offset, Size, Opts, TO}, _From, S) -> do_read_chunk(NSInfo, File, Offset, Size, Opts, 0, os:timestamp(), TO, S); handle_call2({trim_chunk, NSInfo, File, Offset, Size, TO}, _From, S) -> @@ -246,7 +246,6 @@ do_append_head(NSInfo, Prefix, Chunk, CSum, Opts, Depth + 1, STime, TO, S); do_append_head(NSInfo, Prefix, Chunk, CSum, Opts, Depth, STime, TO, #state{proj=P}=S) -> - %% io:format(user, "head sleep1,", []), sleep_a_while(Depth), DiffMs = timer:now_diff(os:timestamp(), STime) div 1000, if DiffMs > TO -> @@ -300,9 +299,9 @@ do_append_head3(NSInfo, Prefix, case ?FLU_PC:append_chunk(Proxy, NSInfo, EpochID, Prefix, Chunk, CSum, Opts, ?TIMEOUT) of {ok, {Offset, _Size, File}=_X} -> - do_append_midtail(RestFLUs, NSInfo, Prefix, + do_wr_app_midtail(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, CSum, Opts, - [HeadFLU], 0, STime, TO, S); + [HeadFLU], 0, STime, TO, append, S); {error, bad_checksum}=BadCS -> {reply, BadCS, S}; {error, Retry} @@ -323,17 +322,16 @@ do_append_head3(NSInfo, Prefix, Prefix,iolist_size(Chunk)}) end. 
-do_append_midtail(RestFLUs, NSInfo, Prefix,
+do_wr_app_midtail(RestFLUs, NSInfo, Prefix,
                   File, Offset, Chunk, CSum, Opts,
-                  Ws, Depth, STime, TO, S)
+                  Ws, Depth, STime, TO, MyOp, S)
     when RestFLUs == [] orelse Depth == 0 ->
-    do_append_midtail2(RestFLUs, NSInfo, Prefix,
+    do_wr_app_midtail2(RestFLUs, NSInfo, Prefix,
                        File, Offset, Chunk, CSum, Opts,
-                       Ws, Depth + 1, STime, TO, S);
-do_append_midtail(_RestFLUs, NSInfo, Prefix, File,
+                       Ws, Depth + 1, STime, TO, MyOp, S);
+do_wr_app_midtail(_RestFLUs, NSInfo, Prefix, File,
                   Offset, Chunk, CSum, Opts,
-                  Ws, Depth, STime, TO, #state{proj=P}=S) ->
-    %% io:format(user, "midtail sleep2,", []),
+                  Ws, Depth, STime, TO, MyOp, #state{proj=P}=S) ->
     sleep_a_while(Depth),
     DiffMs = timer:now_diff(os:timestamp(), STime) div 1000,
     if DiffMs > TO ->
@@ -347,60 +345,66 @@ do_append_midtail(_RestFLUs, NSInfo, Prefix, File,
             RestFLUs2 = mutation_flus(P2),
             case RestFLUs2 -- Ws of
                 RestFLUs2 ->
-                    %% None of the writes that we have done so far
-                    %% are to FLUs that are in the RestFLUs2 list.
-                    %% We are pessimistic here and assume that
-                    %% those FLUs are permanently dead.  Start
-                    %% over with a new sequencer assignment, at
-                    %% the 2nd have of the impl (we have already
-                    %% slept & refreshed the projection).
-
                     if Prefix == undefined -> % atom! not binary()!!
                             {error, partition};
-                       true ->
+                       MyOp == append ->
+                            %% None of the writes that we have done so
+                            %% far are to FLUs that are in the
+                            %% RestFLUs2 list.  We are pessimistic
+                            %% here and assume that those FLUs are
+                            %% permanently dead.  Start over with a
+                            %% new sequencer assignment, at the 2nd
+                            %% half of the impl (we have already slept
+                            %% & refreshed the projection).
do_append_head2(NSInfo, Prefix, Chunk, CSum, Opts, - Depth, STime, TO, S2) + Depth, STime, TO, S2); + MyOp == write -> + do_wr_app_midtail2(RestFLUs2, + NSInfo, + Prefix, File, Offset, + Chunk, CSum, Opts, + Ws, Depth + 1, STime, TO, + MyOp, S2) end; RestFLUs3 -> - do_append_midtail2(RestFLUs3, + do_wr_app_midtail2(RestFLUs3, NSInfo, Prefix, File, Offset, Chunk, CSum, Opts, - Ws, Depth + 1, STime, TO, S2) + Ws, Depth + 1, STime, TO, + MyOp, S2) end end end. -do_append_midtail2([], _NSInfo, +do_wr_app_midtail2([], _NSInfo, _Prefix, File, Offset, Chunk, - _CSum, _Opts, _Ws, _Depth, _STime, _TO, S) -> - %% io:format(user, "ok!\n", []), + _CSum, _Opts, _Ws, _Depth, _STime, _TO, _MyOp, S) -> {reply, {ok, {Offset, chunk_wrapper_size(Chunk), File}}, S}; -do_append_midtail2([FLU|RestFLUs]=FLUs, NSInfo, +do_wr_app_midtail2([FLU|RestFLUs]=FLUs, NSInfo, Prefix, File, Offset, Chunk, - CSum, Opts, Ws, Depth, STime, TO, + CSum, Opts, Ws, Depth, STime, TO, MyOp, #state{epoch_id=EpochID, proxies_dict=PD}=S) -> Proxy = orddict:fetch(FLU, PD), - case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, ?TIMEOUT) of + case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, CSum, ?TIMEOUT) of ok -> - %% io:format(user, "write ~w,", [FLU]), - do_append_midtail2(RestFLUs, NSInfo, Prefix, + do_wr_app_midtail2(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, - CSum, Opts, [FLU|Ws], Depth, STime, TO, S); + CSum, Opts, [FLU|Ws], Depth, STime, TO, MyOp, S); {error, bad_checksum}=BadCS -> %% TODO: alternate strategy? {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_append_midtail(FLUs, NSInfo, Prefix, + do_wr_app_midtail(FLUs, NSInfo, Prefix, File, Offset, Chunk, - CSum, Opts, Ws, Depth, STime, TO, S); + CSum, Opts, Ws, Depth, STime, TO, MyOp, S); {error, written} -> %% We know what the chunk ought to be, so jump to the %% middle of read-repair. 
Resume = {append, Offset, iolist_size(Chunk), File}, - do_repair_chunk(FLUs, Resume, Chunk, [], NSInfo, File, Offset, + do_repair_chunk(FLUs, Resume, Chunk, CSum, [], NSInfo, File, Offset, iolist_size(Chunk), Depth, STime, S); {error, trimmed} = Err -> %% TODO: nothing can be done @@ -426,10 +430,9 @@ witnesses_use_our_epoch([FLU|RestFLUs], false end. -do_write_head(NSInfo, File, Offset, Chunk, 0=Depth, STime, TO, S) -> - do_write_head2(NSInfo, File, Offset, Chunk, Depth + 1, STime, TO, S); -do_write_head(NSInfo, File, Offset, Chunk, Depth, STime, TO, #state{proj=P}=S) -> - %% io:format(user, "head sleep1,", []), +do_write_head(NSInfo, File, Offset, Chunk, CSum, 0=Depth, STime, TO, S) -> + do_write_head2(NSInfo, File, Offset, Chunk, CSum, Depth + 1, STime, TO, S); +do_write_head(NSInfo, File, Offset, Chunk, CSum, Depth, STime, TO, #state{proj=P}=S) -> sleep_a_while(Depth), DiffMs = timer:now_diff(os:timestamp(), STime) div 1000, if DiffMs > TO -> @@ -443,31 +446,32 @@ do_write_head(NSInfo, File, Offset, Chunk, Depth, STime, TO, #state{proj=P}=S) - case S2#state.proj of P2 when P2 == undefined orelse P2#projection_v1.upi == [] -> - do_write_head(NSInfo, File, Offset, Chunk, Depth + 1, + do_write_head(NSInfo, File, Offset, Chunk, CSum, Depth + 1, STime, TO, S2); _ -> - do_write_head2(NSInfo, File, Offset, Chunk, Depth + 1, + do_write_head2(NSInfo, File, Offset, Chunk, CSum, Depth + 1, STime, TO, S2) end end. -do_write_head2(NSInfo, File, Offset, Chunk, Depth, STime, TO, +do_write_head2(NSInfo, File, Offset, Chunk, CSum, Depth, STime, TO, #state{epoch_id=EpochID, proj=P, proxies_dict=PD}=S) -> [HeadFLU|RestFLUs] = mutation_flus(P), Proxy = orddict:fetch(HeadFLU, PD), - case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, ?TIMEOUT) of + case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, CSum, ?TIMEOUT) of ok -> %% From this point onward, we use the same code & logic path as %% append does. 
-Prefix=todo_prefix,CSum=todo_csum,Opts=todo_opts, - do_append_midtail(RestFLUs, NSInfo, Prefix, + Prefix=unused_write_path, + Opts=unused_write_path, + do_wr_app_midtail(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, - CSum, Opts, [HeadFLU], 0, STime, TO, S); + CSum, Opts, [HeadFLU], 0, STime, TO, write, S); {error, bad_checksum}=BadCS -> {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_write_head(NSInfo, File, Offset, Chunk, Depth, STime, TO, S); + do_write_head(NSInfo, File, Offset, Chunk, CSum, Depth, STime, TO, S); {error, written}=Err -> {reply, Err, S}; {error, trimmed}=Err -> @@ -588,7 +592,6 @@ do_trim_midtail(RestFLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth + 1, STime, TO, S); do_trim_midtail(_RestFLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth, STime, TO, #state{proj=P}=S) -> - %% io:format(user, "midtail sleep2,", []), sleep_a_while(Depth), DiffMs = timer:now_diff(os:timestamp(), STime) div 1000, if DiffMs > TO -> @@ -625,7 +628,6 @@ do_trim_midtail(_RestFLUs, Prefix, NSInfo, File, Offset, Size, do_trim_midtail2([], _Prefix, _NSInfo, _File, _Offset, _Size, _Ws, _Depth, _STime, _TO, S) -> - %% io:format(user, "ok!\n", []), {reply, ok, S}; do_trim_midtail2([FLU|RestFLUs]=FLUs, Prefix, NSInfo, File, Offset, Size, Ws, Depth, STime, TO, @@ -633,7 +635,6 @@ do_trim_midtail2([FLU|RestFLUs]=FLUs, Prefix, NSInfo, File, Offset, Size, Proxy = orddict:fetch(FLU, PD), case ?FLU_PC:trim_chunk(Proxy, NSInfo, EpochID, File, Offset, Size, ?TIMEOUT) of ok -> - %% io:format(user, "write ~w,", [FLU]), do_trim_midtail2(RestFLUs, Prefix, NSInfo, File, Offset, Size, [FLU|Ws], Depth, STime, TO, S); {error, trimmed} -> @@ -748,10 +749,11 @@ read_repair2(ap_mode=ConsistencyMode, do_repair_chunks([], _, _, _, _, _, _, _, S, Reply) -> {Reply, S}; -do_repair_chunks([{_, Offset, Chunk, _Csum}|T], +do_repair_chunks([{_, Offset, Chunk, CSum}|T], ToRepair, ReturnMode, [GotItFrom], NSInfo, File, Depth, STime, S, Reply) -> 
+ true = _TODO_fixme = not is_atom(CSum), Size = iolist_size(Chunk), - case do_repair_chunk(ToRepair, ReturnMode, Chunk, [GotItFrom], NSInfo, File, Offset, + case do_repair_chunk(ToRepair, ReturnMode, Chunk, CSum, [GotItFrom], NSInfo, File, Offset, Size, Depth, STime, S) of {ok, Chunk, S1} -> do_repair_chunks(T, ToRepair, ReturnMode, [GotItFrom], NSInfo, File, Depth, STime, S1, Reply); @@ -759,9 +761,8 @@ do_repair_chunks([{_, Offset, Chunk, _Csum}|T], Error end. -do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, +do_repair_chunk(ToRepair, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offset, Size, Depth, STime, #state{proj=P}=S) -> - %% io:format(user, "read_repair3 sleep1,", []), sleep_a_while(Depth), DiffMs = timer:now_diff(os:timestamp(), STime) div 1000, if DiffMs > ?MAX_RUNTIME -> @@ -771,16 +772,16 @@ do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, case S2#state.proj of P2 when P2 == undefined orelse P2#projection_v1.upi == [] -> - do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, + do_repair_chunk(ToRepair, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offset, Size, Depth + 1, STime, S2); P2 -> ToRepair2 = mutation_flus(P2) -- Repaired, - do_repair_chunk2(ToRepair2, ReturnMode, Chunk, Repaired, NSInfo, File, + do_repair_chunk2(ToRepair2, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offset, Size, Depth + 1, STime, S2) end end. -do_repair_chunk2([], ReturnMode, Chunk, _Repaired, _NSInfo, File, Offset, +do_repair_chunk2([], ReturnMode, Chunk, _CSum, _Repaired, _NSInfo, File, Offset, _IgnoreSize, _Depth, _STime, S) -> %% TODO: add stats for # of repairs, length(_Repaired)-1, etc etc? 
case ReturnMode of @@ -789,24 +790,24 @@ do_repair_chunk2([], ReturnMode, Chunk, _Repaired, _NSInfo, File, Offset, {append, Offset, Size, File} -> {ok, {Offset, Size, File}, S} end; -do_repair_chunk2([First|Rest]=ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, Offset, +do_repair_chunk2([First|Rest]=ToRepair, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offset, Size, Depth, STime, #state{epoch_id=EpochID, proxies_dict=PD}=S) -> Proxy = orddict:fetch(First, PD), - case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, ?TIMEOUT) of + case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, CSum, ?TIMEOUT) of ok -> - do_repair_chunk2(Rest, ReturnMode, Chunk, [First|Repaired], NSInfo, File, + do_repair_chunk2(Rest, ReturnMode, Chunk, CSum, [First|Repaired], NSInfo, File, Offset, Size, Depth, STime, S); {error, bad_checksum}=BadCS -> %% TODO: alternate strategy? {BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> - do_repair_chunk(ToRepair, ReturnMode, Chunk, Repaired, NSInfo, File, + do_repair_chunk(ToRepair, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offset, Size, Depth, STime, S); {error, written} -> %% TODO: To be very paranoid, read the chunk here to verify %% that it is exactly our Chunk. 
- do_repair_chunk2(Rest, ReturnMode, Chunk, Repaired, NSInfo, File, + do_repair_chunk2(Rest, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offset, Size, Depth, STime, S); {error, trimmed} = _Error -> %% TODO @@ -926,11 +927,13 @@ update_proj2(Count, #state{bad_proj=BadProj, proxies_dict=ProxiesDict, update_proj2(Count + 1, S); P when P >= BadProj -> #projection_v1{epoch_number=Epoch, epoch_csum=CSum, - members_dict=NewMembersDict} = P, + members_dict=NewMembersDict, dbg2=Dbg2} = P, EpochID = {Epoch, CSum}, ?FLU_PC:stop_proxies(ProxiesDict), NewProxiesDict = ?FLU_PC:start_proxies(NewMembersDict), - S#state{bad_proj=undefined, proj=P, epoch_id=EpochID, + %% Make crash reports shorter by getting rid of 'react' history. + P2 = P#projection_v1{dbg2=lists:keydelete(react, 1, Dbg2)}, + S#state{bad_proj=undefined, proj=P2, epoch_id=EpochID, members_dict=NewMembersDict, proxies_dict=NewProxiesDict}; _P -> sleep_a_while(Count), diff --git a/src/machi_dt.erl b/src/machi_dt.erl index de34b64..6a57e86 100644 --- a/src/machi_dt.erl +++ b/src/machi_dt.erl @@ -24,11 +24,12 @@ -include("machi_projection.hrl"). -type append_opts() :: #append_opts{}. --type chunk() :: chunk_bin() | {chunk_csum(), chunk_bin()}. --type chunk_bin() :: binary() | iolist(). % client can use either --type chunk_csum() :: binary(). % 1 byte tag, N-1 bytes checksum --type chunk_summary() :: {file_offset(), chunk_size(), binary()}. --type chunk_s() :: 'trimmed' | binary(). +-type chunk() :: chunk_bin() | iolist(). % client can choose either rep. +-type chunk_bin() :: binary(). % server returns binary() only. +-type chunk_csum() :: <<>> | chunk_csum_bin() | {csum_tag(), binary()}. +-type chunk_csum_bin() :: binary(). % 1 byte tag, N-1 bytes checksum +-type chunk_cstrm() :: 'trimmed' | chunk_csum(). +-type chunk_summary() :: {file_offset(), chunk_size(), chunk_bin(), chunk_cstrm()}. -type chunk_pos() :: {file_offset(), chunk_size(), file_name_s()}. -type chunk_size() :: non_neg_integer(). 
-type error_general() :: 'bad_arg' | 'wedged' | 'bad_checksum'. @@ -62,9 +63,9 @@ chunk/0, chunk_bin/0, chunk_csum/0, - csum_tag/0, + chunk_csum_bin/0, + chunk_cstrm/0, chunk_summary/0, - chunk_s/0, chunk_pos/0, chunk_size/0, error_general/0, diff --git a/src/machi_flu1_client.erl b/src/machi_flu1_client.erl index bf36fee..37e6d5a 100644 --- a/src/machi_flu1_client.erl +++ b/src/machi_flu1_client.erl @@ -145,7 +145,7 @@ ]). %% For "internal" replication only. -export([ - write_chunk/6, write_chunk/7, + write_chunk/7, write_chunk/8, trim_chunk/6, delete_migration/3, delete_migration/4, trunc_hack/3, trunc_hack/4 @@ -216,7 +216,7 @@ append_chunk(Host, TcpPort, NSInfo0, EpochID, -spec read_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:read_opts_x()) -> - {ok, machi_dt:chunk_s()} | + {ok, {[machi_dt:chunk_summary()], [machi_dt:chunk_pos()]}} | {error, machi_dt:error_general() | 'not_written' | 'partial_read'} | {error, term()}. read_chunk(Sock, NSInfo0, EpochID, File, Offset, Size, Opts0) @@ -230,7 +230,7 @@ read_chunk(Sock, NSInfo0, EpochID, File, Offset, Size, Opts0) -spec read_chunk(machi_dt:inet_host(), machi_dt:inet_port(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:read_opts_x()) -> - {ok, machi_dt:chunk_s()} | + {ok, [machi_dt:chunk_summary()]} | {error, machi_dt:error_general() | 'not_written' | 'partial_read'} | {error, term()}. read_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Size, Opts0) @@ -527,25 +527,25 @@ disconnect(_) -> %% @doc Restricted API: Write a chunk of already-sequenced data to %% `File' at `Offset'. 
--spec write_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) -> +-spec write_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk(), machi_dt:chunk_csum()) -> ok | {error, machi_dt:error_general()} | {error, term()}. -write_chunk(Sock, NSInfo0, EpochID, File, Offset, Chunk) +write_chunk(Sock, NSInfo0, EpochID, File, Offset, Chunk, CSum) when Offset >= ?MINIMUM_OFFSET -> NSInfo = machi_util:ns_info_default(NSInfo0), - write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk). + write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk, CSum). %% @doc Restricted API: Write a chunk of already-sequenced data to %% `File' at `Offset'. -spec write_chunk(machi_dt:inet_host(), machi_dt:inet_port(), - 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) -> + 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk(), machi_dt:chunk_csum()) -> ok | {error, machi_dt:error_general()} | {error, term()}. -write_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Chunk) +write_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Chunk, CSum) when Offset >= ?MINIMUM_OFFSET -> Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), try NSInfo = machi_util:ns_info_default(NSInfo0), - write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk) + write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk, CSum) after disconnect(Sock) end. @@ -641,19 +641,19 @@ append_chunk2(Sock, NSInfo, EpochID, Prefix, Chunk, CSum_tag, CSum, Opts}), do_pb_request_common(Sock, ReqID, Req, true, Timeout). 
-write_chunk2(Sock, NSInfo, EpochID, File0, Offset, Chunk0) -> +write_chunk2(Sock, NSInfo, EpochID, File0, Offset, Chunk, CSum0) -> ReqID = <<"id">>, #ns_info{version=NSVersion, name=NS} = NSInfo, File = machi_util:make_binary(File0), true = (Offset >= ?MINIMUM_OFFSET), - {Chunk, CSum_tag, CSum} = - case Chunk0 of - X when is_binary(X) -> - {Chunk0, ?CSUM_TAG_NONE, <<>>}; - {ChunkCSum, Chk} -> - {Tag, CS} = machi_util:unmake_tagged_csum(ChunkCSum), - {Chk, Tag, CS} - end, + {CSum_tag, CSum} = case CSum0 of + <<>> -> + {?CSUM_TAG_NONE, <<>>}; + {_Tag, _CS} -> + CSum0; + B when is_binary(B) -> + machi_util:unmake_tagged_csum(CSum0) + end, Req = machi_pb_translate:to_pb_request( ReqID, {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}), diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 5bbabba..7a9f549 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -586,12 +586,11 @@ do_pb_hl_request2({high_append_chunk, NS, Prefix, Chunk, TaggedCSum, Opts}, Res = machi_cr_client:append_chunk(Clnt, NSInfo, Prefix, Chunk, TaggedCSum, Opts), {Res, S}; -do_pb_hl_request2({high_write_chunk, File, Offset, ChunkBin, TaggedCSum}, +do_pb_hl_request2({high_write_chunk, File, Offset, Chunk, CSum}, #state{high_clnt=Clnt}=S) -> NSInfo = undefined, io:format(user, "TODO fix broken write_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), - Chunk = {TaggedCSum, ChunkBin}, - Res = machi_cr_client:write_chunk(Clnt, NSInfo, File, Offset, Chunk), + Res = machi_cr_client:write_chunk(Clnt, NSInfo, File, Offset, Chunk, CSum), {Res, S}; do_pb_hl_request2({high_read_chunk, File, Offset, Size, Opts}, #state{high_clnt=Clnt}=S) -> diff --git a/src/machi_pb_high_client.erl b/src/machi_pb_high_client.erl index 85600bd..23f01b8 100644 --- a/src/machi_pb_high_client.erl +++ b/src/machi_pb_high_client.erl @@ -102,7 +102,7 @@ auth(PidSpec, User, Pass, Timeout) -> -spec append_chunk(pid(), NS::machi_dt:namespace(), 
Prefix::machi_dt:file_prefix(), - Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum(), + Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum(), Opts::machi_dt:append_opts()) -> {ok, Filename::string(), Offset::machi_dt:file_offset()} | {error, machi_client_error_reason()}. @@ -111,7 +111,7 @@ append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts) -> -spec append_chunk(pid(), NS::machi_dt:namespace(), Prefix::machi_dt:file_prefix(), - Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum(), + Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum(), Opts::machi_dt:append_opts(), Timeout::non_neg_integer()) -> {ok, Filename::string(), Offset::machi_dt:file_offset()} | @@ -120,13 +120,13 @@ append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts, Timeout) -> send_sync(PidSpec, {append_chunk, NS, Prefix, Chunk, CSum, Opts}, Timeout). -spec write_chunk(pid(), File::string(), machi_dt:file_offset(), - Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum()) -> + Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum()) -> ok | {error, machi_client_error_reason()}. write_chunk(PidSpec, File, Offset, Chunk, CSum) -> write_chunk(PidSpec, File, Offset, Chunk, CSum, ?DEFAULT_TIMEOUT). -spec write_chunk(pid(), File::string(), machi_dt:file_offset(), - Chunk::machi_dt:chunk_bin(), CSum::machi_dt:chunk_csum(), Timeout::non_neg_integer()) -> + Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum(), Timeout::non_neg_integer()) -> ok | {error, machi_client_error_reason()}. write_chunk(PidSpec, File, Offset, Chunk, CSum, Timeout) -> send_sync(PidSpec, {write_chunk, File, Offset, Chunk, CSum}, Timeout). 
diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 20aa897..707a339 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -196,9 +196,9 @@ from_pb_request(#mpb_request{req_id=ReqID, #mpb_writechunkreq{chunk=#mpb_chunk{file_name=File, offset=Offset, chunk=Chunk, - csum=CSum}} = IR, - TaggedCSum = make_tagged_csum(CSum, Chunk), - {ReqID, {high_write_chunk, File, Offset, Chunk, TaggedCSum}}; + csum=CSumRec}} = IR, + CSum = make_tagged_csum(CSumRec, Chunk), + {ReqID, {high_write_chunk, File, Offset, Chunk, CSum}}; from_pb_request(#mpb_request{req_id=ReqID, read_chunk=IR=#mpb_readchunkreq{}}) -> #mpb_readchunkreq{chunk_pos=#mpb_chunkpos{file_name=File, @@ -732,7 +732,7 @@ to_pb_response(ReqID, {high_append_chunk, _NS, _Prefix, _Chunk, _TSum, _O}, Resp _Else -> make_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) end; -to_pb_response(ReqID, {high_write_chunk, _File, _Offset, _Chunk, _TaggedCSum}, Resp) -> +to_pb_response(ReqID, {high_write_chunk, _File, _Offset, _Chunk, _CSum}, Resp) -> case Resp of {ok, {_,_,_}} -> %% machi_cr_client returns ok 2-tuple, convert to simple ok. diff --git a/src/machi_proxy_flu1_client.erl b/src/machi_proxy_flu1_client.erl index 5a85cd3..8f9dcf6 100644 --- a/src/machi_proxy_flu1_client.erl +++ b/src/machi_proxy_flu1_client.erl @@ -81,7 +81,7 @@ quit/1, %% Internal API - write_chunk/6, write_chunk/7, + write_chunk/7, write_chunk/8, trim_chunk/6, trim_chunk/7, %% Helpers @@ -280,14 +280,14 @@ quit(PidSpec) -> %% @doc Write a chunk (binary- or iolist-style) of data to a file %% with `Prefix' at `Offset'. -write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk) -> - write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, infinity). +write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, CSum) -> + write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, CSum, infinity). %% @doc Write a chunk (binary- or iolist-style) of data to a file %% with `Prefix' at `Offset'. 
-write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, Timeout) -> - case gen_server:call(PidSpec, {req, {write_chunk, NSInfo, EpochID, File, Offset, Chunk}}, +write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, CSum, Timeout) -> + case gen_server:call(PidSpec, {req, {write_chunk, NSInfo, EpochID, File, Offset, Chunk, CSum}}, Timeout) of {error, written}=Err -> Size = byte_size(Chunk), @@ -384,9 +384,9 @@ make_req_fun({append_chunk, NSInfo, EpochID, make_req_fun({read_chunk, NSInfo, EpochID, File, Offset, Size, Opts}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:read_chunk(Sock, NSInfo, EpochID, File, Offset, Size, Opts) end; -make_req_fun({write_chunk, NSInfo, EpochID, File, Offset, Chunk}, +make_req_fun({write_chunk, NSInfo, EpochID, File, Offset, Chunk, CSum}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> - fun() -> Mod:write_chunk(Sock, NSInfo, EpochID, File, Offset, Chunk) end; + fun() -> Mod:write_chunk(Sock, NSInfo, EpochID, File, Offset, Chunk, CSum) end; make_req_fun({trim_chunk, NSInfo, EpochID, File, Offset, Size}, #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> fun() -> Mod:trim_chunk(Sock, NSInfo, EpochID, File, Offset, Size) end; diff --git a/src/machi_util.erl b/src/machi_util.erl index 8173898..95a42a5 100644 --- a/src/machi_util.erl +++ b/src/machi_util.erl @@ -448,6 +448,8 @@ int2bool(I) when is_integer(I) -> true. read_opts_default(#read_opts{}=NSInfo) -> NSInfo; +read_opts_default(A) when A == 'undefined'; A == 'noopt'; A == 'none' -> + #read_opts{}; read_opts_default(A) when is_atom(A) -> #read_opts{}. diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl index 14b8005..bbb8717 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -484,7 +484,7 @@ eqc_verbose() -> os:getenv("EQC_VERBOSE") =:= "true". 
eqc_timeout(Default) -> - PropTimeout = case os:getenv("EQC_TIMEOUT") of + PropTimeout = case os:getenv("EQC_TIME") of false -> Default; V -> list_to_integer(V) end, diff --git a/test/machi_cr_client_test.erl b/test/machi_cr_client_test.erl index 885fb35..7e4d31c 100644 --- a/test/machi_cr_client_test.erl +++ b/test/machi_cr_client_test.erl @@ -139,7 +139,7 @@ smoke_test2() -> Host, PortBase+X, NSInfo, EpochID, File1, FooOff1, Size1, undefined) || X <- [0,1,2] ], ok = machi_flu1_client:write_chunk(Host, PortBase+0, NSInfo, EpochID, - File1, FooOff1, Chunk1), + File1, FooOff1, Chunk1, NoCSum), {ok, {[{_, FooOff1, Chunk1, _}], []}} = machi_flu1_client:read_chunk(Host, PortBase+0, NSInfo, EpochID, File1, FooOff1, Size1, undefined), @@ -156,7 +156,7 @@ smoke_test2() -> Chunk2 = <<"Middle repair chunk">>, Size2 = size(Chunk2), ok = machi_flu1_client:write_chunk(Host, PortBase+1, NSInfo, EpochID, - File1, FooOff2, Chunk2), + File1, FooOff2, Chunk2, NoCSum), {ok, {[{_, FooOff2, Chunk2, _}], []}} = machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff2, Size2, undefined), [{X,{ok, {[{_, FooOff2, Chunk2, _}], []}}} = @@ -196,7 +196,7 @@ smoke_test2() -> %% {error,not_written} = machi_cr_client:read_chunk(C1, NSInfo, File10, %% Offx, Size10), {ok, {Offx,Size10,File10}} = - machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10), + machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10, NoCSum), {ok, {[{_, Offx, Chunk10, _}], []}} = machi_cr_client:read_chunk(C1, NSInfo, File10, Offx, Size10, undefined) end || Seq <- lists:seq(1, Extra10)], @@ -286,7 +286,7 @@ witness_smoke_test2() -> File10 = File1, Offx = Off1 + (1 * Size10), {error, partition} = - machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10, 1*1000), + machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10, NoCSum, 1*1000), ok after diff --git a/test/machi_file_proxy_eqc.erl b/test/machi_file_proxy_eqc.erl index dd36787..c7a50e2 100644 --- a/test/machi_file_proxy_eqc.erl +++ 
b/test/machi_file_proxy_eqc.erl @@ -35,10 +35,14 @@ %% EUNIT TEST DEFINITION eqc_test_() -> - {timeout, 60, + PropTimeout = case os:getenv("EQC_TIME") of + false -> 30; + V -> list_to_integer(V) + end, + {timeout, PropTimeout*2 + 30, {spawn, [ - ?_assertEqual(true, eqc:quickcheck(eqc:testing_time(30, ?QC_OUT(prop_ok())))) + ?_assertEqual(true, eqc:quickcheck(eqc:testing_time(PropTimeout, ?QC_OUT(prop_ok())))) ] }}. diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl index fc34a51..74490d2 100644 --- a/test/machi_flu1_test.erl +++ b/test/machi_flu1_test.erl @@ -161,9 +161,9 @@ flu_smoke_test() -> Off2 = ?MINIMUM_OFFSET + 77, File2 = "smoke-whole-file^^0^1^1", ok = ?FLU_C:write_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - File2, Off2, Chunk2), + File2, Off2, Chunk2, NoCSum), {error, bad_arg} = ?FLU_C:write_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, - BadFile, Off2, Chunk2), + BadFile, Off2, Chunk2, NoCSum), {ok, {[{_, Off2, Chunk2, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File2, Off2, Len2, noopt), {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, @@ -262,7 +262,7 @@ bad_checksum_test() -> try Prefix = <<"some prefix">>, Chunk1 = <<"yo yo yo">>, - BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, + BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, ".................")}, {error, bad_checksum} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, Prefix, diff --git a/test/machi_proxy_flu1_client_test.erl b/test/machi_proxy_flu1_client_test.erl index 617afff..7f8dcce 100644 --- a/test/machi_proxy_flu1_client_test.erl +++ b/test/machi_proxy_flu1_client_test.erl @@ -61,24 +61,26 @@ api_smoke_test() -> {ok, {MyOff,MySize,MyFile}} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk, NoCSum), - {ok, {[{_, MyOff, MyChunk, _}], []}} = + {ok, {[{_, MyOff, MyChunk, _MyChunkCSUM}], []}} = ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile, MyOff, MySize, undefined), - MyChunk2 = <<"my chunk data, yeah, 
again">>, + MyChunk2_parts = [<<"my chunk ">>, "data", <<", yeah, again">>], + MyChunk2 = iolist_to_binary(MyChunk2_parts), Opts1 = #append_opts{chunk_extra=4242}, {ok, {MyOff2,MySize2,MyFile2}} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, - MyChunk2, NoCSum, Opts1, infinity), - {ok, {[{_, MyOff2, MyChunk2, _}], []}} = - ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile2, MyOff2, MySize2, undefined), - BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, + MyChunk2_parts, NoCSum, Opts1, infinity), + [{ok, {[{_, MyOff2, MyChunk2, _}], []}} = + ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile2, MyOff2, MySize2, DefaultOptions) || + DefaultOptions <- [undefined, noopt, none, any_atom_at_all] ], + + BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, "...................")}, {error, bad_checksum} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk, BadCSum), - Opts2 = #append_opts{chunk_extra=99832}, -io:format(user, "\nTODO: fix write_chunk() call below @ ~s LINE ~w\n", [?MODULE,?LINE]), - %% {error, bad_checksum} = ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, - %% <<"foo-file^^0^1^1">>, - %% MyChunk, BadCSum, - %% Opts2, infinity), + {error, bad_checksum} = ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, + MyFile2, + MyOff2 + size(MyChunk2), + MyChunk, BadCSum, + infinity), %% Put kick_projection_reaction() in the middle of the test so %% that any problems with its async nature will (hopefully) @@ -283,20 +285,20 @@ flu_restart_test2() -> fun(run) -> ok = ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1, - Data, infinity), + Data, NoCSum, infinity), ok; (line) -> io:format("line ~p, ", [?LINE]); (stop) -> ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1, - Data, infinity) + Data, NoCSum, infinity) end, fun(run) -> {error, written} = ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1, - Dataxx, infinity), + Dataxx, NoCSum, infinity), ok; (line) -> io:format("line ~p, ", [?LINE]); (stop) -> ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, 
Off1, - Dataxx, infinity) + Dataxx, NoCSum, infinity) end ], -- 2.45.2 From a7f42d636e05f5fb82a669049af0a733995972cd Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 9 Feb 2016 01:27:58 +0900 Subject: [PATCH 19/53] WIP: narrowing in on repair problems due to double-write errors --- src/machi_chain_manager1.erl | 15 +++++++++------ src/machi_chain_repair.erl | 14 +++++++++++++- src/machi_cr_client.erl | 4 ++-- src/machi_flu_filename_mgr.erl | 6 +++++- test/machi_ap_repair_eqc.erl | 4 +++- 5 files changed, 32 insertions(+), 11 deletions(-) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index 7f112d0..075d834 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -2967,7 +2967,8 @@ zerf_find_last_annotated(FLU, MajoritySize, S) -> end. perhaps_verbose_c111(P_latest2, S) -> - case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of + case true of + %%TODO put me back: case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of true -> Dbg2X = lists:keydelete(react, 1, P_latest2#projection_v1.dbg2) ++ @@ -2975,16 +2976,18 @@ perhaps_verbose_c111(P_latest2, S) -> P_latest2x = P_latest2#projection_v1{dbg2=Dbg2X}, % limit verbose len. 
Last2 = get(last_verbose), Summ2 = machi_projection:make_summary(P_latest2x), - if P_latest2#projection_v1.upi == [], - (S#ch_mgr.proj)#projection_v1.upi /= [] -> + %% if P_latest2#projection_v1.upi == [], + %% (S#ch_mgr.proj)#projection_v1.upi /= [] -> + if true -> <<CSumRep:4/binary, _/binary>> = P_latest2#projection_v1.epoch_csum, - io:format(user, "\n~s CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [machi_util:pretty_time(), (S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, S#ch_mgr.name]); + io:format(user, "~s CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [machi_util:pretty_time(), (S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, S#ch_mgr.name]); true -> ok end, - case proplists:get_value(private_write_verbose, - S#ch_mgr.opts) of + %% TODO put me back: case proplists:get_value(private_write_verbose, + %% S#ch_mgr.opts) of + case true of true when Summ2 /= Last2 -> put(last_verbose, Summ2), ?V("\n~s ~p uses plain: ~w \n", diff --git a/src/machi_chain_repair.erl b/src/machi_chain_repair.erl index 146fe65..f249268 100644 --- a/src/machi_chain_repair.erl +++ b/src/machi_chain_repair.erl @@ -274,7 +274,19 @@ make_repair_directives3([{Offset, Size, CSum, _FLU}=A|Rest0], %% byte range from all FLUs %% 3b. Log big warning about data loss. %% 4. Log any other checksum discrepancies as they are found.
- exit({todo_repair_sanity_check, ?LINE, File, Offset, As}) + QQ = [begin + Pxy = orddict:fetch(FLU, ProxiesDict), + {ok, EpochID} = machi_proxy_flu1_client:get_epoch_id( + Pxy, ?SHORT_TIMEOUT), + NSInfo = undefined, + XX = machi_proxy_flu1_client:read_chunk( + Pxy, NSInfo, EpochID, File, Offset, Size, undefined, + ?SHORT_TIMEOUT), + {FLU, XX} + end || {__Offset, __Size, __CSum, FLU} <- As], + + exit({todo_repair_sanity_check, ?LINE, File, Offset, {as,As}, {qq,QQ}}) + %% exit({todo_repair_sanity_check, ?LINE, File, Offset, As}) end, %% List construction guarantees us that there's at least one ?MAX_OFFSET %% item remains. Sort order + our "taking" of all exact Offset+Size diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index cc4a508..19a6d1a 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -786,9 +786,9 @@ do_repair_chunk2([], ReturnMode, Chunk, _CSum, _Repaired, _NSInfo, File, Offset, %% TODO: add stats for # of repairs, length(_Repaired)-1, etc etc? case ReturnMode of read -> - {ok, Chunk, S}; + {reply, {ok, {[Chunk], []}}, S}; {append, Offset, Size, File} -> - {ok, {Offset, Size, File}, S} + {reply, {ok, {[{Offset, Size, File}], []}}, S} end; do_repair_chunk2([First|Rest]=ToRepair, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offset, Size, Depth, STime, #state{epoch_id=EpochID, proxies_dict=PD}=S) -> diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl index 7140266..1e504df 100644 --- a/src/machi_flu_filename_mgr.erl +++ b/src/machi_flu_filename_mgr.erl @@ -231,10 +231,14 @@ find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N) -> end. generate_filename(DataDir, NS, NSLocator, Prefix, N) -> +{A,B,C} = erlang:now(), +TODO = lists:flatten(filename:basename(DataDir) ++ "," ++ io_lib:format("~w,~w,~w", [A,B,C])), {F, _} = machi_util:make_data_filename( DataDir, NS, NSLocator, Prefix, - generate_uuid_v4_str(), +TODO, + %% TODO put me back!! + %% generate_uuid_v4_str(), N), binary_to_list(F). 
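The `generate_filename` hunk above swaps `generate_uuid_v4_str()` for an `erlang:now()`-based name while the double-write bug is being chased ("TODO put me back!!"). For orientation, a generic version-4 UUID string generator of the kind being disabled looks roughly like this — a sketch, not necessarily Machi's exact `generate_uuid_v4_str/0`:

```erlang
%% Sketch of a random (version 4) UUID string generator, similar in
%% spirit to the generate_uuid_v4_str() that the hunk above disables.
%% Assumes OTP's crypto application is started.
generate_uuid_v4_str() ->
    <<A:32, B:16, C:16, D:16, E:48>> = crypto:strong_rand_bytes(16),
    C4 = (C band 16#0fff) bor 16#4000,   %% set version nibble to 4
    D4 = (D band 16#3fff) bor 16#8000,   %% set RFC 4122 variant bits
    lists:flatten(io_lib:format(
        "~8.16.0b-~4.16.0b-~4.16.0b-~4.16.0b-~12.16.0b",
        [A, B, C4, D4, E])).
```

The `erlang:now()` stand-in makes the generated names easy to eyeball during debugging, but note that `erlang:now/0` is deprecated in later OTP releases and, unlike a v4 UUID, offers no collision resistance across node restarts.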
diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl index bbb8717..0f5f5a2 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -121,7 +121,9 @@ append(CRIndex, Bin, #state{verbose=V}=S) -> NSInfo = #ns_info{}, NoCSum = <<>>, Opts1 = #append_opts{}, +io:format(user, "append_chunk ~p ~P ->\n", [Prefix, Bin, 6]), Res = (catch machi_cr_client:append_chunk(C, NSInfo, Prefix, Bin, NoCSum, Opts1, sec(1))), +io:format(user, "append_chunk ~p ~P ->\n ~p\n", [Prefix, Bin, 6, Res]), case Res of {ok, {_Off, Len, _FileName}=Key} -> case ets:insert_new(?WRITTEN_TAB, {Key, Bin}) of @@ -188,6 +190,7 @@ change_partition(Partition, [] -> ?V("## Turn OFF partition: ~w~n", [Partition]); _ -> ?V("## Turn ON partition: ~w~n", [Partition]) end || Verbose], + io:format(user, "partition ~p\n", [Partition]), machi_partition_simulator:always_these_partitions(Partition), _ = machi_partition_simulator:get(FLUNames), %% Don't wait for stable chain, tick will be executed on demand @@ -456,7 +459,6 @@ assert_chunk(C, {Off, Len, FileName}=Key, Bin) -> FileNameStr = binary_to_list(FileName), %% TODO : Use CSum instead of binary (after discussion about CSum has calmed down?) NSInfo = undefined, - io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), case (catch machi_cr_client:read_chunk(C, NSInfo, FileName, Off, Len, undefined, sec(3))) of {ok, {[{FileNameStr, Off, Bin, _}], []}} -> ok; -- 2.45.2 From e882f774ef5be1981b2474a3fe2153696df90a1c Mon Sep 17 00:00:00 2001 From: UENISHI Kota Date: Mon, 14 Dec 2015 16:34:57 +0900 Subject: [PATCH 20/53] Unify LevelDB usage to single instance * Per-file LevelDB instance usage is changed to a single instance per FLU server. * The machi_csum_file reference is managed by machi_flu_filename_mgr, with the aim of managing filenames in LevelDB. * Not only chunk checksums, but also the list of trimmed files are stored in LevelDB.
* Remove the 1024-byte file header; instead, put any metadata into LevelDB if needed. * The LevelDB `db_ref()` lifecycle is the same as that of `machi_metadata_mgr` * `machi_file_proxy` just uses the reference passed to it at process startup * There is still some room for optimization, as this is WIP --- include/machi.hrl | 2 +- src/machi_csum_table.erl | 297 +++++++++++++++++++---------- src/machi_file_proxy.erl | 164 ++++++++-------- src/machi_file_proxy_sup.erl | 3 +- src/machi_flu1_net_server.erl | 19 +- src/machi_flu_filename_mgr.erl | 36 +++- src/machi_flu_metadata_mgr.erl | 63 ++---- src/machi_plist.erl | 69 ------- test/machi_csum_table_test.erl | 106 +++++----- test/machi_file_proxy_eqc.erl | 43 ++++- test/machi_file_proxy_test.erl | 19 +- test/machi_flu1_test.erl | 4 +- test/machi_pb_high_client_test.erl | 16 +- test/machi_plist_test.erl | 17 -- 14 files changed, 447 insertions(+), 411 deletions(-) delete mode 100644 src/machi_plist.erl delete mode 100644 test/machi_plist_test.erl diff --git a/include/machi.hrl b/include/machi.hrl index f825556..c15133a 100644 --- a/include/machi.hrl +++ b/include/machi.hrl @@ -21,7 +21,7 @@ %% @doc Now 4GiBytes, could be up to 64bit due to PB message limit of %% chunk size -define(DEFAULT_MAX_FILE_SIZE, ((1 bsl 32) - 1)). --define(MINIMUM_OFFSET, 1024). +-define(MINIMUM_OFFSET, 0). %% 0th draft of checksum typing with 1st byte. -define(CSUM_TAG_NONE, 0). % No csum provided by client diff --git a/src/machi_csum_table.erl b/src/machi_csum_table.erl index cc4dd08..7ac79a2 100644 --- a/src/machi_csum_table.erl +++ b/src/machi_csum_table.erl @@ -1,20 +1,26 @@ -module(machi_csum_table). +%% @doc Object Database mapper that translates +%% (file, checksum, offset, size)|(trimmed-file) <-> LevelDB key and value +%% Keys and values are both encoded with sext.
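A side note on the new `@doc` above: the property of `sext` this patch relies on is that the byte-wise (lexicographic) order of encoded binaries matches the Erlang term order of the original terms, so `{Filename, Offset, Size}` keys can be range-scanned per file with a LevelDB iterator. A minimal illustration, assuming the `sext` library is on the code path:

```erlang
%% sext-encoded keys sort the same way as the terms they encode, so an
%% eleveldb iterator visits one file's chunk entries in offset order,
%% and different files occupy disjoint key ranges.
K1 = sext:encode({<<"f1">>, 0, 10}),
K2 = sext:encode({<<"f1">>, 10, 5}),
K3 = sext:encode({<<"f2">>, 0, 10}),
true = (K1 < K2) andalso (K2 < K3).
```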
+ -export([open/2, - find/3, - write/6, write/4, trim/5, - find_leftneighbor/2, find_rightneighbor/2, - all_trimmed/3, any_trimmed/3, - all_trimmed/2, - calc_unwritten_bytes/1, + find/4, + write/7, write/5, trim/6, + find_leftneighbor/3, find_rightneighbor/3, + is_file_trimmed/2, + all_trimmed/4, any_trimmed/4, + calc_unwritten_bytes/2, split_checksum_list_blob_decode/1, - all/1, - close/1, delete/1, - foldl_chunks/3]). + all/2, all_files/1, + close/1, maybe_trim_file/3, + foldl_file_chunks/4, foldl_chunks/3]). -include("machi.hrl"). -ifdef(TEST). +-export([all/2]). + -include_lib("eunit/include/eunit.hrl"). -endif. @@ -40,12 +46,9 @@ open(CSumFilename, _Opts) -> %% operating system's file cache, which is for %% Machi's main read efficiency {total_leveldb_mem_percent, 10}], + ok = filelib:ensure_dir(CSumFilename), + {ok, T} = eleveldb:open(CSumFilename, LevelDBOptions), - %% Dummy entry for reserved headers - ok = eleveldb:put(T, - sext:encode({0, ?MINIMUM_OFFSET}), - sext:encode(?CSUM_TAG_NONE_ATOM), - [{sync, true}]), C0 = #machi_csum_table{ file=CSumFilename, table=T}, @@ -55,61 +58,53 @@ open(CSumFilename, _Opts) -> split_checksum_list_blob_decode(Bin) -> erlang:binary_to_term(Bin). - -define(has_overlap(LeftOffset, LeftSize, RightOffset, RightSize), ((LeftOffset - (RightOffset+RightSize)) * (LeftOffset+LeftSize - RightOffset) < 0)). --spec find(table(), machi_dt:file_offset(), machi_dt:chunk_size()) +-spec find(table(), binary(), machi_dt:file_offset(), machi_dt:chunk_size()) -> [chunk()]. 
-find(#machi_csum_table{table=T}, Offset, Size) -> - {ok, I} = eleveldb:iterator(T, [], keys_only), - EndKey = sext:encode({Offset+Size, 0}), - StartKey = sext:encode({Offset, Size}), +find(#machi_csum_table{table=T}, Filename, Offset, Size) when is_binary(Filename) -> + EndKey = sext:encode({Filename, Offset+Size, 0}), - {ok, FirstKey} = case eleveldb:iterator_move(I, StartKey) of - {error, invalid_iterator} -> - eleveldb:iterator_move(I, first); - {ok, _} = R0 -> - case eleveldb:iterator_move(I, prev) of - {error, invalid_iterator} -> - R0; - {ok, _} = R1 -> - R1 - end - end, - _ = eleveldb:iterator_close(I), - FoldFun = fun({K, V}, Acc) -> - {TargetOffset, TargetSize} = sext:decode(K), - case ?has_overlap(TargetOffset, TargetSize, Offset, Size) of - true -> - [{TargetOffset, TargetSize, sext:decode(V)}|Acc]; - false -> + case search_for_start_key(T, Filename, Offset, Size) of + undefined -> []; + FirstKey -> + + FoldFun = fun({K, V}, Acc) -> + {Filename, TargetOffset, TargetSize} = sext:decode(K), + case ?has_overlap(TargetOffset, TargetSize, Offset, Size) of + true -> + [{TargetOffset, TargetSize, sext:decode(V)}|Acc]; + false -> + Acc + end; + (_K, Acc) -> + lager:error("~p wrong option", [_K]), Acc - end; - (_K, Acc) -> - lager:error("~p wrong option", [_K]), - Acc - end, - lists:reverse(eleveldb_fold(T, FirstKey, EndKey, FoldFun, [])). + end, + lists:reverse(eleveldb_fold(T, FirstKey, EndKey, FoldFun, [])) + end. %% @doc Updates all chunk info, by deleting existing entries if exists %% and putting new chunk info --spec write(table(), +-spec write(table(), binary(), machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:chunk_csum()|'none'|'trimmed', undefined|chunk(), undefined|chunk()) -> ok | {error, term()}. 
-write(#machi_csum_table{table=T} = CsumT, Offset, Size, CSum, - LeftUpdate, RightUpdate) -> +write(#machi_csum_table{table=T} = CsumT, Filename, + Offset, Size, CSum, LeftUpdate, RightUpdate) when is_binary(Filename) -> + FileEntry = {put, sext:encode({file, Filename}), sext:encode(existent)}, PutOps = [{put, - sext:encode({Offset, Size}), - sext:encode(CSum)}] + sext:encode({Filename, Offset, Size}), + sext:encode(CSum)}, + FileEntry] ++ case LeftUpdate of {LO, LS, LCsum} when LO + LS =:= Offset -> [{put, - sext:encode({LO, LS}), + sext:encode({Filename, LO, LS}), sext:encode(LCsum)}]; undefined -> [] @@ -117,58 +112,68 @@ write(#machi_csum_table{table=T} = CsumT, Offset, Size, CSum, ++ case RightUpdate of {RO, RS, RCsum} when RO =:= Offset + Size -> [{put, - sext:encode({RO, RS}), + sext:encode({Filename, RO, RS}), sext:encode(RCsum)}]; undefined -> [] end, - Chunks = find(CsumT, Offset, Size), + Chunks = find(CsumT, Filename, Offset, Size), DeleteOps = lists:map(fun({O, L, _}) -> - {delete, sext:encode({O, L})} + {delete, sext:encode({Filename, O, L})} end, Chunks), eleveldb:write(T, DeleteOps ++ PutOps, [{sync, true}]). --spec find_leftneighbor(table(), non_neg_integer()) -> +-spec find_leftneighbor(table(), binary(), non_neg_integer()) -> undefined | chunk(). -find_leftneighbor(CsumT, Offset) -> - case find(CsumT, Offset, 1) of +find_leftneighbor(CsumT, Filename, Offset) when is_binary(Filename) -> + case find(CsumT, Filename, Offset, 1) of [] -> undefined; [{Offset, _, _}] -> undefined; [{LOffset, _, CsumOrTrimmed}] -> {LOffset, Offset - LOffset, CsumOrTrimmed} end. --spec find_rightneighbor(table(), non_neg_integer()) -> +-spec find_rightneighbor(table(), binary(), non_neg_integer()) -> undefined | chunk(). 
-find_rightneighbor(CsumT, Offset) -> - case find(CsumT, Offset, 1) of +find_rightneighbor(CsumT, Filename, Offset) when is_binary(Filename) -> + case find(CsumT, Filename, Offset, 1) of [] -> undefined; [{Offset, _, _}] -> undefined; [{ROffset, RSize, CsumOrTrimmed}] -> {Offset, ROffset + RSize - Offset, CsumOrTrimmed} end. --spec write(table(), machi_dt:file_offset(), machi_dt:file_size(), +-spec write(table(), binary(), machi_dt:file_offset(), machi_dt:file_size(), machi_dt:chunk_csum()|none|trimmed) -> ok | {error, trimmed|file:posix()}. -write(CsumT, Offset, Size, CSum) -> - write(CsumT, Offset, Size, CSum, undefined, undefined). +write(CsumT, Filename, Offset, Size, CSum) -> + write(CsumT, Filename, Offset, Size, CSum, undefined, undefined). -trim(CsumT, Offset, Size, LeftUpdate, RightUpdate) -> - write(CsumT, Offset, Size, +trim(CsumT, Filename, Offset, Size, LeftUpdate, RightUpdate) -> + write(CsumT, Filename, Offset, Size, trimmed, %% Should this be much smaller like $t or just 't' LeftUpdate, RightUpdate). +-spec is_file_trimmed(table(), binary()) -> boolean(). +is_file_trimmed(#machi_csum_table{table=T}, Filename) when is_binary(Filename) -> + case eleveldb:get(T, sext:encode({file, Filename}), []) of + {ok, V} -> + (sext:decode(V) =:= ts); + _E -> + false + end. + %% @doc returns whether all bytes in a specific window are continuously %% trimmed or not --spec all_trimmed(table(), non_neg_integer(), non_neg_integer()) -> boolean(). -all_trimmed(#machi_csum_table{table=T}, Left, Right) -> - FoldFun = fun({_, _}, false) -> +-spec all_trimmed(table(), binary(), non_neg_integer(), non_neg_integer()) -> boolean().
+all_trimmed(#machi_csum_table{table=T}, Filename, Left, Right) when is_binary(Filename) -> + FoldFun = fun({K, V}, false) -> false; ({K, V}, Pos) when is_integer(Pos) andalso Pos =< Right -> case {sext:decode(K), sext:decode(V)} of - {{Pos, Size}, trimmed} -> + {{file, _}, _} -> Pos; + {{Filename, Pos, Size}, trimmed} -> Pos + Size; - {{Offset, Size}, _} + {{Filename, Offset, Size}, _} when Offset + Size =< Left -> Left; _Eh -> @@ -176,65 +181,108 @@ all_trimmed(#machi_csum_table{table=T}, Left, Right) -> end end, case eleveldb:fold(T, FoldFun, Left, [{verify_checksums, true}]) of - false -> false; - Right -> true; - LastTrimmed when LastTrimmed < Right -> false; + false -> + false; + Right -> + true; + LastTrimmed when LastTrimmed < Right -> + false; _ -> %% LastTrimmed > Pos0, which is a irregular case but ok true end. -%% @doc returns whether all bytes 0-Pos0 is continously trimmed or -%% not, including header. --spec all_trimmed(table(), non_neg_integer()) -> boolean(). -all_trimmed(CsumT, Pos0) -> - all_trimmed(CsumT, 0, Pos0). - --spec any_trimmed(table(), +-spec any_trimmed(table(), binary(), pos_integer(), machi_dt:chunk_size()) -> boolean(). -any_trimmed(CsumT, Offset, Size) -> - Chunks = find(CsumT, Offset, Size), +any_trimmed(CsumT, Filename, Offset, Size) -> + Chunks = find(CsumT, Filename, Offset, Size), lists:any(fun({_, _, State}) -> State =:= trimmed end, Chunks). --spec calc_unwritten_bytes(table()) -> [byte_sequence()]. -calc_unwritten_bytes(#machi_csum_table{table=_} = CsumT) -> - case lists:sort(all(CsumT)) of +-spec calc_unwritten_bytes(table(), binary()) -> [byte_sequence()]. 
+calc_unwritten_bytes(#machi_csum_table{table=_} = CsumT, Filename) -> + case lists:sort(all(CsumT, Filename)) of [] -> - [{?MINIMUM_OFFSET, infinity}]; - Sorted -> - {LastOffset, _, _} = hd(Sorted), - build_unwritten_bytes_list(Sorted, LastOffset, []) + [{0, infinity}]; + [{0, _, _}|_] = Sorted -> + build_unwritten_bytes_list(Sorted, 0, []); + [{LastOffset, _, _}|_] = Sorted -> + build_unwritten_bytes_list(Sorted, LastOffset, [{0, LastOffset}]) end. -all(CsumT) -> +all(CsumT, Filename) -> FoldFun = fun(E, Acc) -> [E|Acc] end, - lists:reverse(foldl_chunks(FoldFun, [], CsumT)). + lists:reverse(foldl_file_chunks(FoldFun, [], CsumT, Filename)). + +all_files(#machi_csum_table{table=T}) -> + FoldFun = fun({K, V}, Acc) -> + case sext:decode(K) of + {file, Filename} -> + [{binary_to_list(Filename), sext:decode(V)}|Acc]; + _ -> + Acc + end; + (_, Acc) -> Acc + end, + eleveldb_fold(T, sext:encode({file, ""}), sext:encode({file, [255,255,255]}), + FoldFun, []). -spec close(table()) -> ok. close(#machi_csum_table{table=T}) -> ok = eleveldb:close(T). --spec delete(table()) -> ok. -delete(#machi_csum_table{table=T, file=F}) -> - catch eleveldb:close(T), - %% TODO change this to directory walk - case os:cmd("rm -rf " ++ F) of - "" -> ok; - E -> E + +-spec maybe_trim_file(table(), binary(), non_neg_integer()) -> + {ok, trimmed|not_trimmed} | {error, term()}. +maybe_trim_file(#machi_csum_table{table=T} = CsumT, Filename, EofP) when is_binary(Filename) -> + %% TODO: optimize; this code runs fold on eleveldb twice. + case all_trimmed(CsumT, Filename, 0, EofP) of + true -> + + Chunks = all(CsumT, Filename), + DeleteOps = lists:map(fun({O, L, _}) -> + {delete, sext:encode({Filename, O, L})} + end, Chunks), + FileTombstone = {put, sext:encode({file, Filename}), sext:encode(ts)}, + case eleveldb:write(T, [FileTombstone|DeleteOps], [{sync, true}]) of + ok -> {ok, trimmed}; + Other -> Other + end; + false -> + {ok, not_trimmed} end. 
--spec foldl_chunks(fun((chunk(), Acc0 :: term()) -> Acc :: term()), - Acc0 :: term(), table()) -> Acc :: term(). -foldl_chunks(Fun, Acc0, #machi_csum_table{table=T}) -> +%% @doc Folds over all chunks of a file +-spec foldl_file_chunks(fun((chunk(), Acc0 :: term()) -> Acc :: term()), + Acc0 :: term(), table(), binary()) -> Acc :: term(). +foldl_file_chunks(Fun, Acc0, #machi_csum_table{table=T}, Filename) when is_binary(Filename) -> FoldFun = fun({K, V}, Acc) -> - {Offset, Len} = sext:decode(K), + {Filename, Offset, Len} = sext:decode(K), Fun({Offset, Len, sext:decode(V)}, Acc); (_K, Acc) -> _ = lager:error("~p: wrong option?", [_K]), Acc end, + StartKey = {Filename, 0, 0}, + EndKey = { <>, 0, 0}, + eleveldb_fold(T, sext:encode(StartKey), sext:encode(EndKey), + FoldFun, Acc0). + + +%% @doc Folds over all chunks of all files +-spec foldl_chunks(fun((chunk(), Acc0 :: term()) -> Acc :: term()), + Acc0 :: term(), table()) -> Acc :: term(). +foldl_chunks(Fun, Acc0, #machi_csum_table{table=T}) -> + FoldFun = fun({K, V}, Acc) -> + {Filename, Offset, Len} = sext:decode(K), + Fun({Filename, Offset, Len, sext:decode(V)}, Acc); + (_K, Acc) -> + _ = lager:error("~p: wrong option?", [_K]), + Acc + end, eleveldb:fold(T, FoldFun, Acc0, [{verify_checksums, true}]). +%% == internal functions == + -spec build_unwritten_bytes_list( CsumData :: [{ Offset :: non_neg_integer(), Size :: pos_integer(), Checksum :: binary() }], @@ -298,3 +346,50 @@ eleveldb_do_fold({error, iterator_closed}, _, _, _, Acc) -> eleveldb_do_fold({error, invalid_iterator}, _, _, _, Acc) -> %% Probably reached to end Acc. 
+ + +%% Key1 < MaybeStartKey =< Key +%% FirstKey =< MaybeStartKey +search_for_start_key(T, Filename, Offset, Size) -> + MaybeStartKey = sext:encode({Filename, Offset, Size}), + FirstKey = sext:encode({Filename, 0, 0}), + {ok, I} = eleveldb:iterator(T, [], keys_only), + + try + case eleveldb:iterator_move(I, MaybeStartKey) of + {error, invalid_iterator} -> + %% No key in right - go for probably first key in the file + case eleveldb:iterator_move(I, FirstKey) of + {error, _} -> undefined; + {ok, Key0} -> goto_end(I, Key0, Offset) + end; + {ok, Key} when Key < FirstKey -> + FirstKey; + {ok, Key} -> + case eleveldb:iterator_move(I, prev) of + {error, invalid_iterator} -> + Key; + {ok, Key1} when Key1 < FirstKey -> + Key; + {ok, Key1} -> + Key1 + end + end + after + _ = eleveldb:iterator_close(I) + end. + +goto_end(I, Key, Offset) -> + case sext:decode(Key) of + {_Filename, O, L} when Offset =< O + L -> + Key; + {_Filename, O, L} when O + L < Offset -> + case eleveldb:iterator_move(I, next) of + {ok, NextKey} -> + goto_end(I, NextKey, Offset); + {error, _} -> + Key + end + end. + + diff --git a/src/machi_file_proxy.erl b/src/machi_file_proxy.erl index cae292c..e0a4f44 100644 --- a/src/machi_file_proxy.erl +++ b/src/machi_file_proxy.erl @@ -85,8 +85,6 @@ filename :: string() | undefined, data_path :: string() | undefined, wedged = false :: boolean(), - csum_file :: string()|undefined, - csum_path :: string()|undefined, data_filehandle :: file:io_device(), csum_table :: machi_csum_table:table(), eof_position = 0 :: non_neg_integer(), @@ -102,12 +100,14 @@ %% Public API -% @doc Start a new instance of the file proxy service. Takes the filename -% and data directory as arguments. This function is typically called by the -% `machi_file_proxy_sup:start_proxy/2' function. --spec start_link(FluName :: atom(), Filename :: string(), DataDir :: string()) -> any(). -start_link(FluName, Filename, DataDir) -> - gen_server:start_link(?MODULE, {FluName, Filename, DataDir}, []). 
+% @doc Start a new instance of the file proxy service. Takes the +% filename and data directory as arguments. This function is typically +% called by the `machi_file_proxy_sup:start_proxy/2' +% function. Checksum table is also passed at startup. +-spec start_link(Filename :: string(), + DataDir :: string(), CsumTable :: machi_csum_table:table()) -> any(). +start_link(Filename, DataDir, CsumTable) -> + gen_server:start_link(?MODULE, {Filename, DataDir, CsumTable}, []). % @doc Request to stop an instance of the file proxy service. -spec stop(Pid :: pid()) -> ok. @@ -218,26 +218,24 @@ checksum_list(Pid) -> %% gen_server callbacks % @private -init({FluName, Filename, DataDir}) -> - CsumFile = machi_util:make_checksum_filename(DataDir, Filename), +init({Filename, DataDir, CsumTable}) -> {_, DPath} = machi_util:make_data_filename(DataDir, Filename), - ok = filelib:ensure_dir(CsumFile), ok = filelib:ensure_dir(DPath), - {ok, CsumTable} = machi_csum_table:open(CsumFile, []), - UnwrittenBytes = machi_csum_table:calc_unwritten_bytes(CsumTable), + CsumTable1 = case machi_csum_table:is_file_trimmed(CsumTable, list_to_binary(Filename)) of + false -> CsumTable; + true -> trimmed + end, + + UnwrittenBytes = machi_csum_table:calc_unwritten_bytes(CsumTable, iolist_to_binary(Filename)), {Eof, infinity} = lists:last(UnwrittenBytes), {ok, FHd} = file:open(DPath, [read, write, binary, raw]), - %% Reserve for EC and stuff, to prevent eof when read - ok = file:pwrite(FHd, 0, binary:copy(<<"so what?">>, ?MINIMUM_OFFSET div 8)), Tref = schedule_tick(), St = #state{ - fluname = FluName, filename = Filename, data_dir = DataDir, data_path = DPath, - csum_file = CsumFile, data_filehandle = FHd, - csum_table = CsumTable, + csum_table = CsumTable1, tref = Tref, eof_position = Eof, max_file_size = machi_config:max_file_size()}, @@ -281,6 +279,13 @@ handle_call({read, _Offset, _Length, _}, _From, }) -> {reply, {error, wedged}, State#state{writes = {T + 1, Err + 1}}}; +handle_call({read, _Offset, 
_Length, _Opts}, _From, + State = #state{ + csum_table = trimmed, + reads = {T, Err} + }) -> + {reply, {error, trimmed}, State#state{reads = {T+1, Err+1}}}; + handle_call({read, Offset, Length, _Opts}, _From, State = #state{eof_position = Eof, reads = {T, Err} @@ -325,6 +330,11 @@ handle_call({write, _Offset, _ClientMeta, _Data}, _From, }) -> {reply, {error, wedged}, State#state{writes = {T + 1, Err + 1}}}; +handle_call({write, _, _, _}, _From, + State = #state{writes = {T, Err}, + csum_table = trimmed}) -> + {reply, {error, trimmed}, State#state{writes = {T + 1, Err + 1}}}; + handle_call({write, Offset, ClientMeta, Data}, _From, State = #state{filename = F, writes = {T, Err}, @@ -348,7 +358,7 @@ handle_call({write, Offset, ClientMeta, Data}, _From, {Error, Err + 1} end end, - {NewEof, infinity} = lists:last(machi_csum_table:calc_unwritten_bytes(CsumTable)), + {NewEof, infinity} = lists:last(machi_csum_table:calc_unwritten_bytes(CsumTable, iolist_to_binary(F))), lager:debug("Wrote ~p bytes at ~p of file ~p, NewEOF = ~p~n", [iolist_size(Data), Offset, F, NewEof]), {reply, Resp, State#state{writes = {T+1, NewErr}, @@ -365,35 +375,33 @@ handle_call({trim, _Offset, _ClientMeta, _Data}, _From, handle_call({trim, Offset, Size, _TriggerGC}, _From, State = #state{data_filehandle=FHd, + filename=Filename, ops = Ops, trims = {T, Err}, csum_table = CsumTable}) -> - case machi_csum_table:all_trimmed(CsumTable, Offset, Offset+Size) of - true -> - NewState = State#state{ops=Ops+1, trims={T, Err+1}}, - %% All bytes of that range was already trimmed returns ok - %% here, not {error, trimmed}, which means the whole file - %% was trimmed + F = iolist_to_binary(Filename), + LUpdate = maybe_regenerate_checksum( + FHd, + machi_csum_table:find_leftneighbor(CsumTable, + F, + Offset)), + RUpdate = maybe_regenerate_checksum( + FHd, + machi_csum_table:find_rightneighbor(CsumTable, + F, + Offset+Size)), + + case machi_csum_table:trim(CsumTable, F, Offset, + Size, LUpdate, RUpdate) of + ok -> 
+ {NewEof, infinity} = lists:last(machi_csum_table:calc_unwritten_bytes(CsumTable, F)), + NewState = State#state{ops=Ops+1, + trims={T+1, Err}, + eof_position=NewEof}, maybe_gc(ok, NewState); - false -> - LUpdate = maybe_regenerate_checksum( - FHd, - machi_csum_table:find_leftneighbor(CsumTable, Offset)), - RUpdate = maybe_regenerate_checksum( - FHd, - machi_csum_table:find_rightneighbor(CsumTable, Offset+Size)), - - case machi_csum_table:trim(CsumTable, Offset, Size, LUpdate, RUpdate) of - ok -> - {NewEof, infinity} = lists:last(machi_csum_table:calc_unwritten_bytes(CsumTable)), - NewState = State#state{ops=Ops+1, - trims={T+1, Err}, - eof_position=NewEof}, - maybe_gc(ok, NewState); - Error -> - {reply, Error, State#state{ops=Ops+1, trims={T, Err+1}}} - end + Error -> + {reply, Error, State#state{ops=Ops+1, trims={T, Err+1}}} end; %% APPENDS @@ -435,8 +443,9 @@ handle_call({append, ClientMeta, Extra, Data}, _From, {reply, Resp, State#state{appends = {T+1, NewErr}, eof_position = NewEof}}; -handle_call({checksum_list}, _FRom, State = #state{csum_table=T}) -> - All = machi_csum_table:all(T), +handle_call({checksum_list}, _FRom, State = #state{filename=Filename, + csum_table=T}) -> + All = machi_csum_table:all(T,iolist_to_binary(Filename)), {reply, {ok, All}, State}; handle_call(Req, _From, State) -> @@ -528,7 +537,6 @@ handle_info(Req, State) -> % @private terminate(Reason, #state{filename = F, data_filehandle = FHd, - csum_table = T, reads = {RT, RE}, writes = {WT, WE}, appends = {AT, AE} @@ -544,14 +552,7 @@ terminate(Reason, #state{filename = F, _ -> ok = file:sync(FHd), ok = file:close(FHd) - end, - case T of - undefined -> - noop; %% file deleted - _ -> - ok = machi_csum_table:close(T) - end, - ok. + end. 
% @private code_change(_OldVsn, State, _Extra) -> @@ -622,7 +623,8 @@ check_or_make_tagged_csum(OtherTag, _ClientCsum, _Data) -> do_read(FHd, Filename, CsumTable, Offset, Size, _, _) -> %% Note that find/3 only returns overlapping chunks, both borders %% are not aligned to original Offset and Size. - ChunkCsums = machi_csum_table:find(CsumTable, Offset, Size), + ChunkCsums = machi_csum_table:find(CsumTable, iolist_to_binary(Filename), + Offset, Size), read_all_ranges(FHd, Filename, ChunkCsums, [], []). -spec read_all_ranges(file:io_device(), string(), @@ -700,7 +702,7 @@ read_all_ranges(FHd, Filename, [{Offset, Size, TaggedCsum}|T], ReadChunks, Trimm handle_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Data) -> Size = iolist_size(Data), - case machi_csum_table:find(CsumTable, Offset, Size) of + case machi_csum_table:find(CsumTable, iolist_to_binary(Filename), Offset, Size) of [] -> %% Nothing should be there try do_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Size, Data) @@ -723,6 +725,7 @@ handle_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Data) -> ok; {ok, _Other} -> %% TODO: leave some debug/warning message here? + io:format(user, "baposdifa;lsdfkj<<<<<<<~n", []), {error, written} end; [{Offset, Size, OtherCsum}] -> @@ -731,11 +734,13 @@ handle_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Data) -> " a check for unwritten bytes gave us checksum ~p" " but the data we were trying to write has checksum ~p", [Offset, Filename, OtherCsum, TaggedCsum]), + io:format(user, "baposdifa;lsdfkj*************8~n", []), {error, written}; _Chunks -> %% TODO: Do we try to read all continuous chunks to see %% whether its total checksum matches client-provided checksum? - case machi_csum_table:any_trimmed(CsumTable, Offset, Size) of + case machi_csum_table:any_trimmed(CsumTable, iolist_to_binary(Filename), + Offset, Size) of true -> %% More than a byte is trimmed, besides, do we %% have to return exact written bytes? No.
Clients @@ -744,6 +749,7 @@ handle_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Data) -> {error, trimmed}; false -> %% No byte is trimmed, but at least one byte is written + io:format(user, "baposdifa;lsdfkj*************8 ~p~n", [_Chunks]), {error, written} end end. @@ -761,6 +767,7 @@ handle_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Data) -> do_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Size, Data) -> case file:pwrite(FHd, Offset, Data) of ok -> + F = iolist_to_binary(Filename), lager:debug("Successful write in file ~p at offset ~p, length ~p", [Filename, Offset, Size]), @@ -769,11 +776,15 @@ do_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Size, Data) -> %% as server_sha LUpdate = maybe_regenerate_checksum( FHd, - machi_csum_table:find_leftneighbor(CsumTable, Offset)), + machi_csum_table:find_leftneighbor(CsumTable, + F, + Offset)), RUpdate = maybe_regenerate_checksum( FHd, - machi_csum_table:find_rightneighbor(CsumTable, Offset+Size)), - ok = machi_csum_table:write(CsumTable, Offset, Size, + machi_csum_table:find_rightneighbor(CsumTable, + F, + Offset+Size)), + ok = machi_csum_table:write(CsumTable, F, Offset, Size, TaggedCsum, LUpdate, RUpdate), lager:debug("Successful write to checksum file for ~p", [Filename]), @@ -838,32 +849,27 @@ maybe_gc(Reply, S = #state{eof_position = Eof, lager:debug("The file is still small; not trying GC (Eof, MaxFileSize) = (~p, ~p)~n", [Eof, MaxFileSize]), {reply, Reply, S}; -maybe_gc(Reply, S = #state{fluname=FluName, - data_filehandle = FHd, +maybe_gc(Reply, S = #state{data_filehandle = FHd, data_dir = DataDir, filename = Filename, eof_position = Eof, csum_table=CsumTable}) -> - case machi_csum_table:all_trimmed(CsumTable, ?MINIMUM_OFFSET, Eof) of - true -> - lager:debug("GC? Let's do it: ~p.~n", [Filename]), - %% Before unlinking a file, it should inform - %% machi_flu_filename_mgr that this file is - %% deleted and mark it as "trimmed" to avoid - %% filename reuse and resurrection. 
Maybe garbage - %% will remain if a process crashed but it also - %% should be recovered at filename_mgr startup. + lager:debug("GC? Let's try it: ~p.~n", [Filename]), - %% Also, this should be informed *before* file proxy - %% deletes files. - ok = machi_flu_metadata_mgr:trim_file(FluName, {file, Filename}), + case machi_csum_table:maybe_trim_file(CsumTable, iolist_to_binary(Filename), Eof) of + {ok, trimmed} -> + %% Checksum table entries are all trimmed now, unlinking + %% file from operating system ok = file:close(FHd), {_, DPath} = machi_util:make_data_filename(DataDir, Filename), ok = file:delete(DPath), - machi_csum_table:delete(CsumTable), - {stop, normal, Reply, - S#state{data_filehandle=undefined, - csum_table=undefined}}; - false -> + lager:info("File ~s has been unlinked as all chunks" + " were trimmed.", [Filename]), + {stop, normal, Reply, S#state{data_filehandle=undefined}}; + {ok, not_trimmed} -> + {reply, Reply, S}; + {error, _} = Error -> + lager:error("machi_csum_table:maybe_trim_file/4 has been " + "unexpectedly failed: ~p", [Error]), {reply, Reply, S} end. diff --git a/src/machi_file_proxy_sup.erl b/src/machi_file_proxy_sup.erl index a165a68..7301b54 100644 --- a/src/machi_file_proxy_sup.erl +++ b/src/machi_file_proxy_sup.erl @@ -44,8 +44,9 @@ start_link(FluName) -> supervisor:start_link({local, make_proxy_name(FluName)}, ?MODULE, []). start_proxy(FluName, DataDir, Filename) -> + {ok, CsumTable} = machi_flu_filename_mgr:get_csum_table(FluName), supervisor:start_child(make_proxy_name(FluName), - [FluName, Filename, DataDir]). + [Filename, DataDir, CsumTable]). 
 init([]) ->
     SupFlags = {simple_one_for_one, 1000, 10},
diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl
index 6610230..029a5dd 100644
--- a/src/machi_flu1_net_server.erl
+++ b/src/machi_flu1_net_server.erl
@@ -392,17 +392,14 @@ do_server_write_chunk(File, Offset, Chunk, CSum_tag, CSum, #state{flu_name=FluNa
 do_server_read_chunk(File, Offset, Size, Opts, #state{flu_name=FluName})->
     case sanitize_file_string(File) of
         ok ->
-            case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}) of
-                {ok, Pid} ->
-                    case machi_file_proxy:read(Pid, Offset, Size, Opts) of
-                        %% XXX FIXME
-                        %% For now we are omiting the checksum data because it blows up
-                        %% protobufs.
-                        {ok, ChunksAndTrimmed} -> {ok, ChunksAndTrimmed};
-                        Other -> Other
-                    end;
-                {error, trimmed} = Error ->
-                    Error
+            {ok, Pid} = machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}),
+            case machi_file_proxy:read(Pid, Offset, Size, Opts) of
+                %% XXX FIXME
+                %% For now we are omiting the checksum data because it blows up
+                %% protobufs.
+                {ok, ChunksAndTrimmed} -> {ok, ChunksAndTrimmed};
+                Other ->
+                    Other
             end;
         _ ->
             {error, bad_arg}
diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl
index 293fdc3..2b08071 100644
--- a/src/machi_flu_filename_mgr.erl
+++ b/src/machi_flu_filename_mgr.erl
@@ -48,13 +48,12 @@
 -compile(export_all).
 -endif.
 
--export([
-    child_spec/2,
-    start_link/2,
-    find_or_make_filename_from_prefix/4,
-    increment_prefix_sequence/3,
-    list_files_by_prefix/2
-    ]).
+-export([child_spec/2,
+         start_link/2,
+         find_or_make_filename_from_prefix/4,
+         increment_prefix_sequence/3,
+         list_files_by_prefix/2,
+         get_csum_table/1]).
 
 %% gen_server callbacks
 -export([
@@ -72,7 +71,8 @@
 -record(state, {fluname :: atom(),
                 tid     :: ets:tid(),
                 datadir :: string(),
-                epoch   :: pv1_epoch()
+                epoch   :: pv1_epoch(),
+                csum_table :: machi_csum_table:table()
                }).
 %% public API
@@ -126,13 +126,25 @@ list_files_by_prefix(_FluName, Other) ->
     lager:error("~p is not a valid prefix.", [Other]),
     error(badarg).
 
+get_csum_table(FluName) when is_atom(FluName) ->
+    gen_server:call(make_filename_mgr_name(FluName), get_csum_table, ?TIMEOUT).
+
 %% gen_server API
 init([FluName, DataDir]) ->
     Tid = ets:new(make_filename_mgr_name(FluName), [named_table, {read_concurrency, true}]),
+
+    %% metadata includes checksums, offsets and filenames
+    CsumTableDir = filename:join(DataDir, "metadata"),
+    {ok, CsumTable} = machi_csum_table:open(CsumTableDir, []),
+    %% TODO make sure all files non-existent, if any remaining files
+    %% here, just delete it. They're in the list *because* they're all
+    %% trimmed.
+
     {ok, #state{fluname = FluName,
                 epoch = ?DUMMY_PV1_EPOCH,
                 datadir = DataDir,
-                tid = Tid}}.
+                tid = Tid,
+                csum_table = CsumTable}}.
 
 handle_cast(Req, State) ->
     lager:warning("Got unknown cast ~p", [Req]),
@@ -167,6 +179,9 @@ handle_call({list_files, Prefix}, From, S = #state{ datadir = DataDir }) ->
     end),
     {noreply, S};
 
+handle_call(get_csum_table, _From, S = #state{ csum_table = CsumTable }) ->
+    {reply, {ok, CsumTable}, S};
+
 handle_call(Req, From, State) ->
     lager:warning("Got unknown call ~p from ~p", [Req, From]),
     {reply, hoge, State}.
@@ -175,8 +190,9 @@ handle_info(Info, State) ->
     lager:warning("Got unknown info ~p", [Info]),
     {noreply, State}.
 
-terminate(Reason, _State) ->
+terminate(Reason, _State = #state{ csum_table = CsumTable} ) ->
     lager:info("Shutting down because ~p", [Reason]),
+    ok = machi_csum_table:close(CsumTable),
     ok.
 
 code_change(_OldVsn, State, _Extra) ->
diff --git a/src/machi_flu_metadata_mgr.erl b/src/machi_flu_metadata_mgr.erl
index 66274b3..3b7bc4a 100644
--- a/src/machi_flu_metadata_mgr.erl
+++ b/src/machi_flu_metadata_mgr.erl
@@ -39,13 +39,10 @@
 -define(HASH(X), erlang:phash2(X)). %% hash algorithm to use
 -define(TIMEOUT, 10 * 1000). %% 10 second timeout
 
--define(KNOWN_FILES_LIST_PREFIX, "known_files_").
-
 -record(state, {fluname :: atom(),
                 datadir :: string(),
                 tid :: ets:tid(),
-                cnt :: non_neg_integer(),
-                trimmed_files :: machi_plist:plist()
+                cnt :: non_neg_integer()
                }).
 
 %% This record goes in the ets table where filename is the key
@@ -62,8 +59,7 @@
          lookup_proxy_pid/2,
          start_proxy_pid/2,
          stop_proxy_pid/2,
-         build_metadata_mgr_name/2,
-         trim_file/2
+         build_metadata_mgr_name/2
         ]).
 
 %% gen_server callbacks
@@ -101,24 +97,15 @@ start_proxy_pid(FluName, {file, Filename}) ->
 stop_proxy_pid(FluName, {file, Filename}) ->
     gen_server:call(get_manager_atom(FluName, Filename), {stop_proxy_pid, Filename}, ?TIMEOUT).
 
-trim_file(FluName, {file, Filename}) ->
-    gen_server:call(get_manager_atom(FluName, Filename), {trim_file, Filename}, ?TIMEOUT).
-
 %% gen_server callbacks
 init([FluName, Name, DataDir, Num]) ->
     %% important: we'll need another persistent storage to
     %% remember deleted (trimmed) file, to prevent resurrection after
     %% flu restart and append.
-    FileListFileName =
-        filename:join([DataDir, ?KNOWN_FILES_LIST_PREFIX ++ atom_to_list(FluName)]),
-    {ok, PList} = machi_plist:open(FileListFileName, []),
-    %% TODO make sure all files non-existent, if any remaining files
-    %% here, just delete it. They're in the list *because* they're all
-    %% trimmed.
     Tid = ets:new(Name, [{keypos, 2}, {read_concurrency, true}, {write_concurrency, true}]),
-    {ok, #state{fluname = FluName, datadir = DataDir, tid = Tid, cnt = Num,
-                trimmed_files=PList}}.
+
+    {ok, #state{fluname = FluName, datadir = DataDir, tid = Tid, cnt = Num}}.
 handle_cast(Req, State) ->
     lager:warning("Got unknown cast ~p", [Req]),
@@ -132,23 +119,17 @@ handle_call({proxy_pid, Filename}, _From, State = #state{ tid = Tid }) ->
     {reply, Reply, State};
 
 handle_call({start_proxy_pid, Filename}, _From,
-            State = #state{ fluname = N, tid = Tid, datadir = D,
-                            trimmed_files=TrimmedFiles}) ->
-    case machi_plist:find(TrimmedFiles, Filename) of
-        false ->
-            NewR = case lookup_md(Tid, Filename) of
-                       not_found ->
-                           start_file_proxy(N, D, Filename);
-                       #md{ proxy_pid = undefined } = R0 ->
-                           start_file_proxy(N, D, R0);
-                       #md{ proxy_pid = _Pid } = R1 ->
-                           R1
-                   end,
-            update_ets(Tid, NewR),
-            {reply, {ok, NewR#md.proxy_pid}, State};
-        true ->
-            {reply, {error, trimmed}, State}
-    end;
+            State = #state{ fluname = N, tid = Tid, datadir = D}) ->
+    NewR = case lookup_md(Tid, Filename) of
+               not_found ->
+                   start_file_proxy(N, D, Filename);
+               #md{ proxy_pid = undefined } = R0 ->
+                   start_file_proxy(N, D, R0);
+               #md{ proxy_pid = _Pid } = R1 ->
+                   R1
+           end,
+    update_ets(Tid, NewR),
+    {reply, {ok, NewR#md.proxy_pid}, State};
 
 handle_call({stop_proxy_pid, Filename}, _From, State = #state{ tid = Tid }) ->
     case lookup_md(Tid, Filename) of
@@ -163,15 +144,6 @@ handle_call({stop_proxy_pid, Filename}, _From, State = #state{ tid = Tid }) ->
     end,
     {reply, ok, State};
 
-handle_call({trim_file, Filename}, _,
-            S = #state{trimmed_files = TrimmedFiles }) ->
-    case machi_plist:add(TrimmedFiles, Filename) of
-        {ok, TrimmedFiles2} ->
-            {reply, ok, S#state{trimmed_files=TrimmedFiles2}};
-        Error ->
-            {reply, Error, S}
-    end;
-
 handle_call(Req, From, State) ->
     lager:warning("Got unknown call ~p from ~p", [Req, From]),
     {reply, hoge, State}.
@@ -219,9 +191,8 @@ handle_info(Info, State) ->
     lager:warning("Got unknown info ~p", [Info]),
     {noreply, State}.
 
-terminate(Reason, _State = #state{trimmed_files=TrimmedFiles}) ->
+terminate(Reason, _State) ->
     lager:info("Shutting down because ~p", [Reason]),
-    machi_plist:close(TrimmedFiles),
     ok.
 code_change(_OldVsn, State, _Extra) ->
@@ -253,7 +224,7 @@ lookup_md(Tid, Data) ->
         [R] -> R
     end.
 
-start_file_proxy(FluName, D, R = #md{filename = F} ) ->
+start_file_proxy(FluName, D, R = #md{filename = F}) ->
     {ok, Pid} = machi_file_proxy_sup:start_proxy(FluName, D, F),
     Mref = monitor(process, Pid),
     R#md{ proxy_pid = Pid, mref = Mref };
diff --git a/src/machi_plist.erl b/src/machi_plist.erl
deleted file mode 100644
index 7750b0a..0000000
--- a/src/machi_plist.erl
+++ /dev/null
@@ -1,69 +0,0 @@
--module(machi_plist).
-
-%%% @doc persistent list of binaries
-
--export([open/2, close/1, find/2, add/2]).
-
--ifdef(TEST).
--export([all/1]).
--endif.
-
--record(machi_plist,
-        {filename :: file:filename_all(),
-         fd :: file:io_device(),
-         list = [] :: list(string)}).
-
--type plist() :: #machi_plist{}.
--export_type([plist/0]).
-
--spec open(file:filename_all(), proplists:proplist()) ->
-                  {ok, plist()} | {error, file:posix()}.
-open(Filename, _Opt) ->
-    %% TODO: This decode could fail if the file didn't finish writing
-    %% whole contents, which should be fixed by some persistent
-    %% solution.
-    List = case file:read_file(Filename) of
-               {ok, <<>>} -> [];
-               {ok, Bin} -> binary_to_term(Bin);
-               {error, enoent} -> []
-           end,
-    case file:open(Filename, [read, write, raw, binary, sync]) of
-        {ok, Fd} ->
-            {ok, #machi_plist{filename=Filename,
-                              fd=Fd,
-                              list=List}};
-        Error ->
-            Error
-    end.
-
--spec close(plist()) -> ok.
-close(#machi_plist{fd=Fd}) ->
-    _ = file:close(Fd).
-
--spec find(plist(), string()) -> boolean().
-find(#machi_plist{list=List}, Name) ->
-    lists:member(Name, List).
-
--spec add(plist(), string()) -> {ok, plist()} | {error, file:posix()}.
-add(Plist = #machi_plist{list=List0, fd=Fd}, Name) ->
-    case find(Plist, Name) of
-        true ->
-            {ok, Plist};
-        false ->
-            List = lists:append(List0, [Name]),
-            %% TODO: partial write could break the file with other
-            %% persistent info (even lose data of trimmed states);
-            %% needs a solution.
-            case file:pwrite(Fd, 0, term_to_binary(List)) of
-                ok ->
-                    {ok, Plist#machi_plist{list=List}};
-                Error ->
-                    Error
-            end
-    end.
-
--ifdef(TEST).
--spec all(plist()) -> [file:filename()].
-all(#machi_plist{list=List}) ->
-    List.
--endif.
diff --git a/test/machi_csum_table_test.erl b/test/machi_csum_table_test.erl
index f2b7a4f..82c499d 100644
--- a/test/machi_csum_table_test.erl
+++ b/test/machi_csum_table_test.erl
@@ -2,68 +2,69 @@
 -compile(export_all).
 -include_lib("eunit/include/eunit.hrl").
 
--define(HDR, {0, 1024, none}).
 
 cleanup(Dir) ->
     os:cmd("rm -rf " ++ Dir).
 
 smoke_test() ->
-    Filename = "./temp-checksum-dumb-file",
-    _ = cleanup(Filename),
-    {ok, MC} = machi_csum_table:open(Filename, []),
-    ?assertEqual([{1024, infinity}],
-                 machi_csum_table:calc_unwritten_bytes(MC)),
+    DBFile = "./temp-checksum-dumb-file",
+    Filename = <<"/some/puppy/and/cats^^^42">>,
+    _ = cleanup(DBFile),
+    {ok, MC} = machi_csum_table:open(DBFile, []),
+    ?assertEqual([{0, infinity}],
+                 machi_csum_table:calc_unwritten_bytes(MC, Filename)),
     Entry = {Offset, Size, Checksum} = {1064, 34, <<"deadbeef">>},
-    [] = machi_csum_table:find(MC, Offset, Size),
-    ok = machi_csum_table:write(MC, Offset, Size, Checksum),
-    [{1024, 40}, {1098, infinity}] = machi_csum_table:calc_unwritten_bytes(MC),
-    ?assertEqual([Entry], machi_csum_table:find(MC, Offset, Size)),
-    ok = machi_csum_table:trim(MC, Offset, Size, undefined, undefined),
+    [] = machi_csum_table:find(MC, Filename, Offset, Size),
+    ok = machi_csum_table:write(MC, Filename, Offset, Size, Checksum),
+    [{0, 1064}, {1098, infinity}] = machi_csum_table:calc_unwritten_bytes(MC, Filename),
+    ?assertEqual([Entry], machi_csum_table:find(MC, Filename, Offset, Size)),
+    ok = machi_csum_table:trim(MC, Filename, Offset, Size, undefined, undefined),
     ?assertEqual([{Offset, Size, trimmed}],
-                 machi_csum_table:find(MC, Offset, Size)),
-    ok = machi_csum_table:close(MC),
-    ok = machi_csum_table:delete(MC).
+                 machi_csum_table:find(MC, Filename, Offset, Size)),
+    ok = machi_csum_table:close(MC).
 
 close_test() ->
-    Filename = "./temp-checksum-dumb-file-2",
-    _ = cleanup(Filename),
-    {ok, MC} = machi_csum_table:open(Filename, []),
+    DBFile = "./temp-checksum-dumb-file-2",
+    Filename = <<"/some/puppy/and/cats^^^43">>,
+    _ = cleanup(DBFile),
+    {ok, MC} = machi_csum_table:open(DBFile, []),
     Entry = {Offset, Size, Checksum} = {1064, 34, <<"deadbeef">>},
-    [] = machi_csum_table:find(MC, Offset, Size),
-    ok = machi_csum_table:write(MC, Offset, Size, Checksum),
-    [Entry] = machi_csum_table:find(MC, Offset, Size),
+    [] = machi_csum_table:find(MC, Filename, Offset, Size),
+    ok = machi_csum_table:write(MC, Filename, Offset, Size, Checksum),
+    [Entry] = machi_csum_table:find(MC, Filename, Offset, Size),
     ok = machi_csum_table:close(MC),
-    {ok, MC2} = machi_csum_table:open(Filename, []),
-    [Entry] = machi_csum_table:find(MC2, Offset, Size),
-    ok = machi_csum_table:trim(MC2, Offset, Size, undefined, undefined),
-    [{Offset, Size, trimmed}] = machi_csum_table:find(MC2, Offset, Size),
-    ok = machi_csum_table:delete(MC2).
+    {ok, MC2} = machi_csum_table:open(DBFile, []),
+    [Entry] = machi_csum_table:find(MC2, Filename, Offset, Size),
+    ok = machi_csum_table:trim(MC2, Filename, Offset, Size, undefined, undefined),
+    [{Offset, Size, trimmed}] = machi_csum_table:find(MC2, Filename, Offset, Size),
+    ok = machi_csum_table:close(MC2).
 smoke2_test() ->
-    Filename = "./temp-checksum-dumb-file-3",
-    _ = cleanup(Filename),
-    {ok, MC} = machi_csum_table:open(Filename, []),
+    DBFile = "./temp-checksum-dumb-file-3",
+    Filename = <<"/some/puppy/and/cats^^^43">>,
+    _ = cleanup(DBFile),
+    {ok, MC} = machi_csum_table:open(DBFile, []),
     Entry = {Offset, Size, Checksum} = {1025, 10, <<"deadbeef">>},
-    ok = machi_csum_table:write(MC, Offset, Size, Checksum),
-    ?assertEqual([], machi_csum_table:find(MC, 0, 0)),
-    ?assertEqual([?HDR], machi_csum_table:find(MC, 0, 1)),
-    [Entry] = machi_csum_table:find(MC, Offset, Size),
-    [?HDR] = machi_csum_table:find(MC, 1, 1024),
-    ?assertEqual([?HDR, Entry],
-                 machi_csum_table:find(MC, 1023, 1024)),
-    [Entry] = machi_csum_table:find(MC, 1024, 1024),
-    [Entry] = machi_csum_table:find(MC, 1025, 1024),
+    ok = machi_csum_table:write(MC, Filename, Offset, Size, Checksum),
+    ?assertEqual([], machi_csum_table:find(MC, Filename, 0, 0)),
+    ?assertEqual([], machi_csum_table:find(MC, Filename, 0, 1)),
+    [Entry] = machi_csum_table:find(MC, Filename, Offset, Size),
+    [] = machi_csum_table:find(MC, Filename, 1, 1024),
+    ?assertEqual([Entry],
+                 machi_csum_table:find(MC, Filename, 1023, 1024)),
+    [Entry] = machi_csum_table:find(MC, Filename, 1024, 1024),
+    [Entry] = machi_csum_table:find(MC, Filename, 1025, 1024),
 
-    ok = machi_csum_table:trim(MC, Offset, Size, undefined, undefined),
-    [{Offset, Size, trimmed}] = machi_csum_table:find(MC, Offset, Size),
-    ok = machi_csum_table:close(MC),
-    ok = machi_csum_table:delete(MC).
+    ok = machi_csum_table:trim(MC, Filename, Offset, Size, undefined, undefined),
+    [{Offset, Size, trimmed}] = machi_csum_table:find(MC, Filename, Offset, Size),
+    ok = machi_csum_table:close(MC).
 smoke3_test() ->
-    Filename = "./temp-checksum-dumb-file-4",
-    _ = cleanup(Filename),
-    {ok, MC} = machi_csum_table:open(Filename, []),
+    DBFile = "./temp-checksum-dumb-file-4",
+    Filename = <<"/some/puppy/and/cats^^^44">>,
+    _ = cleanup(DBFile),
+    {ok, MC} = machi_csum_table:open(DBFile, []),
     Scenario =
         [%% Command, {Offset, Size, Csum}, LeftNeighbor, RightNeibor
          {?LINE, write, {2000, 10, <<"heh">>}, undefined, undefined},
@@ -84,9 +85,9 @@ smoke3_test() ->
          %% ?debugVal({_Line, Chunk}),
          {Offset, Size, Csum} = Chunk,
          ?assertEqual(LeftN0,
-                      machi_csum_table:find_leftneighbor(MC, Offset)),
+                      machi_csum_table:find_leftneighbor(MC, Filename, Offset)),
          ?assertEqual(RightN0,
-                      machi_csum_table:find_rightneighbor(MC, Offset+Size)),
+                      machi_csum_table:find_rightneighbor(MC, Filename, Offset+Size)),
          LeftN = case LeftN0 of
                      {OffsL, SizeL, trimmed} -> {OffsL, SizeL, trimmed};
                      {OffsL, SizeL, _} -> {OffsL, SizeL, <<"boom">>};
@@ -98,19 +99,18 @@ smoke3_test() ->
                  end,
          case Cmd of
              write ->
-                 ok = machi_csum_table:write(MC, Offset, Size, Csum,
+                 ok = machi_csum_table:write(MC, Filename, Offset, Size, Csum,
                                              LeftN, RightN);
              trim ->
-                 ok = machi_csum_table:trim(MC, Offset, Size,
+                 ok = machi_csum_table:trim(MC, Filename, Offset, Size,
                                             LeftN, RightN)
          end
      end || {_Line, Cmd, Chunk, LeftN0, RightN0} <- Scenario ],
 
-    ?assert(not machi_csum_table:all_trimmed(MC, 10000)),
-    machi_csum_table:trim(MC, 0, 10000, undefined, undefined),
-    ?assert(machi_csum_table:all_trimmed(MC, 10000)),
+    ?assert(not machi_csum_table:all_trimmed(MC, Filename, 0, 10000)),
+    machi_csum_table:trim(MC, Filename, 0, 10000, undefined, undefined),
+    ?assert(machi_csum_table:all_trimmed(MC, Filename, 0, 10000)),
 
-    ok = machi_csum_table:close(MC),
-    ok = machi_csum_table:delete(MC).
+    ok = machi_csum_table:close(MC).
 %% TODO: add quickcheck test here
diff --git a/test/machi_file_proxy_eqc.erl b/test/machi_file_proxy_eqc.erl
index dd36787..53337a2 100644
--- a/test/machi_file_proxy_eqc.erl
+++ b/test/machi_file_proxy_eqc.erl
@@ -116,11 +116,11 @@ get_written_interval(L) ->
 initial_state() ->
     {_, _, MS} = os:timestamp(),
     Filename = test_server:temp_name("eqc_data") ++ "." ++ integer_to_list(MS),
-    #state{filename=Filename, written=[{0,1024}]}.
+    #state{filename=Filename, written=[]}.
 
 initial_state(I, T) ->
     S=initial_state(),
-    S#state{written=[{0,1024}],
+    S#state{written=[],
             planned_writes=I,
             planned_trims=T}.
 
@@ -230,7 +230,8 @@ start_command(S) ->
     {call, ?MODULE, start, [S]}.
 
 start(#state{filename=File}) ->
-    {ok, Pid} = machi_file_proxy:start_link(some_flu, File, ?TESTDIR),
+    CsumT = get_csum_table(),
+    {ok, Pid} = machi_file_proxy:start_link(File, ?TESTDIR, CsumT),
     unlink(Pid),
     Pid.
 
@@ -432,6 +433,40 @@ stop_post(_, _, _) -> true.
 stop_next(S, _, _) ->
     S#state{pid=undefined, prev_extra=0}.
 
+csum_table_holder() ->
+    Parent = self(),
+    spawn_link(fun() ->
+                       CsumFile = test_server:temp_name("eqc_data-csum"),
+                       filelib:ensure_dir(CsumFile),
+                       {ok, CsumT} = machi_csum_table:open(CsumFile, []),
+                       erlang:register(csum_table_holder, self()),
+                       Parent ! ok,
+                       csum_table_holder_loop(CsumT),
+                       machi_csum_table:close(CsumT),
+                       erlang:unregister(csum_table_holder)
+               end),
+    receive
+        Other -> Other
+    after 1000 ->
+            timeout
+    end.
+
+csum_table_holder_loop(CsumT) ->
+    receive
+        {get, From} ->
+            From ! CsumT;
+        stop ->
+            ok
+    end.
+
+get_csum_table() ->
+    csum_table_holder ! {get, self()},
+    receive CsumT -> CsumT
+    end.
+
+stop_csum_table_holder() ->
+    catch csum_table_holder ! stop.
+
 %% Property
 
 prop_ok() ->
@@ -440,7 +475,9 @@
         {shuffle_interval(), shuffle_interval()},
         ?FORALL(Cmds, parallel_commands(?MODULE, initial_state(I, T)),
                 begin
+                    ok = csum_table_holder(),
                     {H, S, Res} = run_parallel_commands(?MODULE, Cmds),
+                    stop_csum_table_holder(),
                     cleanup(),
                     pretty_commands(?MODULE, Cmds, {H, S, Res},
                                     aggregate(command_names(Cmds), Res == ok))
diff --git a/test/machi_file_proxy_test.erl b/test/machi_file_proxy_test.erl
index a04d880..727ca35 100644
--- a/test/machi_file_proxy_test.erl
+++ b/test/machi_file_proxy_test.erl
@@ -77,25 +77,28 @@ random_binary(Start, End) ->
     end.
 
 setup() ->
-    {ok, Pid} = machi_file_proxy:start_link(fluname, "test", ?TESTDIR),
-    Pid.
+    {ok, CsumT} = machi_csum_table:open(filename:join([?TESTDIR, "csumfile"]), []),
+    {ok, Pid} = machi_file_proxy:start_link("test", ?TESTDIR, CsumT),
+    {Pid, CsumT}.
 
-teardown(Pid) ->
-    catch machi_file_proxy:stop(Pid).
+teardown({Pid, CsumT}) ->
+    catch machi_file_proxy:stop(Pid),
+    catch machi_csum_table:close(CsumT).
 machi_file_proxy_test_() ->
     clean_up_data_dir(?TESTDIR),
     {setup,
      fun setup/0,
      fun teardown/1,
-     fun(Pid) ->
+     fun({Pid, _}) ->
              [
               ?_assertEqual({error, bad_arg}, machi_file_proxy:read(Pid, -1, -1)),
               ?_assertEqual({error, bad_arg}, machi_file_proxy:write(Pid, -1, <<"yo">>)),
               ?_assertEqual({error, bad_arg}, machi_file_proxy:append(Pid, [], -1, <<"krep">>)),
-              ?_assertMatch({ok, {_, []}}, machi_file_proxy:read(Pid, 1, 1)),
+              ?_assertMatch({error, not_written}, machi_file_proxy:read(Pid, 1, 1)),
               ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1024, 1)),
-              ?_assertMatch({ok, {_, []}}, machi_file_proxy:read(Pid, 1, 1024)),
+              ?_assertMatch({ok, "test", _}, machi_file_proxy:append(Pid, random_binary(0, 1024))),
+              ?_assertMatch({ok, _}, machi_file_proxy:read(Pid, 1, 1024)),
               ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1024, ?HYOOGE)),
               ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, ?HYOOGE, 1)),
               {timeout, 10,
@@ -114,7 +117,7 @@ multiple_chunks_read_test_() ->
     {setup,
      fun setup/0,
      fun teardown/1,
-     fun(Pid) ->
+     fun({Pid, _}) ->
              [
               ?_assertEqual(ok, machi_file_proxy:trim(Pid, 0, 1, false)),
               ?_assertMatch({ok, {[], [{"test", 0, 1}]}},
diff --git a/test/machi_flu1_test.erl b/test/machi_flu1_test.erl
index a1d098a..99d7887 100644
--- a/test/machi_flu1_test.erl
+++ b/test/machi_flu1_test.erl
@@ -122,8 +122,8 @@ flu_smoke_test() ->
     {ok, [{_,File1}]} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH),
     Len1 = size(Chunk1),
     {error, not_written} = ?FLU_C:read_chunk(Host, TcpPort,
-                                            ?DUMMY_PV1_EPOCH,
-                                            File1, Off1*983829323, Len1, []),
+                                             ?DUMMY_PV1_EPOCH,
+                                             File1, Off1*983829323, Len1, []),
     %% XXX FIXME
     %%
     %% This is failing because the read extends past the end of the file.
diff --git a/test/machi_pb_high_client_test.erl b/test/machi_pb_high_client_test.erl
index 16b125c..85ed92b 100644
--- a/test/machi_pb_high_client_test.erl
+++ b/test/machi_pb_high_client_test.erl
@@ -91,10 +91,7 @@ smoke_test2() ->
                  #p_srvr{name=Name, props=Props} = P,
                  Dir = proplists:get_value(data_dir, Props),
                  ?assertEqual({ok, [File1Bin]},
-                              file:list_dir(filename:join([Dir, "data"]))),
-                 FileListFileName = filename:join([Dir, "known_files_" ++ atom_to_list(Name)]),
-                 {ok, Plist} = machi_plist:open(FileListFileName, []),
-                 ?assertEqual([], machi_plist:all(Plist))
+                              file:list_dir(filename:join([Dir, "data"])))
             end || P <- Ps],
 
             [begin
@@ -118,12 +115,11 @@ smoke_test2() ->
             File = binary_to_list(Filex),
             [begin
                  #p_srvr{name=Name, props=Props} = P,
-                 Dir = proplists:get_value(data_dir, Props),
-                 ?assertEqual({ok, []},
-                              file:list_dir(filename:join([Dir, "data"]))),
-                 FileListFileName = filename:join([Dir, "known_files_" ++ atom_to_list(Name)]),
-                 {ok, Plist} = machi_plist:open(FileListFileName, []),
-                 ?assertEqual([File], machi_plist:all(Plist))
+                 DataDir = filename:join([proplists:get_value(data_dir, Props), "data"]),
+                 ?assertEqual({ok, []}, file:list_dir(DataDir)),
+                 {ok, CsumT} = machi_flu_filename_mgr:get_csum_table(Name),
+                 Plist = machi_csum_table:all_files(CsumT),
+                 ?assertEqual([{File, ts}], Plist)
             end || P <- Ps],
 
             [begin
diff --git a/test/machi_plist_test.erl b/test/machi_plist_test.erl
deleted file mode 100644
index a796c1b..0000000
--- a/test/machi_plist_test.erl
+++ /dev/null
@@ -1,17 +0,0 @@
--module(machi_plist_test).
-
--include_lib("eunit/include/eunit.hrl").
-
-open_close_test() ->
-    FileName = "bark-bark-one",
-    file:delete(FileName),
-    {ok, PList0} = machi_plist:open(FileName, []),
-    {ok, PList1} = machi_plist:add(PList0, "boomar"),
-    ?assertEqual(["boomar"], machi_plist:all(PList1)),
-    ok = machi_plist:close(PList1),
-
-    {ok, PList2} = machi_plist:open(FileName, []),
-    ?assertEqual(["boomar"], machi_plist:all(PList2)),
-    ok = machi_plist:close(PList2),
-    file:delete(FileName),
-    ok.
-- 
2.45.2


From 3bd575899f6c61b7a34d75d97847022460872f26 Mon Sep 17 00:00:00 2001
From: Scott Lystig Fritchie
Date: Wed, 10 Feb 2016 16:39:57 +0900
Subject: [PATCH 21/53] WIP: narrowing in on repair problems due to
 double-write errors 2

---
 src/machi_chain_manager1.erl     | 11 ++++++-----
 src/machi_cr_client.erl          | 11 +++++++++++
 src/machi_flu1_append_server.erl |  7 +++++++
 src/machi_flu_filename_mgr.erl   | 27 +++++++++++++++++----------
 test/machi_ap_repair_eqc.erl     |  5 ++++-
 5 files changed, 45 insertions(+), 16 deletions(-)

diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl
index 075d834..b124cbd 100644
--- a/src/machi_chain_manager1.erl
+++ b/src/machi_chain_manager1.erl
@@ -1909,7 +1909,7 @@ react_to_env_C100_inner(Author_latest, NotSanesDict0, _MyName,
     S2 = S#ch_mgr{not_sanes=NotSanesDict, sane_transitions=0},
     case orddict:fetch(Author_latest, NotSanesDict) of
         N when N > ?TOO_FREQUENT_BREAKER ->
-            %% ?V("\n\nYOYO ~w breaking the cycle of:\n current: ~w\n new : ~w\n", [_MyName, machi_projection:make_summary(S#ch_mgr.proj), machi_projection:make_summary(P_latest)]),
+            ?V("\n\nYOYO ~w breaking the cycle insane-freq=~w by-author=~w of:\n current: ~w\n new : ~w\n", [_MyName, N, Author_latest, machi_projection:make_summary(S#ch_mgr.proj), machi_projection:make_summary(P_latest)]),
             ?REACT({c100, ?LINE, [{not_sanes_author_count, N}]}),
             react_to_env_C103(P_newprop, P_latest, P_current_calc, S2);
         N ->
@@ -1937,7 +1937,8 @@ react_to_env_C103(#projection_v1{epoch_number=_Epoch_newprop} = _P_newprop,
     ?REACT({c103, ?LINE,
             [{current_epoch, P_current#projection_v1.epoch_number},
              {none_projection_epoch, P_none#projection_v1.epoch_number}]}),
-    io:format(user, "SET add_admin_down(~w) at ~w =====================================\n", [MyName, time()]),
+    io:format(user, "SET add_admin_down(~w) at ~w TODO current_epoch ~w none_proj_epoch ~w =====================================\n", [MyName, time(), P_current#projection_v1.epoch_number, P_none#projection_v1.epoch_number]),
+    %% io:format(user, "SET add_admin_down(~w) at ~w =====================================\n", [MyName, time()]),
     machi_fitness:add_admin_down(S#ch_mgr.fitness_svr, MyName, []),
     timer:sleep(5*1000),
     io:format(user, "SET delete_admin_down(~w) at ~w =====================================\n", [MyName, time()]),
@@ -2985,9 +2986,9 @@ perhaps_verbose_c111(P_latest2, S) ->
                true ->
                     ok
             end,
-            %% TODO put me back: case proplists:get_value(private_write_verbose,
-            %%                                            S#ch_mgr.opts) of
-            case true of
+            case proplists:get_value(private_write_verbose,
+                                     S#ch_mgr.opts) of
+            %% case true of
                 true when Summ2 /= Last2 ->
                     put(last_verbose, Summ2),
                     ?V("\n~s ~p uses plain: ~w \n",
diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl
index 19a6d1a..59a8c72 100644
--- a/src/machi_cr_client.erl
+++ b/src/machi_cr_client.erl
@@ -299,16 +299,20 @@ do_append_head3(NSInfo, Prefix,
     case ?FLU_PC:append_chunk(Proxy, NSInfo, EpochID,
                               Prefix, Chunk, CSum, Opts, ?TIMEOUT) of
         {ok, {Offset, _Size, File}=_X} ->
+io:format(user, "CLNT append_chunk: head ~w ok\n ~p\n hd ~p rest ~p epoch ~P\n", [HeadFLU, _X, HeadFLU, RestFLUs, EpochID, 8]),
             do_wr_app_midtail(RestFLUs, NSInfo, Prefix,
                               File, Offset, Chunk, CSum, Opts,
                               [HeadFLU], 0, STime, TO, append, S);
         {error, bad_checksum}=BadCS ->
+io:format(user, "CLNT append_chunk: head ~w BAD CS\n", [HeadFLU]),
             {reply, BadCS, S};
         {error, Retry}
           when Retry == partition; Retry == bad_epoch; Retry == wedged ->
+io:format(user, "CLNT append_chunk: head ~w error ~p\n", [HeadFLU, Retry]),
             do_append_head(NSInfo, Prefix,
                            Chunk, CSum, Opts, Depth, STime, TO, S);
         {error, written} ->
+io:format(user, "CLNT append_chunk: head ~w Written\n", [HeadFLU]),
             %% Implicit sequencing + this error = we don't know where this
             %% written block is. But we lost a race. Repeat, with a new
             %% sequencer assignment.
@@ -387,26 +391,32 @@ do_wr_app_midtail2([FLU|RestFLUs]=FLUs, NSInfo,
                    CSum, Opts, Ws, Depth, STime, TO, MyOp,
                    #state{epoch_id=EpochID, proxies_dict=PD}=S) ->
     Proxy = orddict:fetch(FLU, PD),
+io:format(user, "CLNT append_chunk: mid/tail ~w\n", [FLU]),
     case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, CSum, ?TIMEOUT) of
         ok ->
+io:format(user, "CLNT append_chunk: mid/tail ~w ok\n", [FLU]),
             do_wr_app_midtail2(RestFLUs, NSInfo, Prefix, File, Offset, Chunk,
                                CSum, Opts, [FLU|Ws], Depth, STime, TO, MyOp, S);
         {error, bad_checksum}=BadCS ->
+io:format(user, "CLNT append_chunk: mid/tail ~w BAD CS\n", [FLU]),
             %% TODO: alternate strategy?
             {reply, BadCS, S};
         {error, Retry}
           when Retry == partition; Retry == bad_epoch; Retry == wedged ->
+io:format(user, "CLNT append_chunk: mid/tail ~w error ~p\n", [FLU, Retry]),
             do_wr_app_midtail(FLUs, NSInfo, Prefix, File, Offset, Chunk,
                               CSum, Opts, Ws, Depth, STime, TO, MyOp, S);
         {error, written} ->
+io:format(user, "CLNT append_chunk: mid/tail ~w WRITTEN\n", [FLU]),
             %% We know what the chunk ought to be, so jump to the
             %% middle of read-repair.
             Resume = {append, Offset, iolist_size(Chunk), File},
             do_repair_chunk(FLUs, Resume, Chunk, CSum, [], NSInfo, File, Offset,
                             iolist_size(Chunk), Depth, STime, S);
         {error, trimmed} = Err ->
+io:format(user, "CLNT append_chunk: mid/tail ~w TRIMMED\n", [FLU]),
             %% TODO: nothing can be done
             {reply, Err, S};
         {error, not_written} ->
@@ -933,6 +943,7 @@ update_proj2(Count, #state{bad_proj=BadProj, proxies_dict=ProxiesDict,
             NewProxiesDict = ?FLU_PC:start_proxies(NewMembersDict),
             %% Make crash reports shorter by getting rid of 'react' history.
             P2 = P#projection_v1{dbg2=lists:keydelete(react, 1, Dbg2)},
+io:format(user, "CLNT PROJ: epoch ~p ~P upi ~w ~w\n", [P2#projection_v1.epoch_number, P2#projection_v1.epoch_csum, 6, P2#projection_v1.upi, P2#projection_v1.repairing]),
             S#state{bad_proj=undefined, proj=P2, epoch_id=EpochID,
                     members_dict=NewMembersDict, proxies_dict=NewProxiesDict};
         _P ->
diff --git a/src/machi_flu1_append_server.erl b/src/machi_flu1_append_server.erl
index a484410..9779d5e 100644
--- a/src/machi_flu1_append_server.erl
+++ b/src/machi_flu1_append_server.erl
@@ -120,6 +120,7 @@ handle_call(Else, From, S) ->
 handle_cast({wedge_myself, WedgeEpochId},
             #state{flu_name=FluName, wedged=Wedged_p, epoch_id=OldEpochId}=S) ->
     if not Wedged_p andalso WedgeEpochId == OldEpochId ->
+io:format(user, "FLU WEDGE 2: ~w : ~w ~P\n", [S#state.flu_name, true, OldEpochId, 6]),
             true = ets:insert(S#state.etstab,
                               {epoch, {true, OldEpochId}}),
             %% Tell my chain manager that it might want to react to
@@ -138,6 +139,7 @@ handle_cast({wedge_state_change, Boolean, {NewEpoch, _}=NewEpochId},
                    undefined -> -1
                end,
     if NewEpoch >= OldEpoch ->
+io:format(user, "FLU WEDGE 1: ~w : ~w ~P\n", [S#state.flu_name, Boolean, NewEpochId, 6]),
             true = ets:insert(S#state.etstab,
                               {epoch, {Boolean, NewEpochId}}),
             {noreply, S#state{wedged=Boolean, epoch_id=NewEpochId}};
@@ -177,8 +179,13 @@ handle_append(NSInfo,
               Prefix, Chunk, TCSum, Opts, FluName, EpochId) ->
     Res = machi_flu_filename_mgr:find_or_make_filename_from_prefix(
             FluName, EpochId, {prefix, Prefix}, NSInfo),
+io:format(user, "FLU NAME: ~w + ~p got ~p\n", [FluName, Prefix, Res]),
     case Res of
         {file, F} ->
+case re:run(F, atom_to_list(FluName) ++ ",") of
+    nomatch ->
+        io:format(user, "\n\n\t\tBAAAAAAA\n\n", []), timer:sleep(50), erlang:halt(0);
+    _ -> ok end,
             case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, F}) of
                 {ok, Pid} ->
                     {Tag, CS} = machi_util:unmake_tagged_csum(TCSum),
diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl
index 1e504df..3de978d 100644
--- a/src/machi_flu_filename_mgr.erl
+++ b/src/machi_flu_filename_mgr.erl
@@ -101,7 +101,7 @@ find_or_make_filename_from_prefix(FluName, EpochId,
                                   #ns_info{}=NSInfo)
   when is_atom(FluName) ->
     N = make_filename_mgr_name(FluName),
-    gen_server:call(N, {find_filename, EpochId, NSInfo, Prefix}, ?TIMEOUT);
+    gen_server:call(N, {find_filename, FluName, EpochId, NSInfo, Prefix}, ?TIMEOUT);
 find_or_make_filename_from_prefix(_FluName, _EpochId, Other, Other2) ->
     lager:error("~p is not a valid prefix/locator ~p", [Other, Other2]),
     error(badarg).
@@ -143,18 +143,19 @@ handle_cast(Req, State) ->
 %% the FLU has already validated that the caller's epoch id and the FLU's epoch id
 %% are the same. So we *assume* that remains the case here - that is to say, we
 %% are not wedged.
-handle_call({find_filename, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir,
-                                                                         epoch = EpochId,
-                                                                         tid = Tid }) ->
+handle_call({find_filename, FluName, EpochId, NSInfo, Prefix}, _From,
+            S = #state{ datadir = DataDir, epoch = EpochId, tid = Tid }) ->
     %% Our state and the caller's epoch ids are the same. Business as usual.
-    File = handle_find_file(Tid, NSInfo, Prefix, DataDir),
+io:format(user, "FMGR ~w LINE ~p\n", [FluName, ?LINE]),
+    File = handle_find_file(FluName, Tid, NSInfo, Prefix, DataDir),
     {reply, {file, File}, S};
 
-handle_call({find_filename, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) ->
+handle_call({find_filename, FluName, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) ->
     %% If the epoch id in our state and the caller's epoch id were the same, it would've
     %% matched the above clause. Since we're here, we know that they are different.
     %% If epoch ids between our state and the caller's are different, we must increment the
     %% sequence number, generate a filename and then cache it.
+io:format(user, "FMGR ~w LINE ~p\n", [FluName, ?LINE]), File = increment_and_cache_filename(Tid, DataDir, NSInfo, Prefix), {reply, {file, File}, S#state{epoch = EpochId}}; @@ -205,13 +206,15 @@ list_files(DataDir, Prefix) -> make_filename_mgr_name(FluName) when is_atom(FluName) -> list_to_atom(atom_to_list(FluName) ++ "_filename_mgr"). -handle_find_file(Tid, #ns_info{name=NS, locator=NSLocator}=NSInfo, Prefix, DataDir) -> +handle_find_file(FluName, Tid, #ns_info{name=NS, locator=NSLocator}=NSInfo, Prefix, DataDir) -> N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), {File, Cleanup} = case find_file(DataDir, NSInfo, Prefix, N) of [] -> +io:format(user, "HFF: 1\n", []), {find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N), false}; - [H] -> {H, true}; + [H] -> io:format(user, "HFF: 2 ~s\n", [H]),{H, true}; [Fn | _ ] = L -> +io:format(user, "HFF: 3 ~p\n", [L]), lager:debug( "Searching for a matching file to prefix ~p and sequence number ~p gave multiples: ~p", [Prefix, N, L]), @@ -231,8 +234,12 @@ find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N) -> end. generate_filename(DataDir, NS, NSLocator, Prefix, N) -> -{A,B,C} = erlang:now(), -TODO = lists:flatten(filename:basename(DataDir) ++ "," ++ io_lib:format("~w,~w,~w", [A,B,C])), + {A,B,C} = erlang:now(), + RN = case process_info(self(), registered_name) of + [] -> []; + {_,X} -> re:replace(atom_to_list(X), "_.*", "", [{return, binary}]) + end, + TODO = lists:flatten([RN, ",", io_lib:format("~w,~w,~w", [A,B,C])]), {F, _} = machi_util:make_data_filename( DataDir, NS, NSLocator, Prefix, diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl index 0f5f5a2..58afa9a 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -196,12 +196,15 @@ change_partition(Partition, %% Don't wait for stable chain, tick will be executed on demand %% in append oprations _ = tick(S), + ok. %% Generators num() -> - choose(2, 5). + 2. 
+ %% TODO:put me back + %% choose(2, 5). cr_count(Num) -> Num * 3. -- 2.45.2 From 7c39af5bb7ba8b8292d98cc5092ea70ce787125b Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 10 Feb 2016 16:57:50 +0900 Subject: [PATCH 22/53] WIP: narrowing in on repair problems due to double-write errors 2 --- src/machi_flu_filename_mgr.erl | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl index 3de978d..5d99a8d 100644 --- a/src/machi_flu_filename_mgr.erl +++ b/src/machi_flu_filename_mgr.erl @@ -210,11 +210,9 @@ handle_find_file(FluName, Tid, #ns_info{name=NS, locator=NSLocator}=NSInfo, Pref N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), {File, Cleanup} = case find_file(DataDir, NSInfo, Prefix, N) of [] -> -io:format(user, "HFF: 1\n", []), {find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N), false}; - [H] -> io:format(user, "HFF: 2 ~s\n", [H]),{H, true}; + [H] -> {H, true}; [Fn | _ ] = L -> -io:format(user, "HFF: 3 ~p\n", [L]), lager:debug( "Searching for a matching file to prefix ~p and sequence number ~p gave multiples: ~p", [Prefix, N, L]), -- 2.45.2 From ecfad4726bb285310820470d8309cc05134a6e1d Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 10 Feb 2016 18:17:15 +0900 Subject: [PATCH 23/53] Fix machi_flu_filename_mgr to avoid double-write errors during network partitions --- src/machi_flu_filename_mgr.erl | 45 ++++++++-------------------------- 1 file changed, 10 insertions(+), 35 deletions(-) diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl index 293fdc3..36d830b 100644 --- a/src/machi_flu_filename_mgr.erl +++ b/src/machi_flu_filename_mgr.erl @@ -100,7 +100,7 @@ find_or_make_filename_from_prefix(FluName, EpochId, {coc, _CoC_Ns, _CoC_Loc}=CoC_NL) when is_atom(FluName) -> N = make_filename_mgr_name(FluName), - gen_server:call(N, {find_filename, EpochId, CoC_NL, Prefix}, ?TIMEOUT); + gen_server:call(N, 
{find_filename, FluName, EpochId, CoC_NL, Prefix}, ?TIMEOUT); find_or_make_filename_from_prefix(_FluName, _EpochId, Other, Other2) -> lager:error("~p is not a valid prefix/CoC ~p", [Other, Other2]), error(badarg). @@ -142,14 +142,14 @@ handle_cast(Req, State) -> %% the FLU has already validated that the caller's epoch id and the FLU's epoch id %% are the same. So we *assume* that remains the case here - that is to say, we %% are not wedged. -handle_call({find_filename, EpochId, CoC_NL, Prefix}, _From, S = #state{ datadir = DataDir, +handle_call({find_filename, FluName, EpochId, CoC_NL, Prefix}, _From, S = #state{ datadir = DataDir, epoch = EpochId, tid = Tid }) -> %% Our state and the caller's epoch ids are the same. Business as usual. - File = handle_find_file(Tid, CoC_NL, Prefix, DataDir), + File = handle_find_file(FluName, Tid, CoC_NL, Prefix, DataDir), {reply, {file, File}, S}; -handle_call({find_filename, EpochId, CoC_NL, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) -> +handle_call({find_filename, _FluName, EpochId, CoC_NL, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) -> %% If the epoch id in our state and the caller's epoch id were the same, it would've %% matched the above clause. Since we're here, we know that they are different. %% If epoch ids between our state and the caller's are different, we must increment the @@ -191,12 +191,6 @@ generate_uuid_v4_str() -> io_lib:format("~8.16.0b-~4.16.0b-4~3.16.0b-~4.16.0b-~12.16.0b", [A, B, C band 16#0fff, D band 16#3fff bor 16#8000, E]). -find_file(DataDir, {coc,CoC_Namespace,CoC_Locator}=_CoC_NL, Prefix, N) -> - {_Filename, Path} = machi_util:make_data_filename(DataDir, - CoC_Namespace,CoC_Locator, - Prefix, "*", N), - filelib:wildcard(Path). - list_files(DataDir, Prefix) -> {F_bin, Path} = machi_util:make_data_filename(DataDir, "*^" ++ Prefix ++ "^*"), filelib:wildcard(binary_to_list(F_bin), filename:dirname(Path)). 
@@ -204,26 +198,12 @@ list_files(DataDir, Prefix) -> make_filename_mgr_name(FluName) when is_atom(FluName) -> list_to_atom(atom_to_list(FluName) ++ "_filename_mgr"). -handle_find_file(Tid, {coc,CoC_Namespace,CoC_Locator}=CoC_NL, Prefix, DataDir) -> - N = machi_util:read_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix), - {File, Cleanup} = case find_file(DataDir, CoC_NL, Prefix, N) of - [] -> - {find_or_make_filename(Tid, DataDir, CoC_Namespace, CoC_Locator, Prefix, N), false}; - [H] -> {H, true}; - [Fn | _ ] = L -> - lager:debug( - "Searching for a matching file to prefix ~p and sequence number ~p gave multiples: ~p", - [Prefix, N, L]), - {Fn, true} - end, - maybe_cleanup(Tid, {CoC_Namespace, CoC_Locator, Prefix, N}, Cleanup), - filename:basename(File). - -find_or_make_filename(Tid, DataDir, CoC_Namespace, CoC_Locator, Prefix, N) -> - case ets:lookup(Tid, {CoC_Namespace, CoC_Locator, Prefix, N}) of +handle_find_file(_FluName, Tid, {coc,CoC_Namespace,CoC_Locator}, Prefix, DataDir) -> + case ets:lookup(Tid, {CoC_Namespace, CoC_Locator, Prefix}) of [] -> + N = machi_util:read_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix), F = generate_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, N), - true = ets:insert_new(Tid, {{CoC_Namespace, CoC_Locator, Prefix, N}, F}), + true = ets:insert(Tid, {{CoC_Namespace, CoC_Locator, Prefix}, F}), F; [{_Key, File}] -> File @@ -237,17 +217,12 @@ generate_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, N) -> N), binary_to_list(F). -maybe_cleanup(_Tid, _Key, false) -> - ok; -maybe_cleanup(Tid, Key, true) -> - true = ets:delete(Tid, Key). 
- increment_and_cache_filename(Tid, DataDir, {coc,CoC_Namespace,CoC_Locator}, Prefix) -> ok = machi_util:increment_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix), N = machi_util:read_max_filenum(DataDir, CoC_Namespace, CoC_Locator, Prefix), F = generate_filename(DataDir, CoC_Namespace, CoC_Locator, Prefix, N), - true = ets:insert_new(Tid, {{CoC_Namespace, CoC_Locator, Prefix, N}, F}), - filename:basename(F). + true = ets:insert(Tid, {{CoC_Namespace, CoC_Locator, Prefix}, F}), + F. -- 2.45.2 From 943e23e050753d22c69c3592deb9a8f9cb85a146 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 10 Feb 2016 19:35:52 +0900 Subject: [PATCH 24/53] Hooray, all eunit tests including EQC pass! --- src/machi_chain_manager1.erl | 11 +++---- src/machi_chain_repair.erl | 22 ++++---------- src/machi_cr_client.erl | 21 +++---------- src/machi_flu1_append_server.erl | 7 ----- src/machi_flu_filename_mgr.erl | 47 ++++++------------------------ src/machi_pb_high_client.erl | 4 +-- src/machi_pb_translate.erl | 4 +-- test/machi_ap_repair_eqc.erl | 8 ++--- test/machi_cr_client_test.erl | 14 ++++----- test/machi_pb_high_client_test.erl | 4 +-- 10 files changed, 37 insertions(+), 105 deletions(-) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index b124cbd..bdc142d 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -1937,8 +1937,7 @@ react_to_env_C103(#projection_v1{epoch_number=_Epoch_newprop} = _P_newprop, ?REACT({c103, ?LINE, [{current_epoch, P_current#projection_v1.epoch_number}, {none_projection_epoch, P_none#projection_v1.epoch_number}]}), - io:format(user, "SET add_admin_down(~w) at ~w TODO current_epoch ~w none_proj_epoch ~w =====================================\n", [MyName, time(), P_current#projection_v1.epoch_number, P_none#projection_v1.epoch_number]), - %% io:format(user, "SET add_admin_down(~w) at ~w =====================================\n", [MyName, time()]), + io:format(user, "SET add_admin_down(~w) at 
~w current_epoch ~w none_proj_epoch ~w =====================================\n", [MyName, time(), P_current#projection_v1.epoch_number, P_none#projection_v1.epoch_number]), machi_fitness:add_admin_down(S#ch_mgr.fitness_svr, MyName, []), timer:sleep(5*1000), io:format(user, "SET delete_admin_down(~w) at ~w =====================================\n", [MyName, time()]), @@ -2968,8 +2967,7 @@ zerf_find_last_annotated(FLU, MajoritySize, S) -> end. perhaps_verbose_c111(P_latest2, S) -> - case true of - %%TODO put me back: case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of + case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of true -> Dbg2X = lists:keydelete(react, 1, P_latest2#projection_v1.dbg2) ++ @@ -2977,9 +2975,8 @@ perhaps_verbose_c111(P_latest2, S) -> P_latest2x = P_latest2#projection_v1{dbg2=Dbg2X}, % limit verbose len. Last2 = get(last_verbose), Summ2 = machi_projection:make_summary(P_latest2x), - %% if P_latest2#projection_v1.upi == [], - %% (S#ch_mgr.proj)#projection_v1.upi /= [] -> - if true -> + if P_latest2#projection_v1.upi == [], + (S#ch_mgr.proj)#projection_v1.upi /= [] -> <> = P_latest2#projection_v1.epoch_csum, io:format(user, "~s CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [machi_util:pretty_time(), (S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, S#ch_mgr.name]); diff --git a/src/machi_chain_repair.erl b/src/machi_chain_repair.erl index f249268..052fb1c 100644 --- a/src/machi_chain_repair.erl +++ b/src/machi_chain_repair.erl @@ -105,8 +105,6 @@ repair(ap_mode=ConsistencyMode, Src, Repairing, UPI, MembersDict, ETS, Opts) -> RepairMode = proplists:get_value(repair_mode, Opts, repair), Verb = proplists:get_value(verbose, Opts, false), RepairId = proplists:get_value(repair_id, Opts, id1), -erlang:display(wtf), - %% io:format(user, "TODO: ~p\n", [{error, {What, Why, Stack}}]), Res = try _ = [begin {ok, Proxy} = machi_proxy_flu1_client:start_link(P), @@ -129,7 
+127,6 @@ erlang:display(wtf), {ok, EpochID} = machi_proxy_flu1_client:get_epoch_id( SrcProxy, ?SHORT_TIMEOUT), %% ?VERB("Make repair directives: "), -erlang:display(yo1), Ds = [{File, make_repair_directives( ConsistencyMode, RepairMode, File, Size, EpochID, @@ -149,21 +146,16 @@ erlang:display(yo1), end || FLU <- OurFLUs], %% ?VERB("Execute repair directives: "), -erlang:display(yo1), ok = execute_repair_directives(ConsistencyMode, Ds, Src, EpochID, Verb, OurFLUs, ProxiesDict, ETS), -erlang:display(yo2), %% ?VERB(" done\n"), lager:info("Repair ~w repair directives finished\n", [RepairId]), ok catch What:Why -> -io:format(user, "yo3 ~p ~p\n", [What,Why]), Stack = erlang:get_stacktrace(), -io:format(user, "yo3 ~p\n", [Stack]), {error, {What, Why, Stack}} after -erlang:display(yo4), [(catch machi_proxy_flu1_client:quit(Pid)) || Pid <- orddict:to_list(get(proxies_dict))] end, @@ -244,7 +236,6 @@ make_repair_directives(ConsistencyMode, RepairMode, File, Size, _EpochID, make_repair_directives2(C2, ConsistencyMode, RepairMode, File, Verb, Src, FLUs, ProxiesDict, ETS) -> - ?VERB(".1"), make_repair_directives3(C2, ConsistencyMode, RepairMode, File, Verb, Src, FLUs, ProxiesDict, ETS, []). @@ -286,7 +277,6 @@ make_repair_directives3([{Offset, Size, CSum, _FLU}=A|Rest0], end || {__Offset, __Size, __CSum, FLU} <- As], exit({todo_repair_sanity_check, ?LINE, File, Offset, {as,As}, {qq,QQ}}) - %% exit({todo_repair_sanity_check, ?LINE, File, Offset, As}) end, %% List construction guarantees us that there's at least one ?MAX_OFFSET %% item remains. Sort order + our "taking" of all exact Offset+Size @@ -339,17 +329,17 @@ execute_repair_directives(ap_mode=_ConsistencyMode, Ds, _Src, EpochID, Verb, {ProxiesDict, EpochID, Verb, ETS}, Ds), ok. 
-execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) -> +execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, _Verb, ETS}=Acc) -> EtsKeys = [{in_files, t_in_files}, {in_chunks, t_in_chunks}, {in_bytes, t_in_bytes}, {out_files, t_out_files}, {out_chunks, t_out_chunks}, {out_bytes, t_out_bytes}], [ets:insert(ETS, {L_K, 0}) || {L_K, _T_K} <- EtsKeys], F = fun({copy, {Offset, Size, TaggedCSum, MySrc}, MyDsts}, Acc2) -> SrcP = orddict:fetch(MySrc, ProxiesDict), - case ets:lookup_element(ETS, in_chunks, 2) rem 100 of - 0 -> ?VERB(".2", []); - _ -> ok - end, + %% case ets:lookup_element(ETS, in_chunks, 2) rem 100 of + %% 0 -> ?VERB(".2", []); + %% _ -> ok + %% end, _T1 = os:timestamp(), %% TODO: support case multiple written or trimmed chunks returned NSInfo = undefined, @@ -391,9 +381,7 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) -> Acc2 end end, -erlang:display({yo,?LINE}), ok = lists:foldl(F, ok, Cmds), -erlang:display({yo,?LINE}), %% Copy this file's stats to the total counts. 
_ = [ets:update_counter(ETS, T_K, ets:lookup_element(ETS, L_K, 2)) || {L_K, T_K} <- EtsKeys], diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index 59a8c72..b36c6ca 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -299,20 +299,16 @@ do_append_head3(NSInfo, Prefix, case ?FLU_PC:append_chunk(Proxy, NSInfo, EpochID, Prefix, Chunk, CSum, Opts, ?TIMEOUT) of {ok, {Offset, _Size, File}=_X} -> -io:format(user, "CLNT append_chunk: head ~w ok\n ~p\n hd ~p rest ~p epoch ~P\n", [HeadFLU, _X, HeadFLU, RestFLUs, EpochID, 8]), do_wr_app_midtail(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, CSum, Opts, [HeadFLU], 0, STime, TO, append, S); {error, bad_checksum}=BadCS -> -io:format(user, "CLNT append_chunk: head ~w BAD CS\n", [HeadFLU]), {reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> -io:format(user, "CLNT append_chunk: head ~w error ~p\n", [HeadFLU, Retry]), do_append_head(NSInfo, Prefix, Chunk, CSum, Opts, Depth, STime, TO, S); {error, written} -> -io:format(user, "CLNT append_chunk: head ~w Written\n", [HeadFLU]), %% Implicit sequencing + this error = we don't know where this %% written block is. But we lost a race. Repeat, with a new %% sequencer assignment. @@ -391,32 +387,26 @@ do_wr_app_midtail2([FLU|RestFLUs]=FLUs, NSInfo, CSum, Opts, Ws, Depth, STime, TO, MyOp, #state{epoch_id=EpochID, proxies_dict=PD}=S) -> Proxy = orddict:fetch(FLU, PD), -io:format(user, "CLNT append_chunk: mid/tail ~w\n", [FLU]), case ?FLU_PC:write_chunk(Proxy, NSInfo, EpochID, File, Offset, Chunk, CSum, ?TIMEOUT) of ok -> -io:format(user, "CLNT append_chunk: mid/tail ~w ok\n", [FLU]), do_wr_app_midtail2(RestFLUs, NSInfo, Prefix, File, Offset, Chunk, CSum, Opts, [FLU|Ws], Depth, STime, TO, MyOp, S); {error, bad_checksum}=BadCS -> -io:format(user, "CLNT append_chunk: mid/tail ~w BAD CS\n", [FLU]), %% TODO: alternate strategy? 
{reply, BadCS, S}; {error, Retry} when Retry == partition; Retry == bad_epoch; Retry == wedged -> -io:format(user, "CLNT append_chunk: mid/tail ~w error ~p\n", [FLU, Retry]), do_wr_app_midtail(FLUs, NSInfo, Prefix, File, Offset, Chunk, CSum, Opts, Ws, Depth, STime, TO, MyOp, S); {error, written} -> -io:format(user, "CLNT append_chunk: mid/tail ~w WRITTEN\n", [FLU]), %% We know what the chunk ought to be, so jump to the %% middle of read-repair. Resume = {append, Offset, iolist_size(Chunk), File}, do_repair_chunk(FLUs, Resume, Chunk, CSum, [], NSInfo, File, Offset, iolist_size(Chunk), Depth, STime, S); {error, trimmed} = Err -> -io:format(user, "CLNT append_chunk: mid/tail ~w TRIMMED\n", [FLU]), %% TODO: nothing can be done {reply, Err, S}; {error, not_written} -> @@ -735,10 +725,8 @@ read_repair2(ap_mode=ConsistencyMode, {ok, {Chunks, _Trimmed}, GotItFrom} when is_list(Chunks) -> %% TODO: Repair trimmed chunks ToRepair = mutation_flus(P) -- [GotItFrom], - {Reply0, S1} = do_repair_chunks(Chunks, ToRepair, ReturnMode, [GotItFrom], + {reply, Reply, S1} = do_repair_chunks(Chunks, ToRepair, ReturnMode, [GotItFrom], NSInfo, File, Depth, STime, S, {ok, Chunks}), - {ok, Chunks} = Reply0, - Reply = {ok, {Chunks, _Trimmed}}, {reply, Reply, S1}; {error, bad_checksum}=BadCS -> %% TODO: alternate strategy? @@ -761,7 +749,7 @@ do_repair_chunks([], _, _, _, _, _, _, _, S, Reply) -> {Reply, S}; do_repair_chunks([{_, Offset, Chunk, CSum}|T], ToRepair, ReturnMode, [GotItFrom], NSInfo, File, Depth, STime, S, Reply) -> - true = _TODO_fixme = not is_atom(CSum), + true = not is_atom(CSum), Size = iolist_size(Chunk), case do_repair_chunk(ToRepair, ReturnMode, Chunk, CSum, [GotItFrom], NSInfo, File, Offset, Size, Depth, STime, S) of @@ -791,12 +779,12 @@ do_repair_chunk(ToRepair, ReturnMode, Chunk, CSum, Repaired, NSInfo, File, Offse end end. 
-do_repair_chunk2([], ReturnMode, Chunk, _CSum, _Repaired, _NSInfo, File, Offset, +do_repair_chunk2([], ReturnMode, Chunk, CSum, _Repaired, _NSInfo, File, Offset, _IgnoreSize, _Depth, _STime, S) -> %% TODO: add stats for # of repairs, length(_Repaired)-1, etc etc? case ReturnMode of read -> - {reply, {ok, {[Chunk], []}}, S}; + {reply, {ok, {[{File, Offset, Chunk, CSum}], []}}, S}; {append, Offset, Size, File} -> {reply, {ok, {[{Offset, Size, File}], []}}, S} end; @@ -943,7 +931,6 @@ update_proj2(Count, #state{bad_proj=BadProj, proxies_dict=ProxiesDict, NewProxiesDict = ?FLU_PC:start_proxies(NewMembersDict), %% Make crash reports shorter by getting rid of 'react' history. P2 = P#projection_v1{dbg2=lists:keydelete(react, 1, Dbg2)}, -io:format(user, "CLNT PROJ: epoch ~p ~P upi ~w ~w\n", [P2#projection_v1.epoch_number, P2#projection_v1.epoch_csum, 6, P2#projection_v1.upi, P2#projection_v1.repairing]), S#state{bad_proj=undefined, proj=P2, epoch_id=EpochID, members_dict=NewMembersDict, proxies_dict=NewProxiesDict}; _P -> diff --git a/src/machi_flu1_append_server.erl b/src/machi_flu1_append_server.erl index 9779d5e..a484410 100644 --- a/src/machi_flu1_append_server.erl +++ b/src/machi_flu1_append_server.erl @@ -120,7 +120,6 @@ handle_call(Else, From, S) -> handle_cast({wedge_myself, WedgeEpochId}, #state{flu_name=FluName, wedged=Wedged_p, epoch_id=OldEpochId}=S) -> if not Wedged_p andalso WedgeEpochId == OldEpochId -> -io:format(user, "FLU WEDGE 2: ~w : ~w ~P\n", [S#state.flu_name, true, OldEpochId, 6]), true = ets:insert(S#state.etstab, {epoch, {true, OldEpochId}}), %% Tell my chain manager that it might want to react to @@ -139,7 +138,6 @@ handle_cast({wedge_state_change, Boolean, {NewEpoch, _}=NewEpochId}, undefined -> -1 end, if NewEpoch >= OldEpoch -> -io:format(user, "FLU WEDGE 1: ~w : ~w ~P\n", [S#state.flu_name, Boolean, NewEpochId, 6]), true = ets:insert(S#state.etstab, {epoch, {Boolean, NewEpochId}}), {noreply, S#state{wedged=Boolean, epoch_id=NewEpochId}}; @@ 
-179,13 +177,8 @@ handle_append(NSInfo, Prefix, Chunk, TCSum, Opts, FluName, EpochId) -> Res = machi_flu_filename_mgr:find_or_make_filename_from_prefix( FluName, EpochId, {prefix, Prefix}, NSInfo), -io:format(user, "FLU NAME: ~w + ~p got ~p\n", [FluName, Prefix, Res]), case Res of {file, F} -> -case re:run(F, atom_to_list(FluName) ++ ",") of - nomatch -> - io:format(user, "\n\n\t\tBAAAAAAA\n\n", []), timer:sleep(50), erlang:halt(0); - _ -> ok end, case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, F}) of {ok, Pid} -> {Tag, CS} = machi_util:unmake_tagged_csum(TCSum), diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl index 5d99a8d..ec66031 100644 --- a/src/machi_flu_filename_mgr.erl +++ b/src/machi_flu_filename_mgr.erl @@ -146,16 +146,14 @@ handle_cast(Req, State) -> handle_call({find_filename, FluName, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, epoch = EpochId, tid = Tid }) -> %% Our state and the caller's epoch ids are the same. Business as usual. -io:format(user, "FMGR ~w LINE ~p\n", [FluName, ?LINE]), File = handle_find_file(FluName, Tid, NSInfo, Prefix, DataDir), {reply, {file, File}, S}; -handle_call({find_filename, FluName, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) -> +handle_call({find_filename, _FluName, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) -> %% If the epoch id in our state and the caller's epoch id were the same, it would've %% matched the above clause. Since we're here, we know that they are different. %% If epoch ids between our state and the caller's are different, we must increment the %% sequence number, generate a filename and then cache it. 
-io:format(user, "FMGR ~w LINE ~p\n", [FluName, ?LINE]), File = increment_and_cache_filename(Tid, DataDir, NSInfo, Prefix), {reply, {file, File}, S#state{epoch = EpochId}}; @@ -206,58 +204,31 @@ list_files(DataDir, Prefix) -> make_filename_mgr_name(FluName) when is_atom(FluName) -> list_to_atom(atom_to_list(FluName) ++ "_filename_mgr"). -handle_find_file(FluName, Tid, #ns_info{name=NS, locator=NSLocator}=NSInfo, Prefix, DataDir) -> - N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), - {File, Cleanup} = case find_file(DataDir, NSInfo, Prefix, N) of - [] -> - {find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N), false}; - [H] -> {H, true}; - [Fn | _ ] = L -> - lager:debug( - "Searching for a matching file to prefix ~p and sequence number ~p gave multiples: ~p", - [Prefix, N, L]), - {Fn, true} - end, - maybe_cleanup(Tid, {NS, NSLocator, Prefix, N}, Cleanup), - filename:basename(File). - -find_or_make_filename(Tid, DataDir, NS, NSLocator, Prefix, N) -> - case ets:lookup(Tid, {NS, NSLocator, Prefix, N}) of +handle_find_file(_FluName, Tid, #ns_info{name=NS, locator=NSLocator}, Prefix, DataDir) -> + case ets:lookup(Tid, {NS, NSLocator, Prefix}) of [] -> + N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), F = generate_filename(DataDir, NS, NSLocator, Prefix, N), - true = ets:insert_new(Tid, {{NS, NSLocator, Prefix, N}, F}), + true = ets:insert(Tid, {{NS, NSLocator, Prefix}, F}), F; [{_Key, File}] -> File end. generate_filename(DataDir, NS, NSLocator, Prefix, N) -> - {A,B,C} = erlang:now(), - RN = case process_info(self(), registered_name) of - [] -> []; - {_,X} -> re:replace(atom_to_list(X), "_.*", "", [{return, binary}]) - end, - TODO = lists:flatten([RN, ",", io_lib:format("~w,~w,~w", [A,B,C])]), - {F, _} = machi_util:make_data_filename( + {F, _Q} = machi_util:make_data_filename( DataDir, NS, NSLocator, Prefix, -TODO, - %% TODO put me back!! - %% generate_uuid_v4_str(), + generate_uuid_v4_str(), N), binary_to_list(F). 
-maybe_cleanup(_Tid, _Key, false) -> - ok; -maybe_cleanup(Tid, Key, true) -> - true = ets:delete(Tid, Key). - increment_and_cache_filename(Tid, DataDir, #ns_info{name=NS,locator=NSLocator}, Prefix) -> ok = machi_util:increment_max_filenum(DataDir, NS, NSLocator, Prefix), N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), F = generate_filename(DataDir, NS, NSLocator, Prefix, N), - true = ets:insert_new(Tid, {{NS, NSLocator, Prefix, N}, F}), - filename:basename(F). + true = ets:insert(Tid, {{NS, NSLocator, Prefix}, F}), + F. diff --git a/src/machi_pb_high_client.erl b/src/machi_pb_high_client.erl index 23f01b8..f67479e 100644 --- a/src/machi_pb_high_client.erl +++ b/src/machi_pb_high_client.erl @@ -501,12 +501,12 @@ convert_read_chunk_resp(#mpb_readchunkresp{status='OK', chunks=PB_Chunks, trimme csum=#mpb_chunkcsum{type=T, csum=Ck}}) -> %% TODO: cleanup export Csum = <<(machi_pb_translate:conv_to_csum_tag(T)):8, Ck/binary>>, - {File, Offset, Chunk, Csum} + {list_to_binary(File), Offset, Chunk, Csum} end, PB_Chunks), Trimmed = lists:map(fun(#mpb_chunkpos{file_name=File, offset=Offset, chunk_size=Size}) -> - {File, Offset, Size} + {list_to_binary(File), Offset, Size} end, PB_Trimmed), {ok, {Chunks, Trimmed}}; convert_read_chunk_resp(#mpb_readchunkresp{status=Status}) -> diff --git a/src/machi_pb_translate.erl b/src/machi_pb_translate.erl index 707a339..1fd5f8b 100644 --- a/src/machi_pb_translate.erl +++ b/src/machi_pb_translate.erl @@ -274,12 +274,12 @@ from_pb_response(#mpb_ll_response{ chunk=Bytes, csum=#mpb_chunkcsum{type=T,csum=Ck}}) -> Csum = <<(conv_to_csum_tag(T)):8, Ck/binary>>, - {File, Offset, Bytes, Csum} + {list_to_binary(File), Offset, Bytes, Csum} end, PB_Chunks), Trimmed = lists:map(fun(#mpb_chunkpos{file_name=File, offset=Offset, chunk_size=Size}) -> - {File, Offset, Size} + {list_to_binary(File), Offset, Size} end, PB_Trimmed), {ReqID, {ok, {Chunks, Trimmed}}}; _ -> diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl 
index 58afa9a..6265505 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -121,9 +121,7 @@ append(CRIndex, Bin, #state{verbose=V}=S) -> NSInfo = #ns_info{}, NoCSum = <<>>, Opts1 = #append_opts{}, -io:format(user, "append_chunk ~p ~P ->\n", [Prefix, Bin, 6]), Res = (catch machi_cr_client:append_chunk(C, NSInfo, Prefix, Bin, NoCSum, Opts1, sec(1))), -io:format(user, "append_chunk ~p ~P ->\n ~p\n", [Prefix, Bin, 6, Res]), case Res of {ok, {_Off, Len, _FileName}=Key} -> case ets:insert_new(?WRITTEN_TAB, {Key, Bin}) of @@ -190,7 +188,6 @@ change_partition(Partition, [] -> ?V("## Turn OFF partition: ~w~n", [Partition]); _ -> ?V("## Turn ON partition: ~w~n", [Partition]) end || Verbose], - io:format(user, "partition ~p\n", [Partition]), machi_partition_simulator:always_these_partitions(Partition), _ = machi_partition_simulator:get(FLUNames), %% Don't wait for stable chain, tick will be executed on demand @@ -459,15 +456,14 @@ confirm_written(C) -> assert_chunk(C, {Off, Len, FileName}=Key, Bin) -> %% TODO: This probably a bug, read_chunk respnds with filename of `string()' type - FileNameStr = binary_to_list(FileName), %% TODO : Use CSum instead of binary (after disuccsion about CSum is calmed down?) 
NSInfo = undefined, case (catch machi_cr_client:read_chunk(C, NSInfo, FileName, Off, Len, undefined, sec(3))) of - {ok, {[{FileNameStr, Off, Bin, _}], []}} -> + {ok, {[{FileName, Off, Bin, _}], []}} -> ok; {ok, Got} -> ?V("read_chunk got different binary for Key=~p~n", [Key]), - ?V(" Expected: ~p~n", [{[{FileNameStr, Off, Bin, <<"CSum-NYI">>}], []}]), + ?V(" Expected: ~p~n", [{[{FileName, Off, Bin, <<"CSum-NYI">>}], []}]), ?V(" Got: ~p~n", [Got]), {error, different_binary}; {error, Reason} -> diff --git a/test/machi_cr_client_test.erl b/test/machi_cr_client_test.erl index 7e4d31c..29e1d13 100644 --- a/test/machi_cr_client_test.erl +++ b/test/machi_cr_client_test.erl @@ -119,7 +119,7 @@ smoke_test2() -> machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), {ok, {Off1,Size1,File1}} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), - BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, + BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, "foo")}, {error, bad_checksum} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), {ok, {[{_, Off1, Chunk1, _}], []}} = @@ -140,10 +140,10 @@ smoke_test2() -> File1, FooOff1, Size1, undefined) || X <- [0,1,2] ], ok = machi_flu1_client:write_chunk(Host, PortBase+0, NSInfo, EpochID, File1, FooOff1, Chunk1, NoCSum), - {ok, {[{_, FooOff1, Chunk1, _}], []}} = + {ok, {[{File1, FooOff1, Chunk1, _}=_YY], []}} = machi_flu1_client:read_chunk(Host, PortBase+0, NSInfo, EpochID, File1, FooOff1, Size1, undefined), - {ok, {[{_, FooOff1, Chunk1, _}], []}} = + {ok, {[{File1, FooOff1, Chunk1, _}], []}} = machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff1, Size1, undefined), [?assertMatch({X,{ok, {[{_, FooOff1, Chunk1, _}], []}}}, {X,machi_flu1_client:read_chunk( @@ -157,9 +157,9 @@ smoke_test2() -> Size2 = size(Chunk2), ok = machi_flu1_client:write_chunk(Host, PortBase+1, NSInfo, EpochID, File1, FooOff2, Chunk2, NoCSum), - {ok, {[{_, FooOff2, Chunk2, _}], []}} = + {ok, {[{File1, FooOff2, Chunk2, _}], 
[]}} = machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff2, Size2, undefined), - [{X,{ok, {[{_, FooOff2, Chunk2, _}], []}}} = + [{X,{ok, {[{File1, FooOff2, Chunk2, _}], []}}} = {X,machi_flu1_client:read_chunk( Host, PortBase+X, NSInfo, EpochID, File1, FooOff2, Size2, undefined)} || X <- [0,1,2] ], @@ -167,7 +167,7 @@ smoke_test2() -> %% Misc API smoke & minor regression checks {error, bad_arg} = machi_cr_client:read_chunk(C1, NSInfo, <<"no">>, 999999999, 1, undefined), - {ok, {[{_,Off1,Chunk1,_}, {_,FooOff1,Chunk1,_}, {_,FooOff2,Chunk2,_}], + {ok, {[{File1,Off1,Chunk1,_}, {File1,FooOff1,Chunk1,_}, {File1,FooOff2,Chunk2,_}], []}} = machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, 88888888, undefined), %% Checksum list return value is a primitive binary(). @@ -242,7 +242,7 @@ witness_smoke_test2() -> Chunk1, NoCSum), {ok, {Off1,Size1,File1}} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), - BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:sha("foo")}, + BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, "foo")}, {error, bad_checksum} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), {ok, {[{_, Off1, Chunk1, _}], []}} = diff --git a/test/machi_pb_high_client_test.erl b/test/machi_pb_high_client_test.erl index 468b183..68df0c9 100644 --- a/test/machi_pb_high_client_test.erl +++ b/test/machi_pb_high_client_test.erl @@ -78,7 +78,7 @@ smoke_test2() -> {iolist_to_binary(Chunk2), File2, Off2, Size2}, {iolist_to_binary(Chunk3), File3, Off3, Size3}], [begin - File = binary_to_list(Fl), + File = Fl, ?assertMatch({ok, {[{File, Off, Ch, _}], []}}, ?C:read_chunk(Clnt, Fl, Off, Sz, undefined)) end || {Ch, Fl, Off, Sz} <- Reads], @@ -105,7 +105,7 @@ smoke_test2() -> [begin {ok, {[], Trimmed}} = ?C:read_chunk(Clnt, Fl, Off, Sz, #read_opts{needs_trimmed=true}), - Filename = binary_to_list(Fl), + Filename = Fl, ?assertEqual([{Filename, Off, Sz}], Trimmed) end || {_Ch, Fl, Off, Sz} <- Reads], -- 2.45.2 From b246ebc3767e67cb00131892c3fee099e68b0f4a Mon 
Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Sun, 14 Feb 2016 15:59:50 +0900 Subject: [PATCH 25/53] Rearrange unfinished NS locator reminder spam in machi_flu1_net_server.erl --- src/machi_flu1_net_server.erl | 28 ++++++++++++++++++++-------- 1 file changed, 20 insertions(+), 8 deletions(-) diff --git a/src/machi_flu1_net_server.erl b/src/machi_flu1_net_server.erl index 7a9f549..ed3d980 100644 --- a/src/machi_flu1_net_server.erl +++ b/src/machi_flu1_net_server.erl @@ -579,29 +579,29 @@ do_pb_hl_request2({high_echo, Msg}, S) -> {Msg, S}; do_pb_hl_request2({high_auth, _User, _Pass}, S) -> {-77, S}; -do_pb_hl_request2({high_append_chunk, NS, Prefix, Chunk, TaggedCSum, Opts}, +do_pb_hl_request2({high_append_chunk=Op, NS, Prefix, Chunk, TaggedCSum, Opts}, #state{high_clnt=Clnt}=S) -> NSInfo = #ns_info{name=NS}, % TODO populate other fields - io:format(user, "TODO fix broken append_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), + todo_perhaps_remind_ns_locator_not_chosen(Op), Res = machi_cr_client:append_chunk(Clnt, NSInfo, Prefix, Chunk, TaggedCSum, Opts), {Res, S}; -do_pb_hl_request2({high_write_chunk, File, Offset, Chunk, CSum}, +do_pb_hl_request2({high_write_chunk=Op, File, Offset, Chunk, CSum}, #state{high_clnt=Clnt}=S) -> NSInfo = undefined, - io:format(user, "TODO fix broken write_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), + todo_perhaps_remind_ns_locator_not_chosen(Op), Res = machi_cr_client:write_chunk(Clnt, NSInfo, File, Offset, Chunk, CSum), {Res, S}; -do_pb_hl_request2({high_read_chunk, File, Offset, Size, Opts}, +do_pb_hl_request2({high_read_chunk=Op, File, Offset, Size, Opts}, #state{high_clnt=Clnt}=S) -> NSInfo = undefined, - io:format(user, "TODO fix broken read_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), + todo_perhaps_remind_ns_locator_not_chosen(Op), Res = machi_cr_client:read_chunk(Clnt, NSInfo, File, Offset, Size, Opts), {Res, S}; -do_pb_hl_request2({high_trim_chunk, File, Offset, Size}, +do_pb_hl_request2({high_trim_chunk=Op, File, Offset, 
Size}, #state{high_clnt=Clnt}=S) -> NSInfo = undefined, - io:format(user, "TODO fix broken trim_chunk mod ~s line ~w\n", [?MODULE, ?LINE]), + todo_perhaps_remind_ns_locator_not_chosen(Op), Res = machi_cr_client:trim_chunk(Clnt, NSInfo, File, Offset, Size), {Res, S}; do_pb_hl_request2({high_checksum_list, File}, #state{high_clnt=Clnt}=S) -> @@ -620,3 +620,15 @@ make_high_clnt(#state{high_clnt=undefined}=S) -> S#state{high_clnt=Clnt}; make_high_clnt(S) -> S. + +todo_perhaps_remind_ns_locator_not_chosen(Op) -> + Key = {?MODULE, Op}, + case get(Key) of + undefined -> + io:format(user, "TODO op ~w is using default locator value\n", + [Op]), + put(Key, true); + _ -> + ok + end. + -- 2.45.2 From 12ebf4390d251ba99e837784934f205cd503a75a Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Sun, 14 Feb 2016 16:00:11 +0900 Subject: [PATCH 26/53] Undo testing restriction in test/machi_ap_repair_eqc.erl --- test/machi_ap_repair_eqc.erl | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/test/machi_ap_repair_eqc.erl b/test/machi_ap_repair_eqc.erl index 6265505..55bc082 100644 --- a/test/machi_ap_repair_eqc.erl +++ b/test/machi_ap_repair_eqc.erl @@ -199,9 +199,7 @@ change_partition(Partition, %% Generators num() -> - 2. - %% TODO:put me back - %% choose(2, 5). + choose(2, 5). cr_count(Num) -> Num * 3. -- 2.45.2 From 9d4483ae68fc8791797d1ff85857cc3c20992a3a Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Sun, 14 Feb 2016 16:10:33 +0900 Subject: [PATCH 27/53] Minor edits to doc/cluster/name-game-sketch.org --- doc/cluster/name-game-sketch.org | 128 +++++++++++++++++-------------- 1 file changed, 70 insertions(+), 58 deletions(-) diff --git a/doc/cluster/name-game-sketch.org b/doc/cluster/name-game-sketch.org index 83dd9a8..21d2bd6 100644 --- a/doc/cluster/name-game-sketch.org +++ b/doc/cluster/name-game-sketch.org @@ -41,13 +41,14 @@ We wish to provide partitioned/distributed file storage across all ~n~ chains. 
We call the entire collection of ~n~ Machi chains a "cluster". -We may wish to have several types of Machi clusters, e.g. +We may wish to have several types of Machi clusters. For example: -+ Chain length of 3 for normal data, longer for - cannot-afford-data-loss files, -+ Chain length of 1 for don't-care-if-it-gets-lost, - store-stuff-very-very-cheaply files. -+ Chain length of 7 for critical, unreplaceable files. ++ Chain length of 1 for "don't care if it gets lost, + store stuff very very cheaply" data. ++ Chain length of 2 for normal data. + + Equivalent to quorum replication's reliability with 3 copies. ++ Chain length of 7 for critical, unreplaceable data. + + Equivalent to quorum replication's reliability with 15 copies. Each of these types of chains will have a name ~N~ in the namespace. The role of the cluster namespace will be demonstrated in @@ -60,14 +61,31 @@ inside of a cluster is completely unaware of the cluster layer. ** The reader is familiar with the random slicing technique -I'd done something very-very-nearly-identical for the Hibari database +I'd done something very-very-nearly-like-this for the Hibari database 6 years ago. But the Hibari technique was based on stuff I did at -Sendmail, Inc, so it felt old news to me. {shrug} +Sendmail, Inc, in 2000, so this technique feels like old news to me. +{shrug} -The Hibari documentation has a brief photo illustration of how random -slicing works, see [[http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration][Hibari Sysadmin Guide, chain migration]] +The following section provides an illustrated example. +Very quickly, the random slicing algorithm is: -For a comprehensive description, please see these two papers: +- Hash a string onto the unit interval [0.0, 1.0) +- Calculate h(unit interval point, Map) -> bin, where ~Map~ divides + the unit interval into bins (or partitions or shards). + +Machi's adaptation is in step 1: we do not hash any strings. 
Instead, we +simply choose a number on the unit interval. This number is called +the "cluster locator number". + +As described later in this doc, Machi file names are structured into +several components. One component of the file name contains the cluster +locator number; we use the number as-is for step 2 above. + +*** For more information about Random Slicing + +For a comprehensive description of random slicing, please see the +first two papers. For a quicker summary, please see the third +reference. #+BEGIN_QUOTE Reliable and Randomized Data Distribution Strategies for Large Scale Storage Systems @@ -80,22 +98,11 @@ Random Slicing: Efficient and Scalable Data Placement for Large-Scale Alberto Miranda et al. DOI: http://dx.doi.org/10.1145/2632230 (long version, ACM Transactions on Storage, Vol. 10, No. 3, Article 9, 2014) + +[[http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration][Hibari Sysadmin Guide, chain migration section]]. +http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration #+END_QUOTE -In general, random slicing says: - -- Hash a string onto the unit interval [0.0, 1.0) -- Calculate h(unit interval point, Map) -> bin, where ~Map~ partitions - the unit interval into bins. - -Our adaptation is in step 1: we do not hash any strings. Instead, we -simply choose a number on the unit interval. This number is called -the "cluster locator". - -As described later in this doc, Machi file names are structured into -several components. One component of the file name contains the "cluster -locator"; we use the number as-is for step 2 above. - * 3. A simple illustration We use a variation of the Random Slicing hash that we will call @@ -127,25 +134,28 @@ Assume that we have a random slicing map called ~Map~. This particular | 0.66 - 0.91 | Chain3 | | 0.91 - 1.00 | Chain4 | -Assume that the system chooses a chain locator of 0.05. +Assume that the system chooses a cluster locator of 0.05. 
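[Editor's illustration] The lookup half of random slicing is small enough to sketch directly. The following Python fragment is illustrative only (Machi itself is Erlang, and the list-of-ranges encoding of ~Map~ is an assumption of this sketch, not Machi's data structure); it models ~rs_hash_with_float/2~ over the example ~Map~ in the table above:

```python
# Illustrative sketch of rs_hash_with_float/2 -- NOT Machi's actual
# (Erlang) implementation.  The map is the example ~Map~ from the text,
# encoded here as (low, high, chain) ranges over the unit interval.
EXAMPLE_MAP = [
    (0.00, 0.25, "Chain1"),
    (0.25, 0.33, "Chain4"),
    (0.33, 0.58, "Chain2"),
    (0.58, 0.66, "Chain3"),
    (0.66, 0.91, "Chain3"),
    (0.91, 1.00, "Chain4"),
]

def rs_hash_with_float(locator, rs_map):
    """Map a cluster locator number on [0.0, 1.0) to a chain name."""
    for low, high, chain in rs_map:
        # Boundary convention is half-open here for simplicity; the
        # text writes ranges as (low, high].
        if low <= locator < high:
            return chain
    raise ValueError("map does not cover locator %r" % (locator,))
```

With this map, a locator of 0.05 falls in Chain1's first range and 0.26 falls in Chain4's first range.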
According to ~Map~, the value of ~rs_hash_with_float(0.05,Map) = Chain1~. Similarly, ~rs_hash_with_float(0.26,Map) = Chain4~. +This example should look very similar to Hibari's technique. +The Hibari documentation has a brief photo illustration of how random +slicing works, see [[http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration][Hibari Sysadmin Guide, chain migration]]. + * 4. Use of the cluster namespace: name separation plus chain type Let us assume that the cluster framework provides several different types of chains: -| | | Consistency | | -| Chain length | Namespace | Mode | Comment | -|--------------+------------+-------------+----------------------------------| -| 3 | normal | eventual | Normal storage redundancy & cost | -| 2 | reduced | eventual | Reduced cost storage | -| 1 | risky | eventual | Really, really cheap storage | -| 7 | paranoid | eventual | Safety-critical storage | -| 3 | sequential | strong | Strong consistency | -|--------------+------------+-------------+----------------------------------| +| Chain length | Namespace | Consistency Mode | Comment | +|--------------+--------------+------------------+----------------------------------| +| 3 | ~normal~ | eventual | Normal storage redundancy & cost | +| 2 | ~reduced~ | eventual | Reduced cost storage | +| 1 | ~risky~ | eventual | Really, really cheap storage | +| 7 | ~paranoid~ | eventual | Safety-critical storage | +| 3 | ~sequential~ | strong | Strong consistency | +|--------------+--------------+------------------+----------------------------------| The client may want to choose the amount of redundancy that its application requires: normal, reduced cost, or perhaps even a single @@ -155,17 +165,17 @@ intention. Further, the cluster administrators may wish to use the namespace to provide separate storage for different applications. Jane's application may use the namespace "jane-normal" and Bob's app uses -"bob-reduced". 
Administrators may definite separate groups of +"bob-reduced". Administrators may define separate groups of chains on separate servers to serve these two applications. * 5. In its lifetime, a file may be moved to different chains The cluster management scheme may decide that files need to migrate to -other chains. The reason could be for storage load or I/O load -balancing reasons. It could be because a chain is being -decommissioned by its owners. There are many legitimate reasons why a -file that is initially created on chain ID X has been moved to -chain ID Y. +other chains -- i.e., a file that is initially created on chain ID ~X~ +has been moved to chain ID ~Y~. + ++ For storage load or I/O load balancing reasons. ++ Because a chain is being decommissioned by the sysadmin. * 6. Floating point is not required ... it is merely convenient for explanation @@ -260,7 +270,7 @@ implementation. Other protocols, such as HTTP, will be added later. such as disk utilization percentage. 2. Cluster bridge knows the cluster ~Map~ for namespace ~N~. 3. Cluster bridge choose some cluster locator value ~L~ such that - ~rs_hash_with_float(L,Map) = T~ (see below). + ~rs_hash_with_float(L,Map) = T~ (see algorithm below). 4. Cluster bridge sends its request to chain ~T~: ~append_chunk(p,L,N,...) -> {ok,p^L^N^u,ByteOffset}~ 5. Cluster bridge forwards the reply tuple to the client. @@ -278,7 +288,7 @@ implementation. Other protocols, such as HTTP, will be added later. ~read_chunk(F,...) ->~ ... reply 5. Cluster bridge forwards the reply to the client. -** The details: calculating 'L' (the Cluster locator) to match a desired target chain +** The details: calculating 'L' (the cluster locator number) to match a desired target chain 1. We know ~Map~, the current cluster mapping for a cluster namespace ~N~. 2. We look inside of ~Map~, and we find all of the unit interval ranges 3. 
In our example, ~T=Chain2~. The example ~Map~ contains a single unit interval range for ~Chain2~, ~[(0.33,0.58]]~. 4. Choose a uniformly random number ~r~ on the unit interval. -5. Calculate locator ~L~ by mapping ~r~ onto the concatenation +5. Calculate the cluster locator ~L~ by mapping ~r~ onto the concatenation of the cluster hash space range intervals in ~MapList~. For example, if ~r=0.5~, then ~L = 0.33 + 0.5*(0.58-0.33) = 0.455~, which is exactly in the middle of the ~(0.33,0.58]~ interval. ** A bit more about the cluster namespaces's meaning and use -- The cluster framework will provide means of creating and managing - chains of different types, e.g., chain length, consistency mode. -- The cluster framework will manage the mapping of cluster namespace - names to the chains in the system. -- The cluster framework will provide query functions to map a cluster - namespace name to a cluster map, - e.g. ~get_cluster_latest_map("reduced") -> Map{generation=7,...}~. - For use by Riak CS, for example, we'd likely start with the following namespaces ... working our way down the list as we add new features and/or re-implement existing CS features. @@ -312,6 +314,16 @@ and/or re-implement existing CS features. use this namespace for the metadata required to re-implement the operations that are performed by today's Stanchion application. +We want the cluster framework to: + +- provide means of creating and managing + chains of different types, e.g., chain length, consistency mode. +- manage the mapping of cluster namespace + names to the chains in the system. +- provide query functions to map a cluster + namespace name to a cluster map, + e.g. ~get_cluster_latest_map("reduced") -> Map{generation=7,...}~. + * 8. File migration (a.k.a. rebalancing/reparitioning/resharding/redistribution) ** What is "migration"? @@ -332,7 +344,7 @@ get full, hardware will change, read workload will fluctuate, etc etc. 
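[Editor's illustration] The five-step procedure for picking a cluster locator ~L~ for a desired target chain, described in section 7 above, can be written out as follows. Python is used purely as illustrative pseudocode; the map encoding and the function name are assumptions of this sketch, not Machi's API:

```python
import random

# Example ~Map~ from the text, encoded as (low, high, chain) ranges.
EXAMPLE_MAP = [(0.00, 0.25, "Chain1"), (0.25, 0.33, "Chain4"),
               (0.33, 0.58, "Chain2"), (0.58, 0.66, "Chain3"),
               (0.66, 0.91, "Chain3"), (0.91, 1.00, "Chain4")]

def choose_locator(target_chain, rs_map, r=None):
    """Pick a cluster locator L so that L lands inside one of
    target_chain's unit-interval ranges (steps 2-5 in the text)."""
    if r is None:
        r = random.random()              # step 4: uniform r on [0.0, 1.0)
    map_list = [(low, high) for low, high, c in rs_map if c == target_chain]
    total = sum(high - low for low, high in map_list)
    offset = r * total                   # step 5: map r onto the
    for low, high in map_list:           # concatenation of the ranges
        if offset <= (high - low):
            return low + offset
        offset -= (high - low)
    return map_list[-1][1]               # floating-point edge case
```

For ~r=0.5~ and target ~Chain2~ this yields 0.455, the value worked out in step 5 above.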
This document uses the word "migration" to describe moving data from -one Machi chain to another within a cluster system. +one Machi chain to another chain within a cluster system. A simple variation of the Random Slicing hash algorithm can easily accommodate Machi's need to migrate files without interfering with @@ -433,9 +445,9 @@ The HibariDB system performs data migrations in almost exactly this manner. However, one important limitation of HibariDB is not being able to perform more than one migration at a time. HibariDB's data is -mutable, and mutation causes many problems already when migrating data +mutable. Mutation causes many problems when migrating data across two submaps; three or more submaps was too complex to implement -quickly. +quickly and correctly. Fortunately for Machi, its file data is immutable and therefore can easily manage many migrations in parallel, i.e., its submap list may @@ -450,15 +462,15 @@ file for any prefix, as long as all prerequisites are also true, - The epoch has not changed. (In AP mode, epoch change -> mandatory file name suffix change.) -- The locator number is stable. +- The cluster locator number is stable. - The latest file for prefix ~p~ is smaller than maximum file size for a FLU's configuration. -The stability of the locator number is an implementation detail that +The stability of the cluster locator number is an implementation detail that must be managed by the cluster bridge. Reuse of the same file is not possible if the bridge always chooses a -different locator number ~L~ or if the client always uses a unique +different cluster locator number ~L~ or if the client always uses a unique file prefix ~p~. The latter is a sign of a misbehaved client; the former is a poorly-implemented bridge. 
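[Editor's illustration] Section 7's append reply names the new file ~p^L^N^u~, so a cluster bridge can recover the cluster locator number needed for routing simply by parsing the file name. A minimal sketch follows; the four-field '^'-separated layout is taken from this document's notation, and the real on-disk name may carry additional fields:

```python
def split_machi_filename(fname):
    """Split a cluster file name of the form p^L^N^u into its parts:
    client prefix, cluster locator number, namespace name, and the
    server-chosen unique suffix.  Field layout is this sketch's
    assumption, following the p^L^N^u notation in the text."""
    prefix, locator, namespace, unique = fname.split("^")
    return prefix, float(locator), namespace, unique
```

The recovered locator can then be fed straight into the random-slicing lookup (step 2 of the read protocol), which is why its stability matters for file reuse.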
-- 2.45.2 From 67dad7fb8a79dce091a964ea2c577721adc803d1 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Mon, 15 Feb 2016 17:51:08 +0900 Subject: [PATCH 28/53] Fix dialyzer warnings --- src/machi_cr_client.erl | 7 ++++--- src/machi_flu_filename_mgr.erl | 6 ------ 2 files changed, 4 insertions(+), 9 deletions(-) diff --git a/src/machi_cr_client.erl b/src/machi_cr_client.erl index b36c6ca..a726744 100644 --- a/src/machi_cr_client.erl +++ b/src/machi_cr_client.erl @@ -725,8 +725,9 @@ read_repair2(ap_mode=ConsistencyMode, {ok, {Chunks, _Trimmed}, GotItFrom} when is_list(Chunks) -> %% TODO: Repair trimmed chunks ToRepair = mutation_flus(P) -- [GotItFrom], - {reply, Reply, S1} = do_repair_chunks(Chunks, ToRepair, ReturnMode, [GotItFrom], - NSInfo, File, Depth, STime, S, {ok, Chunks}), + Reply = {ok, {Chunks, []}}, + {Reply, S1} = do_repair_chunks(Chunks, ToRepair, ReturnMode, [GotItFrom], + NSInfo, File, Depth, STime, S, Reply), {reply, Reply, S1}; {error, bad_checksum}=BadCS -> %% TODO: alternate strategy? @@ -753,7 +754,7 @@ do_repair_chunks([{_, Offset, Chunk, CSum}|T], Size = iolist_size(Chunk), case do_repair_chunk(ToRepair, ReturnMode, Chunk, CSum, [GotItFrom], NSInfo, File, Offset, Size, Depth, STime, S) of - {ok, Chunk, S1} -> + {reply, {ok, _}, S1} -> do_repair_chunks(T, ToRepair, ReturnMode, [GotItFrom], NSInfo, File, Depth, STime, S1, Reply); Error -> Error diff --git a/src/machi_flu_filename_mgr.erl b/src/machi_flu_filename_mgr.erl index ec66031..b25d146 100644 --- a/src/machi_flu_filename_mgr.erl +++ b/src/machi_flu_filename_mgr.erl @@ -191,12 +191,6 @@ generate_uuid_v4_str() -> io_lib:format("~8.16.0b-~4.16.0b-4~3.16.0b-~4.16.0b-~12.16.0b", [A, B, C band 16#0fff, D band 16#3fff bor 16#8000, E]). -find_file(DataDir, #ns_info{name=NS, locator=NSLocator}=_NSInfo, Prefix, N) -> - {_Filename, Path} = machi_util:make_data_filename(DataDir, - NS, NSLocator, - Prefix, "*", N), - filelib:wildcard(Path). 
- list_files(DataDir, Prefix) -> {F_bin, Path} = machi_util:make_data_filename(DataDir, "*^" ++ Prefix ++ "^*"), filelib:wildcard(binary_to_list(F_bin), filename:dirname(Path)). -- 2.45.2 From ed56a2c6cfb688b3051298bfcd3de4c4af938514 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 19 Feb 2016 14:49:24 +0900 Subject: [PATCH 29/53] Fix 'ranch' app dependency upon re-start w/FLUs configured ... and allow direct start by machi_sup for EUnit tests. --- src/machi.app.src | 4 ++-- src/machi_sup.erl | 10 ++++++++-- 2 files changed, 10 insertions(+), 4 deletions(-) diff --git a/src/machi.app.src b/src/machi.app.src index c26154f..a9f96f0 100644 --- a/src/machi.app.src +++ b/src/machi.app.src @@ -1,7 +1,7 @@ {application, machi, [ {description, "A village of write-once files."}, - {vsn, "0.0.0"}, - {applications, [kernel, stdlib, crypto, cluster_info]}, + {vsn, "0.0.1"}, + {applications, [kernel, stdlib, crypto, cluster_info, ranch]}, {mod,{machi_app,[]}}, {registered, []}, {env, [ diff --git a/src/machi_sup.erl b/src/machi_sup.erl index 6cf7695..f7ddd10 100644 --- a/src/machi_sup.erl +++ b/src/machi_sup.erl @@ -65,5 +65,11 @@ init([]) -> LifecycleMgr = {machi_lifecycle_mgr, {machi_lifecycle_mgr, start_link, []}, Restart, Shutdown, worker, []}, - - {ok, {SupFlags, [ServerSup, RanchSup, LifecycleMgr]}}. + RunningApps = [A || {A,_D,_V} <- application:which_applications()], + Specs = case lists:member(ranch, RunningApps) of + true -> + [ServerSup, LifecycleMgr]; + false -> + [ServerSup, RanchSup, LifecycleMgr] + end, + {ok, {SupFlags, Specs}}. 
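[Editor's illustration] The supervisor change above reduces to one predicate: embed the RanchSup child only when the ranch application is not already running on its own. A model of that decision as a pure function (Python, illustrative only; the spec names are placeholders for the Erlang child specs):

```python
def machi_sup_child_specs(running_apps):
    """Model of machi_sup:init/1 above: omit the embedded ranch
    supervisor when the 'ranch' application is already running,
    otherwise supervise it ourselves."""
    server_sup, ranch_sup, lifecycle_mgr = "ServerSup", "RanchSup", "LifecycleMgr"
    if "ranch" in running_apps:
        return [server_sup, lifecycle_mgr]
    return [server_sup, ranch_sup, lifecycle_mgr]
```

This avoids a crash on restart when ranch was started as a regular application (as the new app dependency requires) while still letting machi_sup start ranch directly for EUnit tests.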
-- 2.45.2 From affad6b1d31ccf0d37c7c84792dfb7f650b2c490 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 19 Feb 2016 14:56:53 +0900 Subject: [PATCH 30/53] Specify short timeout to ?FLU_PC:kick_projection_reaction() call --- src/machi_chain_manager1.erl | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index bdc142d..1436f44 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -2094,11 +2094,10 @@ react_to_env_C200(Retries, P_latest, S) -> ?REACT(c200), try AuthorProxyPid = proxy_pid(P_latest#projection_v1.author_server, S), - ?FLU_PC:kick_projection_reaction(AuthorProxyPid, []) + %% This is just advisory, we don't need a sync reply. + ?FLU_PC:kick_projection_reaction(AuthorProxyPid, [], 100) catch _Type:_Err -> - %% ?V("TODO: tell_author_yo is broken: ~p ~p\n", - %% [_Type, _Err]), - ok + ok end, react_to_env_C210(Retries, S). -- 2.45.2 From d5c3da78fb56666a40126c3212f1901b24173d41 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 19 Feb 2016 15:29:17 +0900 Subject: [PATCH 31/53] Change 'COMMIT epoch' logging & chain mgr options --- rel/files/app.config | 4 ++ src/machi_chain_manager1.erl | 42 ++++++++++----------- test/machi_chain_manager1_converge_demo.erl | 1 + 3 files changed, 26 insertions(+), 21 deletions(-) diff --git a/rel/files/app.config b/rel/files/app.config index eb330f3..a2c55ee 100644 --- a/rel/files/app.config +++ b/rel/files/app.config @@ -16,6 +16,10 @@ %% Default = 10 %% {metadata_manager_count, 2}, + %% Default options for chain manager processes. 
+ %% {chain_manager_opts, [{private_write_verbose,true}, + %% {private_write_verbose_confirm,true}]}, + %% Platform vars (mirror of reltool packaging) {platform_data_dir, "{{platform_data_dir}}"}, {platform_etc_dir, "{{platform_etc_dir}}"}, diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index 1436f44..4c826a1 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -234,11 +234,13 @@ test_read_latest_public_projection(Pid, ReadRepairP) -> %% manager's pid in MgrOpts and use direct gen_server calls to the %% local projection store. -init({MyName, InitMembersDict, MgrOpts}) -> +init({MyName, InitMembersDict, MgrOpts0}) -> put(ttt, [?LINE]), _ = random:seed(now()), init_remember_down_list(), + MgrOpts = MgrOpts0 ++ application:get_env(machi, chain_manager_opts, []), Opt = fun(Key, Default) -> proplists:get_value(Key, MgrOpts, Default) end, + InitWitness_list = Opt(witnesses, []), ZeroAll_list = [P#p_srvr.name || {_,P} <- orddict:to_list(InitMembersDict)], ZeroProj = make_none_projection(0, MyName, ZeroAll_list, @@ -2060,7 +2062,6 @@ react_to_env_C120(P_latest, FinalProps, #ch_mgr{proj_history=H, ?REACT(c120), H2 = add_and_trunc_history(P_latest, H, ?MAX_HISTORY_LENGTH), - %% diversion_c120_verbose_goop(P_latest, S), ?REACT({c120, [{latest, machi_projection:make_summary(P_latest)}]}), S2 = set_proj(S#ch_mgr{proj_history=H2, sane_transitions=Xtns + 1}, P_latest), @@ -2487,9 +2488,9 @@ poll_private_proj_is_upi_unanimous3(#ch_mgr{name=MyName, proj=P_current} = S) -> upi=_UPIRep, repairing=_RepairingRep} = NewProj, ok = machi_projection_store:write(ProjStore, private, NewProj), - case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of + case proplists:get_value(private_write_verbose_confirm, S#ch_mgr.opts) of true -> - io:format(user, "\n~s CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [machi_util:pretty_time(), _EpochRep, _CSumRep, _UPIRep, _RepairingRep, MyName]); + error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w 
rep ~w by ~w\n", [_EpochRep, _CSumRep, _UPIRep, _RepairingRep, MyName]); _ -> ok end, @@ -2965,34 +2966,33 @@ zerf_find_last_annotated(FLU, MajoritySize, S) -> [] % lists:flatten() will destroy end. -perhaps_verbose_c111(P_latest2, S) -> - case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of - true -> +perhaps_verbose_c111(P_latest2, #ch_mgr{name=MyName, opts=Opts}=S) -> + PrivWriteVerb = proplists:get_value(private_write_verbose, Opts, false), + PrivWriteVerbCONFIRM = proplists:get_value(private_write_verbose_confirm, Opts, false), + if PrivWriteVerb orelse PrivWriteVerbCONFIRM -> Dbg2X = lists:keydelete(react, 1, P_latest2#projection_v1.dbg2) ++ [{is_annotated,is_annotated(P_latest2)}], P_latest2x = P_latest2#projection_v1{dbg2=Dbg2X}, % limit verbose len. Last2 = get(last_verbose), Summ2 = machi_projection:make_summary(P_latest2x), - if P_latest2#projection_v1.upi == [], - (S#ch_mgr.proj)#projection_v1.upi /= [] -> - <<CSumRep:4/binary,_/binary>> = - P_latest2#projection_v1.epoch_csum, - io:format(user, "~s CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [machi_util:pretty_time(), (S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, S#ch_mgr.name]); + if PrivWriteVerb, Summ2 /= Last2 -> + put(last_verbose, Summ2), + ?V("\n~s ~p uses plain: ~w \n", + [machi_util:pretty_time(), MyName, Summ2]); true -> ok end, - case proplists:get_value(private_write_verbose, - S#ch_mgr.opts) of - %% case true of - true when Summ2 /= Last2 -> - put(last_verbose, Summ2), - ?V("\n~s ~p uses plain: ~w \n", - [machi_util:pretty_time(), S#ch_mgr.name, Summ2]); - _ -> + if PrivWriteVerbCONFIRM, + P_latest2#projection_v1.upi == [], + (S#ch_mgr.proj)#projection_v1.upi /= [] -> + <<CSumRep:4/binary,_/binary>> = + P_latest2#projection_v1.epoch_csum, + error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [(S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, S#ch_mgr.name]); + true -> ok end; - _ -> + true -> ok end. diff --git a/test/machi_chain_manager1_converge_demo.erl b/test/machi_chain_manager1_converge_demo.erl index cee7a78..4ef1cfa 100644 --- a/test/machi_chain_manager1_converge_demo.erl +++ b/test/machi_chain_manager1_converge_demo.erl @@ -134,6 +134,7 @@ Press control-c to interrupt the test....". %% convergence_demo_testfun(3). -define(DEFAULT_MGR_OPTS, [{private_write_verbose, false}, + {private_write_verbose_confirm, true}, {active_mode,false}, {use_partition_simulator, true}]). -- 2.45.2 From 0f543b4c4d805caa988029e7f4a6d44b5f3a4594 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 19 Feb 2016 16:30:18 +0900 Subject: [PATCH 32/53] Add author_server to CONFIRM messages --- src/machi_chain_manager1.erl | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index 4c826a1..65e6a69 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -2485,12 +2485,13 @@ poll_private_proj_is_upi_unanimous3(#ch_mgr{name=MyName, proj=P_current} = S) -> ProjStore = get_projection_store_pid_or_regname(S), #projection_v1{epoch_number=_EpochRep, epoch_csum= <<_CSumRep:4/binary,_/binary>>, + author_server=AuthRep, upi=_UPIRep, repairing=_RepairingRep} = NewProj, ok = machi_projection_store:write(ProjStore, private, NewProj), case proplists:get_value(private_write_verbose_confirm, S#ch_mgr.opts) of true -> - error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [_EpochRep, _CSumRep, _UPIRep, _RepairingRep, MyName]); + error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w rep ~w auth ~w by ~w\n", [_EpochRep, _CSumRep, _UPIRep, _RepairingRep, AuthRep, MyName]); _ -> ok end, @@ -2988,7 +2989,7 @@ perhaps_verbose_c111(P_latest2, #ch_mgr{name=MyName, opts=Opts}=S) -> (S#ch_mgr.proj)#projection_v1.upi /= [] -> <<CSumRep:4/binary,_/binary>> = P_latest2#projection_v1.epoch_csum, - error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", 
[(S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, S#ch_mgr.name]); + error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w rep ~w auth ~w by ~w\n", [(S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, P_latest2#projection_v1.author_server, S#ch_mgr.name]); true -> ok end; -- 2.45.2 From 2e46d199c8e540f8d0678650734d14dd527d11df Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 19 Feb 2016 16:38:43 +0900 Subject: [PATCH 33/53] Export csum_tag() type --- src/machi_dt.erl | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/src/machi_dt.erl b/src/machi_dt.erl index 6a57e86..0af3bb4 100644 --- a/src/machi_dt.erl +++ b/src/machi_dt.erl @@ -32,6 +32,12 @@ -type chunk_summary() :: {file_offset(), chunk_size(), chunk_bin(), chunk_cstrm()}. -type chunk_pos() :: {file_offset(), chunk_size(), file_name_s()}. -type chunk_size() :: non_neg_integer(). + +%% Tags that stand for how that checksum was generated. See +%% machi_util:make_tagged_csum/{1,2} for further documentation and +%% implementation. +-type csum_tag() :: none | client_sha | server_sha | server_regen_sha. + -type error_general() :: 'bad_arg' | 'wedged' | 'bad_checksum'. -type epoch_csum() :: binary(). -type epoch_num() :: -1 | non_neg_integer(). @@ -53,11 +59,6 @@ -type read_opts() :: #read_opts{}. -type read_opts_x() :: 'undefined' | 'noopt' | 'none' | #read_opts{}. -%% Tags that stand for how that checksum was generated. See -%% machi_util:make_tagged_csum/{1,2} for further documentation and -%% implementation. --type csum_tag() :: none | client_sha | server_sha | server_regen_sha. 
- -export_type([ append_opts/0, chunk/0, @@ -68,6 +69,7 @@ chunk_summary/0, chunk_pos/0, chunk_size/0, + csum_tag/0, error_general/0, epoch_csum/0, epoch_num/0, -- 2.45.2 From 53ce6d89dd9febabf9e32c760b8d5aa80bcd2d93 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 19 Feb 2016 18:02:56 +0900 Subject: [PATCH 34/53] Add verbose() option to machi_fitness --- src/machi_fitness.erl | 30 +++++++++++++++++++++++++----- 1 file changed, 25 insertions(+), 5 deletions(-) diff --git a/src/machi_fitness.erl b/src/machi_fitness.erl index 2b54244..70af62a 100644 --- a/src/machi_fitness.erl +++ b/src/machi_fitness.erl @@ -108,6 +108,7 @@ handle_call({update_local_down_list, Down, MembersDict}, _From, #state{my_flu_name=MyFluName, pending_map=OldMap, local_down=OldDown, members_dict=OldMembersDict, admin_down=AdminDown}=S) -> + verbose("FITNESS: ~w has down suspect ~w\n", [MyFluName, Down]), NewMap = store_in_map(OldMap, MyFluName, erlang:now(), Down, AdminDown, [props_yo]), S2 = if Down == OldDown, MembersDict == OldMembersDict -> @@ -119,13 +120,17 @@ handle_call({update_local_down_list, Down, MembersDict}, _From, end, {reply, ok, S2#state{local_down=Down}}; handle_call({add_admin_down, DownFLU, DownProps}, _From, - #state{local_down=OldDown, admin_down=AdminDown}=S) -> + #state{my_flu_name=MyFluName, + local_down=OldDown, admin_down=AdminDown}=S) -> + verbose("FITNESS: ~w add admin down ~w\n", [MyFluName, DownFLU]), NewAdminDown = [{DownFLU,DownProps}|lists:keydelete(DownFLU, 1, AdminDown)], S3 = finish_admin_down(erlang:now(), OldDown, NewAdminDown, [props_yo], S), {reply, ok, S3}; handle_call({delete_admin_down, DownFLU}, _From, - #state{local_down=OldDown, admin_down=AdminDown}=S) -> + #state{my_flu_name=MyFluName, + local_down=OldDown, admin_down=AdminDown}=S) -> + verbose("FITNESS: ~w delete admin down ~w\n", [MyFluName, DownFLU]), NewAdminDown = lists:keydelete(DownFLU, 1, AdminDown), S3 = finish_admin_down(erlang:now(), OldDown, NewAdminDown, [props_yo], 
S), @@ -143,7 +148,8 @@ handle_call(_Request, _From, S) -> handle_cast(_Msg, S) -> {noreply, S}. -handle_info({adjust_down_list, FLU}, #state{active_unfit=ActiveUnfit}=S) -> +handle_info({adjust_down_list, FLU}, #state{my_flu_name=MyFluName, + active_unfit=ActiveUnfit}=S) -> NewUnfit = make_unfit_list(S), Added_to_new = NewUnfit -- ActiveUnfit, Dropped_from_new = ActiveUnfit -- NewUnfit, @@ -184,9 +190,11 @@ handle_info({adjust_down_list, FLU}, #state{active_unfit=ActiveUnfit}=S) -> {true, true} -> error({bad, ?MODULE, ?LINE, FLU, ActiveUnfit, NewUnfit}); {true, false} -> - {noreply, S#state{active_unfit=lists:usort(ActiveUnfit ++ [FLU])}}; + NewActive = wrap_active(MyFluName,lists:usort(ActiveUnfit++[FLU])), + {noreply, S#state{active_unfit=NewActive}}; {false, true} -> - {noreply, S#state{active_unfit=ActiveUnfit -- [FLU]}}; + NewActive = wrap_active(MyFluName,ActiveUnfit--[FLU]), + {noreply, S#state{active_unfit=NewActive}}; {false, false} -> {noreply, S} end; @@ -424,6 +432,18 @@ map_value(Map) -> map_merge(Map1, Map2) -> ?MAP:merge(Map1, Map2). +wrap_active(MyFluName, L) -> + verbose("FITNESS: ~w has new down list ~w\n", [MyFluName, L]), + L. + +verbose(Fmt, Args) -> + case application:get_env(machi, fitness_verbose) of + {ok, true} -> + error_logger:info_msg(Fmt, Args); + _ -> + ok + end. + -ifdef(TEST). 
dt_understanding_test() -> -- 2.45.2 From 1d8bc198918d277b63140042d8fe11775548267e Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Mon, 22 Feb 2016 16:48:02 +0900 Subject: [PATCH 35/53] Fix repair-is-finished-but-message-not-consumed DoS during peer SIGSTOP --- src/machi_chain_manager1.erl | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index 65e6a69..18d92a3 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -390,6 +390,7 @@ handle_cast(_Cast, S) -> handle_info(tick_check_environment, #ch_mgr{ignore_timer=true}=S) -> {noreply, S}; handle_info(tick_check_environment, S) -> + gobble_ticks(), {{_Delta, Props, _Epoch}, S1} = do_react_to_env(S), S2 = sanitize_repair_state(S1), S3 = perhaps_start_repair(S2), @@ -2538,6 +2539,14 @@ gobble_calls(StaticCall) -> ok end. +gobble_ticks() -> + receive + tick_check_environment -> + gobble_ticks() + after 0 -> + ok + end. + %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% perhaps_start_repair(#ch_mgr{name=MyName, -- 2.45.2 From c02a0bed70fef037b49aa3b6f6144dd22384c04c Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Mon, 22 Feb 2016 17:03:50 +0900 Subject: [PATCH 36/53] Change 'uses' verbose message to error_logger:info --- src/machi_chain_manager1.erl | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index 18d92a3..4ab5a55 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -2988,8 +2988,8 @@ perhaps_verbose_c111(P_latest2, #ch_mgr{name=MyName, opts=Opts}=S) -> Summ2 = machi_projection:make_summary(P_latest2x), if PrivWriteVerb, Summ2 /= Last2 -> put(last_verbose, Summ2), - ?V("\n~s ~p uses plain: ~w \n", - [machi_util:pretty_time(), MyName, Summ2]); + error_logger:info_msg("~p uses plain: ~w \n", + [MyName, Summ2]); true -> ok end, -- 2.45.2 From 34f8632f194241cffac31b9dcde9aef031cbf6ac Mon Sep 
17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 23 Feb 2016 15:06:33 +0900 Subject: [PATCH 37/53] Add ranch startup to machi_chain_manager1_converge_demo --- test/machi_chain_manager1_converge_demo.erl | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/test/machi_chain_manager1_converge_demo.erl b/test/machi_chain_manager1_converge_demo.erl index 4ef1cfa..c1299cd 100644 --- a/test/machi_chain_manager1_converge_demo.erl +++ b/test/machi_chain_manager1_converge_demo.erl @@ -151,7 +151,8 @@ convergence_demo_testfun(NumFLUs, MgrOpts0) -> %% Faster test startup, commented: io:format(user, short_doc(), []), %% Faster test startup, commented: timer:sleep(3000), - application:start(sasl), + Apps = [sasl, ranch], + [application:start(App) || App <- Apps], MgrOpts = MgrOpts0 ++ ?DEFAULT_MGR_OPTS, TcpPort = proplists:get_value(port_base, MgrOpts, 62877), @@ -394,7 +395,8 @@ timer:sleep(1234), exit(SupPid, normal), ok = machi_partition_simulator:stop(), [ok = ?FLU_PC:quit(PPid) || {_, PPid} <- Namez], - machi_util:wait_for_death(SupPid, 100) + machi_util:wait_for_death(SupPid, 100), + [application:start(App) || App <- lists:reverse(Apps)] end. %% Many of the static partition lists below have been problematic at one -- 2.45.2 From a27425147dabf10e22d4453b37a678916b40f5cd Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 23 Feb 2016 15:07:16 +0900 Subject: [PATCH 38/53] Re-add a flapping check, but also take advantage of confirmed accepted epoch --- src/machi_chain_manager1.erl | 39 +++++++++++++++++++++--------------- 1 file changed, 23 insertions(+), 16 deletions(-) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index 4ab5a55..15b82ab 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -92,8 +92,11 @@ -define(REPAIR_START_STABILITY_TIME, 10). -endif. % TEST -%% Magic constant for looping "too frequently" breaker. TODO revisit & revise. --define(TOO_FREQUENT_BREAKER, 10). 
+%% Maximum length of the history of adopted projections (via C120). +-define(MAX_HISTORY_LENGTH, 8). + +%% Magic constant for looping "too frequently" breaker. +-define(TOO_FREQUENT_BREAKER, (?MAX_HISTORY_LENGTH+5)). -define(RETURN2(X), begin (catch put(why2, [?LINE|get(why2)])), X end). @@ -103,9 +106,6 @@ %% Amount of epoch number skip-ahead for set_chain_members call -define(SET_CHAIN_MEMBERS_EPOCH_SKIP, 1111). -%% Maximum length of the history of adopted projections (via C120). --define(MAX_HISTORY_LENGTH, 30). - %% API -export([start_link/2, start_link/3, stop/1, ping/1, set_chain_members/2, set_chain_members/6, set_active/2, @@ -463,7 +463,7 @@ get_my_proj_boot_info(MgrOpts, DefaultDict, DefaultProj, ProjType) -> {DefaultDict, DefaultProj}; Store -> {ok, P} = machi_projection_store:read_latest_projection(Store, - ProjType), + ProjType, 7789), {P#projection_v1.members_dict, P} end. @@ -840,7 +840,10 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg, D_foo=[{repair_done, {repair_final_status, ok, (S#ch_mgr.proj)#projection_v1.epoch_number}}], {NewUPI_list ++ Repairing_list2, [], RunEnv2}; true -> - D_foo=[d_foo2], + D_foo=[d_foo2, {sim_p,Simulator_p}, + {simr_p,SimRepair_p}, {same_epoch,SameEpoch_p}, + {rel_to,RelativeToServer}, + {repch,RepChk_LastInUPI}, {repair_fs,RepairFS}], {NewUPI_list, OldRepairing_list, RunEnv2} end; {_ABC, _XYZ} -> @@ -1977,7 +1980,7 @@ react_to_env_C110(P_latest, #ch_mgr{name=MyName} = S) -> %% In contrast to the public projection store writes, Humming Consensus %% doesn't care about the status of writes to the public store: it's %% always relying only on successful reads of the public store. 
- case {?FLU_PC:write_projection(MyStorePid, private, P_latest2,?TO*30),Goo} of + case {?FLU_PC:write_projection(MyStorePid, private, P_latest2,?TO*30+66),Goo} of {ok, Goo} -> ?REACT({c110, [{write, ok}]}), react_to_env_C111(P_latest, P_latest2, Extra1, MyStorePid, S); @@ -2070,20 +2073,21 @@ react_to_env_C120(P_latest, FinalProps, #ch_mgr{proj_history=H, false -> S2; {{_ConfEpoch, _ConfCSum}, ConfTime} -> - io:format(user, "\nCONFIRM debug C120 ~w was annotated ~w\n", [S#ch_mgr.name, P_latest#projection_v1.epoch_number]), + P_latestEpoch = P_latest#projection_v1.epoch_number, + io:format(user, "\nCONFIRM debug C120 ~w was annotated ~w\n", [S#ch_mgr.name, P_latestEpoch]), S2#ch_mgr{proj_unanimous=ConfTime} end, V = case file:read_file("/tmp/moomoo."++atom_to_list(S#ch_mgr.name)) of {ok,_} -> true; _ -> false end, if V -> io:format("C120: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end, {{now_using, FinalProps, P_latest#projection_v1.epoch_number}, S3}. -add_and_trunc_history(P_latest, H, MaxLength) -> +add_and_trunc_history(#projection_v1{epoch_number=0}, H, _MaxLength) -> + H; +add_and_trunc_history(#projection_v1{} = P_latest, H, MaxLength) -> Latest_U_R = {P_latest#projection_v1.upi, P_latest#projection_v1.repairing}, - H2 = if P_latest#projection_v1.epoch_number > 0 -> - queue:in(Latest_U_R, H); - true -> - H - end, + add_and_trunc_history(Latest_U_R, H, MaxLength); +add_and_trunc_history(Item, H, MaxLength) -> + H2 = queue:in(Item, H), case queue:len(H2) of X when X > MaxLength -> {_V, Hxx} = queue:out(H2), @@ -2499,7 +2503,10 @@ poll_private_proj_is_upi_unanimous3(#ch_mgr{name=MyName, proj=P_current} = S) -> %% Unwedge our FLU. 
{ok, NotifyPid} = machi_projection_store:get_wedge_notify_pid(ProjStore), _ = machi_flu1:update_wedge_state(NotifyPid, false, EpochID), - S2#ch_mgr{proj_unanimous=Now}; + #ch_mgr{proj_history=H} = S2, + H2 = add_and_trunc_history({confirm, Epoch}, H, + ?MAX_HISTORY_LENGTH), + S2#ch_mgr{proj_unanimous=Now, proj_history=H2}; _ -> S2 end; -- 2.45.2 From 11921d82bf3a2b5a225247adcc03c3c6eb047c1d Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Tue, 23 Feb 2016 17:30:30 +0900 Subject: [PATCH 39/53] WIP: start of demo doc --- .gitignore | 1 + Makefile | 33 ++++++- README.md | 11 ++- ...erge_demo.md => humming_consensus_demo.md} | 97 ++++++++++++++----- rel/gen_dev | 16 +++ rel/vars.config | 3 + rel/vars/dev_vars.config.src | 48 +++++++++ 7 files changed, 182 insertions(+), 27 deletions(-) rename doc/{machi_chain_manager1_converge_demo.md => humming_consensus_demo.md} (76%) create mode 100755 rel/gen_dev create mode 100644 rel/vars/dev_vars.config.src diff --git a/.gitignore b/.gitignore index 3af54ff..ef440c4 100644 --- a/.gitignore +++ b/.gitignore @@ -2,6 +2,7 @@ prototype/chain-manager/patch.* .eqc-info .eunit deps +dev erl_crash.dump .concrete/DEV_MODE .rebar diff --git a/Makefile b/Makefile index 7ff19ed..01b1e99 100644 --- a/Makefile +++ b/Makefile @@ -10,7 +10,7 @@ endif OVERLAY_VARS ?= EUNIT_OPTS = -v -.PHONY: rel deps package pkgclean edoc +.PHONY: rel stagedevrel deps package pkgclean edoc all: deps compile @@ -57,6 +57,37 @@ relclean: stage : rel $(foreach dep,$(wildcard deps/*), rm -rf rel/$(REPO)/lib/$(shell basename $(dep))* && ln -sf $(abspath $(dep)) rel/$(REPO)/lib;) +## +## Developer targets +## +## devN - Make a dev build for node N +## stagedevN - Make a stage dev build for node N (symlink libraries) +## devrel - Make a dev build for 1..$DEVNODES +## stagedevrel Make a stagedev build for 1..$DEVNODES +## +## Example, make a 68 node devrel cluster +## make stagedevrel DEVNODES=68 + +.PHONY : stagedevrel devrel +DEVNODES ?= 3 + +# 'seq' is not 
available on all *BSD, so using an alternate in awk +SEQ = $(shell awk 'BEGIN { for (i = 1; i < '$(DEVNODES)'; i++) printf("%i ", i); print i ;exit(0);}') + +$(eval stagedevrel : $(foreach n,$(SEQ),stagedev$(n))) +$(eval devrel : $(foreach n,$(SEQ),dev$(n))) + +dev% : all + mkdir -p dev + rel/gen_dev $@ rel/vars/dev_vars.config.src rel/vars/$@_vars.config + (cd rel && ../rebar generate target_dir=../dev/$@ overlay_vars=vars/$@_vars.config) + +stagedev% : dev% + $(foreach dep,$(wildcard deps/*), rm -rf dev/$^/lib/$(shell basename $(dep))* && ln -sf $(abspath $(dep)) dev/$^/lib;) + +devclean: clean + rm -rf dev + DIALYZER_APPS = kernel stdlib sasl erts ssl compiler eunit crypto public_key syntax_tools PLT = $(HOME)/.machi_dialyzer_plt diff --git a/README.md b/README.md index 28f77d2..37db1e0 100644 --- a/README.md +++ b/README.md @@ -64,6 +64,9 @@ Humming Consensus" is available online now. * [slides (PDF format)](http://ricon.io/speakers/slides/Scott_Fritchie_Ricon_2015.pdf) * [video](https://www.youtube.com/watch?v=yR5kHL1bu1Q) +See later in this document for how to run the Humming Consensus demos, +including the network partition simulator. + ## 3. Development status summary @@ -99,10 +102,10 @@ Mid-December 2015: work is underway. * The Erlang language client implementation of the high-level protocol flavor is brittle (e.g., little error handling yet). -If you would like to run the network partition simulator -mentioned in the Ricon 2015 presentation about Humming Consensus, -please see the -[partition simulator convergence test doc.](./doc/machi_chain_manager1_converge_demo.md) +If you would like to run the Humming Consensus code (with or without +the network partition simulator) as described in the RICON 2015 +presentation, please see the +[Humming Consensus demo doc.](./doc/humming_consensus_demo.md). 
If you'd like to work on a protocol such as Thrift, UBF, msgpack over UDP, or some other protocol, let us know by diff --git a/doc/machi_chain_manager1_converge_demo.md b/doc/humming_consensus_demo.md similarity index 76% rename from doc/machi_chain_manager1_converge_demo.md rename to doc/humming_consensus_demo.md index 2844bfa..eb66ebe 100644 --- a/doc/machi_chain_manager1_converge_demo.md +++ b/doc/humming_consensus_demo.md @@ -1,6 +1,75 @@ +# Table of contents + +* [Hands-on experiments with Machi and Humming Consensus](#hands-on) +* [Using the network partition simulator and convergence demo test code](#partition-simulator) + + +# Hands-on experiments with Machi and Humming Consensus + + +## Prerequisites + +1. Machi requires a OS X, FreeBSD, Linux, or Solaris machine. +2. You'll need the `git` source management utility. +3. You'll need the Erlang/OTP 17 runtime environment. Please don't + use earlier or later versions until we have a chance to fix the + compilation warnings that versions R16B and 18 will trigger. + +For `git` and the Erlang runtime, please use your OS-specific +package manager to install these. If your package manager doesn't +have Erlang/OTP version 17 available, then we recommend using the +[precompiled packages available at Erlang Solutions](https://www.erlang-solutions.com/resources/download.html). + +All of the commands that should be run at your login shell (e.g. Bash, +c-shell) can be cut-and-pasted from this document directly to your +login shell prompt. + + +## Clone and compile the code + +Clone the Machi source repo and compile the source and test code. Run +the following commands at your login shell: + + cd /tmp + git clone https://github.com/basho/machi.git + cd machi + git checkout master + make + +Then run the unit test suite. This may take up to two minutes or so +to finish. + + make test + +At the end, the test suite should report that all tests passed.
+ +If you had a test failure, a likely cause may be a limit on the number +of file descriptors available to your user process. (Recent releases +of OS X have a limit of 1024 file descriptors, which may be too slow.) +The output of the `limit -n` will tell you your file descriptor limit. + +## Running three Machi instances on a single machine + +Run the following command: + + make stagedevrel + +This will create a directory structure like this: + + |-dev1-|... stand-alone Machi app directories + |-dev-|-dev2-|... stand-alone Machi app directories + |-dev3-|... stand-alone Machi app directories + + # Using the network partition simulator and convergence demo test code +This is the demo code mentioned in the presentation that Scott Lystig +Fritchie gave at the +[RICON 2015 conference](http://ricon.io). +* [slides (PDF format)](http://ricon.io/speakers/slides/Scott_Fritchie_Ricon_2015.pdf) +* [video](https://www.youtube.com/watch?v=yR5kHL1bu1Q) + ## A complete example of all input and output If you don't have an Erlang/OTP 17 runtime environment available, @@ -15,31 +84,15 @@ To help interpret the output of the test, please skip ahead to the ## Prerequisites -1. You'll need the `git` source management -2. You'll need the Erlang/OTP 17 runtime environment. Please don't - use earlier or later versions until we have a chance to fix the - compilation warnings that versions R16B and 18 will trigger. - -All of the commands that should be run at your login shell (e.g. Bash, -c-shell) can be cut-and-pasted from this document directly to your -login shell prompt. +If you don't have `git` and/or the Erlang 17 runtime system available +on your OS X, FreeBSD, Linux, or Solaris machine, please take a look +at the [Prerequisites section](#prerequisites) first. When you have +installed the prerequisite software, please return here. ## Clone and compile the code -Clone the Machi source repo and compile the source and test code.
Run -the following commands at your login shell: - - cd /tmp - git clone https://github.com/basho/machi.git - cd machi - git checkout master - make - -Then run the unit test suite. This may take up to two minutes or so -to finish. Most of the tests will be silent; please be patient until -the tests finish. - - make test +Please briefly visit the [Clone and compile the code](#clone-compile) +section. When finished, please return back here. ## Run an interactive Erlang CLI shell diff --git a/rel/gen_dev b/rel/gen_dev new file mode 100755 index 0000000..1b8ce1b --- /dev/null +++ b/rel/gen_dev @@ -0,0 +1,16 @@ +#! /bin/sh +# +# Example usage: gen_dev dev4 vars.src vars +# +# Generate an overlay config for devNNN from vars.src and write to vars +# + +NAME=$1 +TEMPLATE=$2 +VARFILE=$3 + +NODE="$NAME@127.0.0.1" + +echo "Generating $NAME - node='$NODE'" +sed -e "s/@NODE@/$NODE/" \ + < $TEMPLATE > $VARFILE diff --git a/rel/vars.config b/rel/vars.config index 06b3aa0..b1bb405 100644 --- a/rel/vars.config +++ b/rel/vars.config @@ -1,6 +1,9 @@ %% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*- %% ex: ft=erlang ts=4 sw=4 et +%% NOTE: When modifying this file, also keep its near cousin +%% config file rel/vars/dev_vars.config.src in sync! + %% Platform-specific installation paths {platform_bin_dir, "./bin"}. {platform_data_dir, "./data"}. diff --git a/rel/vars/dev_vars.config.src b/rel/vars/dev_vars.config.src new file mode 100644 index 0000000..a5a3828 --- /dev/null +++ b/rel/vars/dev_vars.config.src @@ -0,0 +1,48 @@ +%% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*- +%% ex: ft=erlang ts=4 sw=4 et + +%% NOTE: When modifying this file, also keep its near cousin +%% config file rel/vars/dev_vars.config.src in sync! + +%% Platform-specific installation paths +{platform_bin_dir, "./bin"}. +{platform_data_dir, "./data"}. +{platform_etc_dir, "./etc"}. +{platform_lib_dir, "./lib"}. +{platform_log_dir, "./log"}. 
+ +%% +%% etc/app.config +%% +{sasl_error_log, "{{platform_log_dir}}/sasl-error.log"}. +{sasl_log_dir, "{{platform_log_dir}}/sasl"}. + +%% lager +{console_log_default, file}. + +%% +%% etc/vm.args +%% +{node, "@NODE@"}. +{crash_dump, "{{platform_log_dir}}/erl_crash.dump"}. + +%% +%% bin/machi +%% +{runner_script_dir, "\`cd \\`dirname $0\\` 1>/dev/null && /bin/pwd\`"}. +{runner_base_dir, "{{runner_script_dir}}/.."}. +{runner_etc_dir, "$RUNNER_BASE_DIR/etc"}. +{runner_log_dir, "$RUNNER_BASE_DIR/log"}. +{runner_lib_dir, "$RUNNER_BASE_DIR/lib"}. +{runner_patch_dir, "$RUNNER_BASE_DIR/lib/basho-patches"}. +{pipe_dir, "/tmp/$RUNNER_BASE_DIR/"}. +{runner_user, ""}. +{runner_wait_process, "machi_flu_sup"}. +{runner_ulimit_warn, 65536}. + +%% +%% cuttlefish +%% +{cuttlefish, ""}. % blank = off +{cuttlefish_conf, "machi.conf"}. + -- 2.45.2 From 6c03f5c1a69342612de5c9be2da819b86a919c07 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 24 Feb 2016 15:08:41 +0900 Subject: [PATCH 40/53] Split out docs dev-clone-compile.md and dev-prerequisites.md --- README.md | 9 ++++--- doc/dev-clone-compile.md | 28 +++++++++++++++++++++ doc/dev-prerequisites.md | 18 ++++++++++++++ doc/humming_consensus_demo.md | 46 +++++++++-------------------------- 4 files changed, 63 insertions(+), 38 deletions(-) create mode 100644 doc/dev-clone-compile.md create mode 100644 doc/dev-prerequisites.md diff --git a/README.md b/README.md index 37db1e0..95e24a1 100644 --- a/README.md +++ b/README.md @@ -137,10 +137,13 @@ X. The only known limitations for using R16 are minor type specification difference between R16 and 17, but we strongly suggest continuing development using version 17. -We also assume that you have the standard UNIX/Linux developers -tool chain for C and C++ applications. Specifically, we assume `make` -is available. The utility used to compile the Machi source code, +We also assume that you have the standard UNIX/Linux developer +tool chain for C and C++ applications. 
Also, we assume +that Git and GNU Make are available. +The utility used to compile the Machi source code, `rebar`, is pre-compiled and included in the repo. +For more details, please see the +[Machi development environment prerequisites doc](./doc/dev-prerequisites.md). Machi has a dependency on the [ELevelDB](https://github.com/basho/eleveldb) library. ELevelDB only diff --git a/doc/dev-clone-compile.md b/doc/dev-clone-compile.md new file mode 100644 index 0000000..9795bb3 --- /dev/null +++ b/doc/dev-clone-compile.md @@ -0,0 +1,28 @@ +# Clone and compile Machi + +Clone the Machi source repo and compile the source and test code. Run +the following commands at your login shell: + + cd /tmp + git clone https://github.com/basho/machi.git + cd machi + git checkout master + make # or 'gmake' if GNU make uses an alternate name + +Then run the unit test suite. This may take up to two minutes or so +to finish. + + make test + +At the end, the test suite should report that all tests passed. + + [... many lines omitted ...] + module 'event_logger' + module 'chain_mgr_legacy' + ======================================================= + All 90 tests passed. + +If you had a test failure, a likely cause may be a limit on the number +of file descriptors available to your user process. (Recent releases +of OS X have a limit of 1024 file descriptors, which may be too low.) +The output of `ulimit -n` will tell you your file descriptor limit. diff --git a/doc/dev-prerequisites.md b/doc/dev-prerequisites.md new file mode 100644 index 0000000..b3987ad --- /dev/null +++ b/doc/dev-prerequisites.md @@ -0,0 +1,18 @@ +## Machi developer environment prerequisites + +1. Machi requires an OS X, FreeBSD, Linux, or Solaris machine is a + standard developer environment for C and C++ applications. +2. You'll need the `git` source management utility. +3. You'll need the Erlang/OTP 17 runtime environment.
Please don't + use earlier or later versions until we have a chance to fix the + compilation warnings that versions R16B and 18 will trigger. + +For `git` and the Erlang runtime, please use your OS-specific +package manager to install these. If your package manager doesn't +have Erlang/OTP version 17 available, then we recommend using the +[precompiled packages available at Erlang Solutions](https://www.erlang-solutions.com/resources/download.html). + +Also, please verify that you have enough file descriptors available to +your user processes. The output of `ulimit -n` should report at least +4,000 file descriptors available. If your limit is lower (a frequent +problem for OS X users), please increase it to at least 4,000. diff --git a/doc/humming_consensus_demo.md b/doc/humming_consensus_demo.md index eb66ebe..1f0f3c5 100644 --- a/doc/humming_consensus_demo.md +++ b/doc/humming_consensus_demo.md @@ -7,50 +7,26 @@ # Hands-on experiments with Machi and Humming Consensus - ## Prerequisites -1. Machi requires a OS X, FreeBSD, Linux, or Solaris machine. -2. You'll need the `git` source management utility. -3. You'll need the Erlang/OTP 17 runtime environment. Please don't - use earlier or later versions until we have a chance to fix the - compilation warnings that versions R16B and 18 will trigger. - -For `git` and the Erlang runtime, please use your OS-specific -package manager to install these. If your package manager doesn't -have Erlang/OTP version 17 available, then we recommend using the -[precompiled packages available at Erlang Solutions](https://www.erlang-solutions.com/resources/download.html). - -All of the commands that should be run at your login shell (e.g. Bash, -c-shell) can be cut-and-pasted from this document directly to your -login shell prompt. +Please refer to the +[Machi development environment prerequisites doc](./doc/dev-prerequisites.md) +for Machi developer environment prerequisites.
## Clone and compile the code -Clone the Machi source repo and compile the source and test code. Run -the following commands at your login shell: - - cd /tmp - git clone https://github.com/basho/machi.git - cd machi - git checkout master - make - -Then run the unit test suite. This may take up to two minutes or so -to finish. - - make test - -At the end, the test suite should report that all tests passed. - -If you had a test failure, a likely cause may be a limit on the number -of file descriptors available to your user process. (Recent releases -of OS X have a limit of 1024 file descriptors, which may be too slow.) -The output of the `limit -n` will tell you your file descriptor limit. +Please see the +[Machi 'clone and compile' doc](./doc/dev-clone-compile.md) +for the short list of steps required to fetch the Machi source code +from GitHub and to compile & test Machi. ## Running three Machi instances on a single machine +All of the commands that should be run at your login shell (e.g. Bash, +c-shell) can be cut-and-pasted from this document directly to your +login shell prompt. 
+ Run the following command: make stagedevrel -- 2.45.2 From bdf47da10cbce8812ba2f779ebd153b0305a2b71 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 24 Feb 2016 15:11:35 +0900 Subject: [PATCH 41/53] oops fix doc links --- .gitignore | 1 + README.md | 2 +- doc/humming_consensus_demo.md | 4 ++-- 3 files changed, 4 insertions(+), 3 deletions(-) diff --git a/.gitignore b/.gitignore index ef440c4..c6a0bf2 100644 --- a/.gitignore +++ b/.gitignore @@ -4,6 +4,7 @@ prototype/chain-manager/patch.* deps dev erl_crash.dump +eqc .concrete/DEV_MODE .rebar edoc diff --git a/README.md b/README.md index 95e24a1..6e86eb9 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Machi: a robust & reliable, distributed, highly available, large file store +# Machi: a distributed, decentralized blob/large file store [Travis-CI](http://travis-ci.org/basho/machi) :: ![Travis-CI](https://secure.travis-ci.org/basho/machi.png) diff --git a/doc/humming_consensus_demo.md b/doc/humming_consensus_demo.md index 1f0f3c5..7d01fe7 100644 --- a/doc/humming_consensus_demo.md +++ b/doc/humming_consensus_demo.md @@ -10,14 +10,14 @@ ## Prerequisites Please refer to the -[Machi development environment prerequisites doc](./doc/dev-prerequisites.md) +[Machi development environment prerequisites doc](./dev-prerequisites.md) for Machi developer environment prerequisites. ## Clone and compile the code Please see the -[Machi 'clone and compile' doc](./doc/dev-clone-compile.md) +[Machi 'clone and compile' doc](./dev-clone-compile.md) for the short list of steps required to fetch the Machi source code from GitHub and to compile & test Machi. 
-- 2.45.2 From a3fbe2c8bbc3af192d91512bf1b3cd14967072db Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Thu, 25 Feb 2016 17:00:05 +0900 Subject: [PATCH 42/53] WIP: demo script writing, derp, need a shell script to simplify --- doc/dev-clone-compile.md | 4 +- doc/dev-prerequisites.md | 15 +++-- ...nsus_demo.md => humming-consensus-demo.md} | 66 ++++++++++++++++++- priv/quick-admin-examples/demo-000 | 7 ++ rel/reltool.config | 1 + 5 files changed, 83 insertions(+), 10 deletions(-) rename doc/{humming_consensus_demo.md => humming-consensus-demo.md} (78%) create mode 100644 priv/quick-admin-examples/demo-000 diff --git a/doc/dev-clone-compile.md b/doc/dev-clone-compile.md index 9795bb3..3ba78e1 100644 --- a/doc/dev-clone-compile.md +++ b/doc/dev-clone-compile.md @@ -14,7 +14,9 @@ to finish. make test -At the end, the test suite should report that all tests passed. +At the end, the test suite should report that all tests passed. The +actual number of tests shown in the "All `X` tests passed" line may be +different from the example below. [... many lines omitted ...] module 'event_logger' diff --git a/doc/dev-prerequisites.md b/doc/dev-prerequisites.md index b3987ad..8fa5b7a 100644 --- a/doc/dev-prerequisites.md +++ b/doc/dev-prerequisites.md @@ -1,15 +1,18 @@ ## Machi developer environment prerequisites -1. Machi requires an OS X, FreeBSD, Linux, or - Solaris machine is a standard developer environment for C and C++ - applications. +1. Machi requires a 64-bit variant of UNIX: an OS X, FreeBSD, Linux, or + Solaris machine with a standard developer environment for C and C++ + applications (64-bit versions). 2. You'll need the `git` source management utility. -3. You'll need the Erlang/OTP 17 runtime environment. Please don't - use earlier or later versions until we have a chance to fix the - compilation warnings that versions R16B and 18 will trigger. +3. You'll need the 64-bit Erlang/OTP 17 runtime environment.
Please + don't use earlier or later versions until we have a chance to fix + the compilation warnings that versions R16B and 18 will trigger. + Also, please verify that you are not using a 32-bit Erlang/OTP + runtime package. For `git` and the Erlang runtime, please use your OS-specific package manager to install these. If your package manager doesn't -have Erlang/OTP version 17 available, then we recommend using the +have 64-bit Erlang/OTP version 17 available, then we recommend using the [precompiled packages available at Erlang Solutions](https://www.erlang-solutions.com/resources/download.html). Also, please verify that you have enough file descriptors available to diff --git a/doc/humming_consensus_demo.md b/doc/humming-consensus-demo.md similarity index 78% rename from doc/humming_consensus_demo.md rename to doc/humming-consensus-demo.md index 7d01fe7..e881de8 100644 --- a/doc/humming_consensus_demo.md +++ b/doc/humming-consensus-demo.md @@ -33,10 +33,70 @@ Run the following command: This will create a directory structure like this: - |-dev1-|... stand-alone Machi app directories - |-dev-|-dev2-|... stand-alone Machi app directories - |-dev3-|... stand-alone Machi app directories + |-dev1-|... stand-alone Machi app + subdirectories + |-dev-|-dev2-|... stand-alone Machi app + directories + |-dev3-|... stand-alone Machi app + directories +Each of the `dev/dev1`, `dev/dev2`, and `dev/dev3` are stand-alone +application instances of Machi and can be run independently of each +other on the same machine. This demo will use all three. + +The lifecycle management utilities for Machi are a bit immature, +currently. They assume that each Machi server runs on a host with a +unique hostname -- there is no flexibility built-in yet to easily run +multiple Machi instances on the same machine. To continue with the +demo, we need to use `sudo` or `su` to obtain superuser privileges to +edit the `/etc/hosts` file. 
+ +Please add the following line to `/etc/hosts`, using this command: + + sudo sh -c 'echo "127.0.0.1 machi1 machi2 machi3" >> /etc/hosts' + +Then please verify that all three new hostnames for the localhost +network interface are working correctly: + + ping -c 1 machi1 ; ping -c 1 machi2 ; ping -c 1 machi3 + +If that worked, then we're ready for the next step: starting our three +Machi app instances on this machine, then configure a single chain to +to experiment with. + +Run the following commands to start the three Machi app instances and +use the `machi ping` command to verify that all three are running. + + sh -c 'for i in 1 2 3; do ./dev/dev$i/bin/machi start; done + sh -c 'for i in 1 2 3; do ./dev/dev$i/bin/machi ping; done + +The output from the `ping` commands should be: + + pong + pong + pong + +Next, use the following to configure a single chain: + + sh -c 'for i in 1 2 3; do ./dev/dev$i/bin/machi-admin + +The results should be: + + Result: ok + Result: ok + Result: ok + +We have now created a single replica chain, called `c1`, that has +three file servers participating in the chain. Thanks to the +hostnames that we added to `/etc/hosts`, all are using the localhost +network interface. + + | App instance | Hostname | FLU name | TCP port | + | directory | | | number | + |--------------+----------+----------+----------| + | dev1 | machi1 | flu1 | 20401 | + | dev2 | machi2 | flu2 | 20402 | + | dev3 | machi3 | flu3 | 20403 | + +The log files for each application instance can be found + # Using the network partition simulator and convergence demo test code diff --git a/priv/quick-admin-examples/demo-000 b/priv/quick-admin-examples/demo-000 new file mode 100644 index 0000000..301f348 --- /dev/null +++ b/priv/quick-admin-examples/demo-000 @@ -0,0 +1,7 @@ +{host, "machi1", []}. +{host, "machi2", []}. +{host, "machi3", []}. +{flu,f1,"machi1",20401,[]}. +{flu,f2,"machi2",20402,[]}. +{flu,f3,"machi3",20403,[]}. +{chain,c1,[f1,f2,f3],[]}. 
diff --git a/rel/reltool.config b/rel/reltool.config index 33df951..eb015be 100644 --- a/rel/reltool.config +++ b/rel/reltool.config @@ -106,6 +106,7 @@ {copy, "../priv/quick-admin-examples/000", "priv/quick-admin-examples"}, {copy, "../priv/quick-admin-examples/001", "priv/quick-admin-examples"}, {copy, "../priv/quick-admin-examples/002", "priv/quick-admin-examples"}, + {copy, "../priv/quick-admin-examples/demo-000", "priv/quick-admin-examples/demo-000"}, {mkdir, "lib/basho-patches"} %% {copy, "../apps/machi/ebin/etop_txt.beam", "lib/basho-patches"} -- 2.45.2 From f433e84fab2e8ac210b996d264c120d4561b5d0f Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Thu, 25 Feb 2016 17:52:40 +0900 Subject: [PATCH 43/53] Add 'stability_time' env var for repair --- src/machi_chain_manager1.erl | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/machi_chain_manager1.erl b/src/machi_chain_manager1.erl index 15b82ab..66b0163 100644 --- a/src/machi_chain_manager1.erl +++ b/src/machi_chain_manager1.erl @@ -2569,12 +2569,13 @@ perhaps_start_repair(#ch_mgr{name=MyName, %% RepairOpts = [{repair_mode, check}, verbose], RepairFun = fun() -> do_repair(S, RepairOpts, CMode) end, LastUPI = lists:last(UPI), + StabilityTime = application:get_env(machi, stability_time, ?REPAIR_START_STABILITY_TIME), IgnoreStabilityTime_p = proplists:get_value(ignore_stability_time, S#ch_mgr.opts, false), case timer:now_diff(os:timestamp(), Start) div 1000000 of N when MyName == LastUPI andalso (IgnoreStabilityTime_p orelse - N >= ?REPAIR_START_STABILITY_TIME) -> + N >= StabilityTime) -> {WorkerPid, _Ref} = spawn_monitor(RepairFun), S#ch_mgr{repair_worker=WorkerPid, repair_start=os:timestamp(), -- 2.45.2 From 4cb166368a1b05f181c9e963d5faf96d1c4cb56b Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Thu, 25 Feb 2016 18:10:11 +0900 Subject: [PATCH 44/53] priv/humming-consensus-demo.setup.sh debugged, all appears to work --- doc/humming-consensus-demo.md | 38 
++++++++++++------------------- 1 file changed, 11 insertions(+), 27 deletions(-) diff --git a/doc/humming-consensus-demo.md b/doc/humming-consensus-demo.md index e881de8..bf141cb 100644 --- a/doc/humming-consensus-demo.md +++ b/doc/humming-consensus-demo.md @@ -52,36 +52,20 @@ Please add the following line to `/etc/hosts`, using this command: sudo sh -c 'echo "127.0.0.1 machi1 machi2 machi3" >> /etc/hosts' -Then please verify that all three new hostnames for the localhost -network interface are working correctly: +Next, we will use a shell script to finish setting up our cluster. It +will do the following for us: - ping -c 1 machi1 ; ping -c 1 machi2 ; ping -c 1 machi3 +* Verify that the new line that was added to `/etc/hosts` is correct. +* Modify the `etc/app.config` files so that the Humming Consensus + chain manager's actions are logged to the `log/console.log` file. +* Start the three application instances. +* Verify that the three instances are running correctly. +* Configure a single chain, with one FLU server per application + instance. -If that worked, then we're ready for the next step: starting our three -Machi app instances on this machine, then configure a single chain to -to experiment with. +Please run this script using this command: -Run the following commands to start the three Machi app instances and -use the `machi ping` command to verify that all three are running. - - sh -c 'for i in 1 2 3; do ./dev/dev$i/bin/machi start; done - sh -c 'for i in 1 2 3; do ./dev/dev$i/bin/machi ping; done - -The output from the `ping` commands should be: - - pong - pong - pong - -Next, use the following to configure a single chain: - - sh -c 'for i in 1 2 3; do ./dev/dev$i/bin/machi-admin - -The results should be: - - Result: ok - Result: ok - Result: ok + ./priv/humming-consensus-demo.setup.sh We have now created a single replica chain, called `c1`, that has three file servers participating in the chain.
Thanks to the -- 2.45.2 From 184a54ebbd5ae4ef46bec3a2d506757a2ae94872 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 26 Feb 2016 15:46:17 +0900 Subject: [PATCH 45/53] Change ?HYOOGE blob size from 1GB -> 75MB to reduce RAM required for eunit tests --- test/machi_file_proxy_test.erl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/machi_file_proxy_test.erl b/test/machi_file_proxy_test.erl index 605abe7..10e16bf 100644 --- a/test/machi_file_proxy_test.erl +++ b/test/machi_file_proxy_test.erl @@ -38,7 +38,7 @@ clean_up_data_dir(DataDir) -> -ifndef(PULSE). -define(TESTDIR, "./t"). --define(HYOOGE, 1 * 1024 * 1024 * 1024). % 1 long GB +-define(HYOOGE, 75 * 1024 * 1024). % 75 MBytes random_binary_single() -> %% OK, I guess it's not that random... -- 2.45.2 From fc46cd1b25f0673b7cd7528577e472547b2eeb72 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Fri, 26 Feb 2016 17:32:51 +0900 Subject: [PATCH 46/53] WIP: Vagrant --- doc/dev-prerequisites.md | 17 ++++ doc/humming-consensus-demo.md | 9 ++- priv/humming-consensus-demo.setup.sh | 56 +++++++++++++ .../Vagrantfile | 81 +++++++++++++++++++ 4 files changed, 161 insertions(+), 2 deletions(-) create mode 100755 priv/humming-consensus-demo.setup.sh create mode 100644 priv/humming-consensus-demo.vagrant/Vagrantfile diff --git a/doc/dev-prerequisites.md b/doc/dev-prerequisites.md index 8fa5b7a..66afd41 100644 --- a/doc/dev-prerequisites.md +++ b/doc/dev-prerequisites.md @@ -19,3 +19,20 @@ Also, please verify that you have enough file descriptors available to your user processes. The output of `ulimit -n` should report at least 4,000 file descriptors available. If your limit is lower (a frequent problem for OS X users), please increase it to at least 4,000. + +# Using Vagrant to set up a developer environment for Machi + +The Machi source directory contains a `Vagrantfile` for creating an +Ubuntu Linux-based virtual machine for compiling and running Machi.
+This file is in the +[$SRC_TOP/priv/humming-consensus-demo.vagrant](../priv/humming-consensus-demo.vagrant) +directory. + +If used as-is, the virtual machine specification is modest. + +* 1 virtual CPU +* 512MB virtual memory +* 768MB swap space +* 79GB sparse virtual disk image. After installing prerequisites and + compiling Machi, the root file system uses approximately 2.7 GBytes. + diff --git a/doc/humming-consensus-demo.md b/doc/humming-consensus-demo.md index bf141cb..198bc55 100644 --- a/doc/humming-consensus-demo.md +++ b/doc/humming-consensus-demo.md @@ -13,6 +13,11 @@ Please refer to the [Machi development environment prerequisites doc](./dev-prerequisites.md) for Machi developer environment prerequisites. +If you do not have an Erlang/OTP runtime system available, but you do +have [the Vagrant virtual machine](https://www.vagrantup.com/) manager +available, then please refer to the instructions in the prerequisites +doc for using Vagrant. + ## Clone and compile the code @@ -72,8 +77,8 @@ three file servers participating in the chain. Thanks to the hostnames that we added to `/etc/hosts`, all are using the localhost network interface. - | App instance | Hostname | FLU name | TCP port | - | directory | | | number | + | App instance | Pseudo | FLU name | TCP port | + | directory | Hostname | | number | |--------------+----------+----------+----------| | dev1 | machi1 | flu1 | 20401 | | dev2 | machi2 | flu2 | 20402 | diff --git a/priv/humming-consensus-demo.setup.sh b/priv/humming-consensus-demo.setup.sh new file mode 100755 index 0000000..dc57731 --- /dev/null +++ b/priv/humming-consensus-demo.setup.sh @@ -0,0 +1,56 @@ +#!/bin/sh + +echo "Step: Verify that the required entries in /etc/hosts are present" +for i in 1 2 3; do + grep machi$i /etc/hosts | egrep -s '^127.0.0.1' > /dev/null 2>&1 + if [ $? -ne 0 ]; then + echo "" + echo "'grep -s machi$i' failed. Aborting, sorry." + exit 1 + fi + ping -c 1 machi$i > /dev/null 2>&1 + if [ $? 
-ne 0 ]; then + echo "" + echo "Ping attempt on host machi$i failed. Aborting." + echo "" + ping -c 1 machi$i + exit 1 + fi +done + +echo "Step: add a verbose logging option to app.config" +for i in 1 2 3; do + ed ./dev/dev$i/etc/app.config <<EOF > /dev/null 2>&1 +/verbose_confirm +a +{chain_manager_opts, [{private_write_verbose_confirm,true}]}, +{stability_time, 1}, +. +w +q +EOF +done + +echo "Step: start three Machi application instances" +for i in 1 2 3; do + ./dev/dev$i/bin/machi start + ./dev/dev$i/bin/machi ping + if [ $? -ne 0 ]; then + echo "Sorry, a 'ping' check for instance dev$i failed. Aborting." + exit 1 + fi +done + +echo "Step: configure one chain to start a Humming Consensus group with three members" + +# Note: $CWD of each Machi proc is two levels below the source code root dir. +LIFECYCLE000=../../priv/quick-admin-examples/demo-000 +for i in 3 2 1; do + ./dev/dev$i/bin/machi-admin quick-admin-apply $LIFECYCLE000 machi$i + if [ $? -ne 0 ]; then + echo "Sorry, 'machi-admin quick-admin-apply' failed on machi$i. Aborting." + exit 1 + fi +done + +exit 0 diff --git a/priv/humming-consensus-demo.vagrant/Vagrantfile b/priv/humming-consensus-demo.vagrant/Vagrantfile new file mode 100644 index 0000000..8fe04a3 --- /dev/null +++ b/priv/humming-consensus-demo.vagrant/Vagrantfile @@ -0,0 +1,81 @@ +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# All Vagrant configuration is done below. The "2" in Vagrant.configure +# configures the configuration version (we support older styles for +# backwards compatibility). Please don't change it unless you know what +# you're doing. +Vagrant.configure(2) do |config| + # The most common configuration options are documented and commented below. + # For a complete reference, please see the online documentation at + # https://docs.vagrantup.com. + + # Every Vagrant development environment requires a box. You can search for + # boxes at https://atlas.hashicorp.com/search.
+ # If this Vagrant box has not been downloaded before (e.g. using "vagrant box add"), + # then Vagrant will automatically download the VM image from HashiCorp. + config.vm.box = "hashicorp/precise64" + + # Disable automatic box update checking. If you disable this, then + # boxes will only be checked for updates when the user runs + # `vagrant box outdated`. This is not recommended. + # config.vm.box_check_update = false + + # Create a forwarded port mapping which allows access to a specific port + # within the machine from a port on the host machine. In the example below, + # accessing "localhost:8080" will access port 80 on the guest machine. + # config.vm.network "forwarded_port", guest: 80, host: 8080 + + # Create a private network, which allows host-only access to the machine + # using a specific IP. + # config.vm.network "private_network", ip: "192.168.33.10" + + # Create a public network, which generally matched to bridged network. + # Bridged networks make the machine appear as another physical device on + # your network. + # config.vm.network "public_network" + + # Share an additional folder to the guest VM. The first argument is + # the path on the host to the actual folder. The second argument is + # the path on the guest to mount the folder. And the optional third + # argument is a set of non-required options. + # config.vm.synced_folder "../data", "/vagrant_data" + + # Provider-specific configuration so you can fine-tune various + # backing providers for Vagrant. These expose provider-specific options. + # Example for VirtualBox: + # + config.vm.provider "virtualbox" do |vb| + # Display the VirtualBox GUI when booting the machine + # vb.gui = true + + # Customize the amount of memory on the VM: + vb.memory = "512" + end + # + # View the documentation for the provider you are using for more + # information on available options. + + # Define a Vagrant Push strategy for pushing to Atlas. 
Other push strategies + # such as FTP and Heroku are also available. See the documentation at + # https://docs.vagrantup.com/v2/push/atlas.html for more information. + # config.push.define "atlas" do |push| + # push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME" + # end + + # Enable provisioning with a shell script. Additional provisioners such as + # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the + # documentation for more information about their specific syntax and use. + config.vm.provision "shell", inline: <<-SHELL + sudo apt-get install -y git + git clone https://github.com/slfritchie/slf-configurator.git + chown -R vagrant ./slf-configurator + (cd slf-configurator ; sudo sh -x ./ALL.sh) + echo 'export PATH=${PATH}:/usr/local/erlang/17.5/bin' >> ~vagrant/.bashrc + export PATH=${PATH}:/usr/local/erlang/17.5/bin + + git clone https://github.com/basho/machi.git + (cd machi ; git checkout master ; make test 2>&1 | tee RUNLOG.0) + chown -R vagrant ./machi + SHELL +end -- 2.45.2 From 84f522f865df75b238ca40d4d9862ed502093af1 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Sat, 27 Feb 2016 00:05:29 +0900 Subject: [PATCH 47/53] WIP: Vagrant --- doc/humming-consensus-demo.md | 4 +++- priv/humming-consensus-demo.vagrant/Vagrantfile | 16 ++++++++++++++-- 2 files changed, 17 insertions(+), 3 deletions(-) diff --git a/doc/humming-consensus-demo.md b/doc/humming-consensus-demo.md index 198bc55..01e10d1 100644 --- a/doc/humming-consensus-demo.md +++ b/doc/humming-consensus-demo.md @@ -84,7 +84,9 @@ network interface. | dev2 | machi2 | flu2 | 20402 | | dev3 | machi3 | flu3 | 20403 | -The log files for each application instance can be found +The log files for each application instance can be found in the +`./dev/devN/log/console.log` file, where the `N` is the instance +number: 1, 2, or 3. 
# Using the network partition simulator and convergence demo test code diff --git a/priv/humming-consensus-demo.vagrant/Vagrantfile b/priv/humming-consensus-demo.vagrant/Vagrantfile index 8fe04a3..187341b 100644 --- a/priv/humming-consensus-demo.vagrant/Vagrantfile +++ b/priv/humming-consensus-demo.vagrant/Vagrantfile @@ -15,6 +15,11 @@ Vagrant.configure(2) do |config| # If this Vagrant box has not been downloaded before (e.g. using "vagrant box add"), # then Vagrant will automatically download the VM image from HashiCorp. config.vm.box = "hashicorp/precise64" + # If using a FreeBSD box, Bash may not be installed. + # Use the config.ssh.shell setting to specify an alternate shell. + # Note, however, that any code in the 'config.vm.provision' section + # would then have to use this shell's syntax! + # config.ssh.shell = "/bin/csh -l" # Disable automatic box update checking. If you disable this, then # boxes will only be checked for updates when the user runs @@ -67,15 +72,22 @@ Vagrant.configure(2) do |config| # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the # documentation for more information about their specific syntax and use. 
config.vm.provision "shell", inline: <<-SHELL - sudo apt-get install -y git + # Install prerequisites + # Support here for FreeBSD is experimental + apt-get update ; sudo apt-get install -y git sudo rsync ; # Ubuntu Linux + env ASSUME_ALWAYS_YES=yes pkg install -f git sudo rsync ; # FreeBSD 10 + + # Install dependent packages, using slf-configurator git clone https://github.com/slfritchie/slf-configurator.git chown -R vagrant ./slf-configurator (cd slf-configurator ; sudo sh -x ./ALL.sh) echo 'export PATH=${PATH}:/usr/local/erlang/17.5/bin' >> ~vagrant/.bashrc export PATH=${PATH}:/usr/local/erlang/17.5/bin + ## echo 'set path = ( $path /usr/local/erlang/17.5/bin )' >> ~vagrant/.cshrc + ## setenv PATH /usr/local/erlang/17.5/bin:$PATH git clone https://github.com/basho/machi.git - (cd machi ; git checkout master ; make test 2>&1 | tee RUNLOG.0) + (cd machi ; git checkout master ; make test ) chown -R vagrant ./machi SHELL end -- 2.45.2 From 16153a5d31bb5d6bee6ec6ea5092bc69e820f209 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Sat, 27 Feb 2016 01:56:16 +0900 Subject: [PATCH 48/53] Fix deps building problem, silly --- priv/humming-consensus-demo.vagrant/Vagrantfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/priv/humming-consensus-demo.vagrant/Vagrantfile b/priv/humming-consensus-demo.vagrant/Vagrantfile index 187341b..ce0474d 100644 --- a/priv/humming-consensus-demo.vagrant/Vagrantfile +++ b/priv/humming-consensus-demo.vagrant/Vagrantfile @@ -87,7 +87,7 @@ Vagrant.configure(2) do |config| ## setenv PATH /usr/local/erlang/17.5/bin:$PATH git clone https://github.com/basho/machi.git - (cd machi ; git checkout master ; make test ) + (cd machi ; git checkout master ; make && make test ) chown -R vagrant ./machi SHELL end -- 2.45.2 From 4e5c16f5e2d3d8865a182976d4cb6f7d754f44d4 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 9 Mar 2016 10:30:23 -0800 Subject: [PATCH 49/53] WIP --- doc/humming-consensus-demo.md | 57
+++++++++++++++++++++++++++++++++++ 1 file changed, 57 insertions(+) diff --git a/doc/humming-consensus-demo.md b/doc/humming-consensus-demo.md index 01e10d1..d00637c 100644 --- a/doc/humming-consensus-demo.md +++ b/doc/humming-consensus-demo.md @@ -72,6 +72,20 @@ Please run this script using this command: ./priv/humming-consensus-demo.setup.sh +If the output looks like this (and exits with status zero), then the +script was successful. + + Step: Verify that the required entries in /etc/hosts are present + Step: add a verbose logging option to app.config + Step: start three Machi application instances + pong + pong + pong + Step: configure one chain to start a Humming Consensus group with three members + Result: ok + Result: ok + Result: ok + We have now created a single replica chain, called `c1`, that has three file servers participating in the chain. Thanks to the hostnames that we added to `/etc/hosts`, all are using the localhost @@ -88,6 +102,49 @@ The log files for each application instance can be found in the `./dev/devN/log/console.log` file, where the `N` is the instance number: 1, 2, or 3. +## Understanding the chain manager's log file output + +After running the `./priv/humming-consensus-demo.setup.sh` script, +let's look at the last few lines of the `./dev/dev1/log/console.log` +log file for Erlang VM process #1.
+ + 2016-03-09 10:16:35.676 [info] <0.105.0>@machi_lifecycle_mgr:process_pending_flu:422 Started FLU f1 with supervisor pid <0.128.0> + 2016-03-09 10:16:35.676 [info] <0.105.0>@machi_lifecycle_mgr:move_to_flu_config:540 Creating FLU config file f1 + 2016-03-09 10:16:35.790 [info] <0.105.0>@machi_lifecycle_mgr:bootstrap_chain2:312 Configured chain c1 via FLU f1 to mode=ap_mode all=[f1,f2,f3] witnesses=[] + 2016-03-09 10:16:35.790 [info] <0.105.0>@machi_lifecycle_mgr:move_to_chain_config:546 Creating chain config file c1 + 2016-03-09 10:16:44.139 [info] <0.132.0> CONFIRM epoch 1141 <<155,42,7,221>> upi [] rep [] auth f1 by f1 + 2016-03-09 10:16:44.271 [info] <0.132.0> CONFIRM epoch 1148 <<57,213,154,16>> upi [f1] rep [] auth f1 by f1 + 2016-03-09 10:16:44.864 [info] <0.132.0> CONFIRM epoch 1151 <<239,29,39,70>> upi [f1] rep [f3] auth f1 by f1 + 2016-03-09 10:16:45.235 [info] <0.132.0> CONFIRM epoch 1152 <<173,17,66,225>> upi [f2] rep [f1,f3] auth f2 by f1 + 2016-03-09 10:16:47.343 [info] <0.132.0> CONFIRM epoch 1154 <<154,231,224,149>> upi [f2,f1,f3] rep [] auth f2 by f1 + +Let's pick apart some of these lines. + +* `Started FLU f1 with supervisor pid <0.128.0>` ; This VM, #1, + started a FLU (Machi data server) with the name `f1`. In the Erlang + process supervisor hierarchy, the process ID of the top supervisor + is `<0.128.0>`. +* `Configured chain c1 via FLU f1 to mode=ap_mode all=[f1,f2,f3] witnesses=[]` + A bootstrap configuration for a chain named `c1` has been created. + * The FLUs/data servers that are eligible for participation in the + chain have names `f1`, `f2`, and `f3`. + * The chain will operate in eventual consistency mode (`ap_mode`) + * The witness server list is empty. Witness servers are never used + in eventual consistency mode. +* `CONFIRM epoch 1141 <<155,42,7,221>> upi [] rep [] auth f1 by f1` + * All participants in epoch 1141 are unanimous in adopting epoch + 1141's projection. 
All active membership lists are empty, so + there is no functional chain replication yet, at least as far as + server `f1` knows + * The epoch's abbreviated checksum is `<<155,42,7,221>>`. + * The UPI list, i.e. the replicas whose data is 100% in sync is + `[]`, the empty list. (UPI = Update Propagation Invariant) + * The list of servers that are under data repair (`rep`) is also + empty, `[]`. + * This projection was authored by server `f1`. + * The log message was generated by server `f1`. + + # Using the network partition simulator and convergence demo test code -- 2.45.2 From cd166361aa61b156fbe344dea134ecc5aa67bfc6 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 9 Mar 2016 10:48:00 -0800 Subject: [PATCH 50/53] WIP --- doc/humming-consensus-demo.md | 53 +++++++++++++++++++++++++++++++++-- 1 file changed, 50 insertions(+), 3 deletions(-) diff --git a/doc/humming-consensus-demo.md b/doc/humming-consensus-demo.md index d00637c..5ef8e8d 100644 --- a/doc/humming-consensus-demo.md +++ b/doc/humming-consensus-demo.md @@ -118,14 +118,18 @@ log file for Erlang VM process #1. 2016-03-09 10:16:45.235 [info] <0.132.0> CONFIRM epoch 1152 <<173,17,66,225>> upi [f2] rep [f1,f3] auth f2 by f1 2016-03-09 10:16:47.343 [info] <0.132.0> CONFIRM epoch 1154 <<154,231,224,149>> upi [f2,f1,f3] rep [] auth f2 by f1 -Let's pick apart some of these lines. +Let's pick apart some of these lines. We have started all three +servers at about the same time. We see some race conditions happen, +and some jostling and readjustment happens pretty quickly in the first +few seconds. -* `Started FLU f1 with supervisor pid <0.128.0>` ; This VM, #1, +* `Started FLU f1 with supervisor pid <0.128.0>` + * This VM, #1, started a FLU (Machi data server) with the name `f1`. In the Erlang process supervisor hierarchy, the process ID of the top supervisor is `<0.128.0>`. 
* `Configured chain c1 via FLU f1 to mode=ap_mode all=[f1,f2,f3] witnesses=[]` - A bootstrap configuration for a chain named `c1` has been created. + * A bootstrap configuration for a chain named `c1` has been created. * The FLUs/data servers that are eligible for participation in the chain have names `f1`, `f2`, and `f3`. * The chain will operate in eventual consistency mode (`ap_mode`) @@ -143,6 +147,49 @@ Let's pick apart some of these lines. empty, `[]`. * This projection was authored by server `f1`. * The log message was generated by server `f1`. +* `CONFIRM epoch 1148 <<57,213,154,16>> upi [f1] rep [] auth f1 by f1` + * Now the server `f1` has created a chain of length 1, `[f1]`. + * Chain repair/file re-sync is not required when the UPI server list + changes from length 0 -> 1. +* `CONFIRM epoch 1151 <<239,29,39,70>> upi [f1] rep [f3] auth f1 by f1` + * Server `f1` has noticed that server `f3` is alive. Apparently it + has not yet noticed that server `f2` is also running. + * Server `f3` is in the repair list. +* `CONFIRM epoch 1152 <<173,17,66,225>> upi [f2] rep [f1,f3] auth f2 by f1` + * Server `f2` is apparently now aware that all three servers are running. + * The previous configuration used by `f2` was `upi [f2]`, i.e., `f2` + was running in a chain of one. `f2` noticed that `f1` and `f3` + were now available and has started adding them to the chain. + * All new servers are always added to the tail of the chain. + * In eventual consistency mode, a UPI change like this is OK. + * When performing a read, a client must read from both tail of the + UPI list and also from all repairing servers. + * When performing a write, the client writes to both the UPI + server list and also the repairing list, in that order. + * Server `f2` will trigger file repair/re-sync shortly. + * The waiting time for starting repair has been configured to be + extremely short, 1 second. The default waiting time is 10 + seconds, in case Humming Consensus remains unstable. 
+* `CONFIRM epoch 1154 <<154,231,224,149>> upi [f2,f1,f3] rep [] auth f2 by f1` + * File repair/re-sync has finished. All file data on all servers + are now in sync. + * The UPI/in-sync part of the chain is now `[f2,f1,f3]`, and there + are no servers under repair. + +## Let's create some failures + +Here are some suggestions for creating failures. + +* Use the `./dev/devN/bin/machi stop` and ``./dev/devN/bin/machi start` + commands to stop & start VM #`N`. +* Stop a VM abnormally by using `kill`. The OS process name to look + for is `beam.smp`. +* Suspend and resume a VM, using the `SIGSTOP` and `SIGCONT` signals. + * E.g. `kill -STOP 9823` and `kill -CONT 9823` + +The network partition simulator is not (yet) available when running +Machi in this mode. Please see the next section for instructions on +how to use the partition simulator. -- 2.45.2 From 96c46ec5aa0610c566814c591a1c82022852fa09 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 9 Mar 2016 10:53:12 -0800 Subject: [PATCH 51/53] Add explanation for the 'CONFIRM' log messages --- doc/humming-consensus-demo.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/doc/humming-consensus-demo.md b/doc/humming-consensus-demo.md index 5ef8e8d..ffed8bb 100644 --- a/doc/humming-consensus-demo.md +++ b/doc/humming-consensus-demo.md @@ -160,12 +160,15 @@ few seconds. * The previous configuration used by `f2` was `upi [f2]`, i.e., `f2` was running in a chain of one. `f2` noticed that `f1` and `f3` were now available and has started adding them to the chain. - * All new servers are always added to the tail of the chain. + * All new servers are always added to the tail of the chain in the + repair list. * In eventual consistency mode, a UPI change like this is OK. * When performing a read, a client must read from both tail of the UPI list and also from all repairing servers. * When performing a write, the client writes to both the UPI server list and also the repairing list, in that order.
+ * I.e., the client concatenates both lists, + `UPI ++ Repairing`, for its chain configuration for the write. * Server `f2` will trigger file repair/re-sync shortly. * The waiting time for starting repair has been configured to be extremely short, 1 second. The default waiting time is 10 @@ -180,7 +183,7 @@ few seconds. Here are some suggestions for creating failures. -* Use the `./dev/devN/bin/machi stop` and ``./dev/devN/bin/machi start` +* Use the `./dev/devN/bin/machi stop` and `./dev/devN/bin/machi start` commands to stop & start VM #`N`. * Stop a VM abnormally by using `kill`. The OS process name to look for is `beam.smp`. -- 2.45.2 From 6b000f6e7c32cfd4730af972f4aa8e23013a1950 Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 9 Mar 2016 11:14:43 -0800 Subject: [PATCH 52/53] Ignore +rel/vars/dev*vars.config --- .gitignore | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitignore b/.gitignore index c6a0bf2..063a61d 100644 --- a/.gitignore +++ b/.gitignore @@ -22,6 +22,7 @@ include/machi_pb.hrl # Release packaging rel/machi +rel/vars/dev*vars.config # Misc Scott cruft *.patch -- 2.45.2 From fa71a918b882a2f29acbef9663e70b88cfde2e6f Mon Sep 17 00:00:00 2001 From: Scott Lystig Fritchie Date: Wed, 9 Mar 2016 12:12:34 -0800 Subject: [PATCH 53/53] README and FAQ updates for mid-March 2016 --- FAQ.md | 13 ++-- README.md | 110 ++++++++++++++++++++-------------- doc/humming-consensus-demo.md | 2 +- 3 files changed, 74 insertions(+), 51 deletions(-) diff --git a/FAQ.md b/FAQ.md index 6d43e8f..ee563c9 100644 --- a/FAQ.md +++ b/FAQ.md @@ -46,13 +46,13 @@ ### 1.1. What is Machi? -Very briefly, Machi is a very simple append-only file store. +Very briefly, Machi is a very simple append-only blob/file store. Machi is "dumber" than many other file stores (i.e., lacking many features found in other file stores) such as HadoopFS or a simple NFS or CIFS file server. 
-However, Machi is a distributed file store, which makes it different +However, Machi is a distributed blob/file store, which makes it different (and, in some ways, more complicated) than a simple NFS or CIFS file server. @@ -142,7 +142,8 @@ consistency mode during and after network partitions are: due to Machi's restrictions on file naming and file offset assignment. Both file names and file offsets are always chosen by Machi servers according to rules which guarantee safe - mergeability. + mergeability. Server-assigned names are a characteristic of a + "blob store". ### 1.5. What is Machi like when operating in "strongly consistent" mode? @@ -172,10 +173,10 @@ for more details. ### 1.6. What does Machi's API look like? The Machi API only contains a handful of API operations. The function -arguments shown below use Erlang-style type annotations. +arguments shown below (in simplified form) use Erlang-style type annotations. - append_chunk(Prefix:binary(), Chunk:binary()). - append_chunk_extra(Prefix:binary(), Chunk:binary(), ExtraSpace:non_neg_integer()). + append_chunk(Prefix:binary(), Chunk:binary(), CheckSum:binary()). + append_chunk_extra(Prefix:binary(), Chunk:binary(), CheckSum:binary(), ExtraSpace:non_neg_integer()). read_chunk(File:binary(), Offset:non_neg_integer(), Size:non_neg_integer()). checksum_list(File:binary()). diff --git a/README.md b/README.md index 6e86eb9..25b9fff 100644 --- a/README.md +++ b/README.md @@ -4,16 +4,16 @@ Outline -1. [Why another file store?](#sec1) +1. [Why another blob/file store?](#sec1) 2. [Where to learn more about Machi](#sec2) 3. [Development status summary](#sec3) 4. [Contributing to Machi's development](#sec4) -## 1. Why another file store? +## 1. Why another blob/file store? Our goal is a robust & reliable, distributed, highly available, large -file store. Such stores already exist, both in the open source world +file and blob store. Such stores already exist, both in the open source world and in the commercial world.
Why reinvent the wheel? We believe there are three reasons, ordered by decreasing rarity. @@ -25,9 +25,8 @@ there are three reasons, ordered by decreasing rarity. 3. We want to manage file replicas in a way that's provably correct and also easy to test. -Of all the file stores in the open source & commercial worlds, only -criteria #3 is a viable option. Or so we hope. Or we just don't -care, and if data gets lost or corrupted, then ... so be it. +Criterion #3 is difficult to find in the open source world but perhaps +not impossible. If we have app use cases where availability is more important than consistency, then systems that meet criteria #2 are also rare. @@ -39,12 +38,13 @@ file data and attempts best-effort file reads? If we really do care about data loss and/or data corruption, then we really want both #3 and #1. Unfortunately, systems that meet -criteria #1 are _very rare_. +criteria #1 are _very rare_. (Nonexistent?) Why? This is 2015. We have decades of research that shows that computer hardware can (and indeed does) corrupt data at nearly every level of the modern client/server application stack. Systems with end-to-end data corruption detection should be ubiquitous today. Alas, they are not. + Machi is an effort to change the deplorable state of the world, one Erlang function at a time. @@ -70,46 +70,62 @@ including the network partition simulator. ## 3. Development status summary -Mid-December 2015: work is underway. +Mid-March 2016: The Machi development team has been downsized in +recent months, and the pace of development has slowed. Here is a +summary of the status of Machi's major components. -* In progress: - * Code refactoring: metadata management using - [ELevelDB](https://github.com/basho/eleveldb) - * File repair using file-centric, Merkle-style hash tree.
- * Server-side socket handling is now performed by - [ranch](https://github.com/ninenines/ranch) - * QuickCheck tests for file repair correctness - * 2015-12-15: The EUnit test `machi_ap_repair_eqc` is - currently failing occasionally because it (correctly) detects - double-write errors. Double-write errors will be eliminated - when the ELevelDB integration work is complete. - * The `make stage` and `make release` commands can be used to - create a primitive "package". Use `./rel/machi/bin/machi console` - to start the Machi app in interactive mode. Substitute the word - `start` instead of console to start Machi in background/daemon - mode. The `./rel/machi/bin/machi` command without any arguments - will give a short usage summary. - * Chain Replication management using the Humming Consensus - algorithm to manage chain state is stable. - * ... with the caveat that it runs very well in a very harsh - and unforgiving network partition simulator but has not run - much yet in the real world. - * All Machi client/server protocols are based on - [Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview). - * The current specification for Machi's protocols can be found at - [https://github.com/basho/machi/blob/master/src/machi.proto](https://github.com/basho/machi/blob/master/src/machi.proto). - * The Machi PB protocol is not yet stable. Expect change! - * The Erlang language client implementation of the high-level - protocol flavor is brittle (e.g., little error handling yet). +* Humming Consensus and the chain manager + * No new safety bugs have been found by model-checking tests. + * A new document, + [Hands-on experiments with Machi and Humming Consensus](doc/humming-consensus-demo.md) + is now available. It is a tutorial for setting up a 3 virtual + machine Machi cluster and how to demonstrate the chain manager's + reactions to server stops & starts, crashes & restarts, and pauses + (simulated by `SIGSTOP` and `SIGCONT`).
+ * The chain manager can still make suboptimal-but-safe choices for + chain transitions when a server hangs/pauses temporarily. + * Recent chain manager changes have made the instability window + much shorter when the slow/paused server resumes execution. + * Scott believes that a modest change to the chain manager's + calculation of a new projection can make flapping in this (and + many other) cases less likely. Currently, the new local + projection is calculated using only local state (i.e., the chain + manager's internal state + the fitness server's state). + However, if the "latest" projection read from the public + projection stores were also input to the new projection + calculation function, then many obviously bad projections can be + avoided without needing rounds of Humming Consensus to + demonstrate that a bad projection is bad. -If you would like to run the Humming Consensus code (with or without -the network partition simulator) as described in the RICON 2015 -presentation, please see the -[Humming Consensus demo doc.](./doc/humming_consensus_demo.md). +* FLU/data server process + * All known correctness bugs have been fixed. + * Performance has not yet been measured. Performance measurement + and enhancements are scheduled to start in the middle of March 2016. + (This will include a much-needed update to the `basho_bench` driver.) -If you'd like to work on a protocol such as Thrift, UBF, -msgpack over UDP, or some other protocol, let us know by -[opening an issue to discuss it](./issues/new). +* Access protocols and client libraries + * The protocol used by both external clients and internally (instead + of using Erlang's native message passing mechanisms) is based on + Protocol Buffers. + * [Machi PB protocol specification: ./src/machi.proto](./src/machi.proto) + * At the moment, the PB specification contains two protocols.
+ Sometime in the near future, the spec will be split to separate + the external client API (the "high" protocol) from the internal + communication API (the "low" protocol). + +* Recent conference talks about Machi + * Erlang Factory San Francisco 2016 + [the slides and video recording](http://www.erlang-factory.com/sfbay2016/scott-lystig-fritchie) + will be available a few weeks after the conference ends on March + 11, 2016. + * Ricon 2015 + * [The slides](http://ricon.io/archive/2015/slides/Scott_Fritchie_Ricon_2015.pdf) + * and the [video recording](https://www.youtube.com/watch?v=yR5kHL1bu1Q&index=13&list=PL9Jh2HsAWHxIc7Tt2M6xez_TOP21GBH6M) + are now available. + * If you would like to run the Humming Consensus code (with or without + the network partition simulator) as described in the RICON 2015 + presentation, please see the + [Humming Consensus demo doc](./doc/humming_consensus_demo.md). ## 4. Contributing to Machi's development @@ -150,3 +166,9 @@ Machi has a dependency on the supports UNIX/Linux OSes and 64-bit versions of Erlang/OTP only; we apologize to Windows-based and 32-bit-based Erlang developers for this restriction. + +### 4.3 New protocols and features + +If you'd like to work on a protocol such as Thrift, UBF, +msgpack over UDP, or some other protocol, let us know by +[opening an issue to discuss it](./issues/new). diff --git a/doc/humming-consensus-demo.md b/doc/humming-consensus-demo.md index ffed8bb..f92858f 100644 --- a/doc/humming-consensus-demo.md +++ b/doc/humming-consensus-demo.md @@ -220,7 +220,7 @@ To help interpret the output of the test, please skip ahead to the If you don't have `git` and/or the Erlang 17 runtime system available on your OS X, FreeBSD, Linux, or Solaris machine, please take a look -at the [Prerequistes section](#prerequisites) first. When you have +at the [Prerequisites section](#prerequisites) first. When you have installed the prerequisite software, please return back here. ## Clone and compile the code -- 2.45.2
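The client-side rule explained in patches 50 and 51 above can be sketched in a few lines: in eventual consistency mode, a write goes to the UPI servers followed by the repairing servers (the `UPI ++ Repairing` concatenation), and a read must consult the tail of the UPI list plus all repairing servers. The following Python sketch is illustrative only; the function names are hypothetical and are not part of the Machi API.

```python
def write_order(upi, repairing):
    # A client writes to the in-sync (UPI) servers first, then to
    # every server still under repair: the "UPI ++ Repairing"
    # concatenation described in the demo doc.
    return list(upi) + list(repairing)

def read_sources(upi, repairing):
    # A client reads from the tail of the UPI list plus all
    # repairing servers (eventual consistency mode).
    if not upi:
        return list(repairing)
    return [upi[-1]] + list(repairing)

# Epoch 1152 from the demo log: upi [f2], rep [f1,f3]
print(write_order(["f2"], ["f1", "f3"]))   # ['f2', 'f1', 'f3']
print(read_sources(["f2"], ["f1", "f3"]))  # ['f2', 'f1', 'f3']
```

Once repair finishes (epoch 1154, `upi [f2,f1,f3] rep []`), the repairing list is empty and both functions collapse to the plain chain tail/UPI behavior.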
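The `CONFIRM` log lines dissected in patches 49-51 follow a regular shape: epoch number, abbreviated checksum, UPI list, repair list, authoring server, and reporting server. A small, hypothetical Python parser for that shape, assuming only the line format quoted in the doc and no real Machi tooling, might look like:

```python
import re

# Matches lines such as:
#   CONFIRM epoch 1152 <<173,17,66,225>> upi [f2] rep [f1,f3] auth f2 by f1
CONFIRM_RE = re.compile(
    r"CONFIRM epoch (?P<epoch>\d+) <<(?P<csum>[\d,]+)>> "
    r"upi \[(?P<upi>[^\]]*)\] rep \[(?P<rep>[^\]]*)\] "
    r"auth (?P<auth>\S+) by (?P<by>\S+)")

def parse_confirm(line):
    # Return the CONFIRM fields as a dict, or None for other log lines.
    m = CONFIRM_RE.search(line)
    if m is None:
        return None
    split = lambda s: [x for x in s.split(",") if x]  # "" -> []
    return {"epoch": int(m.group("epoch")),
            "csum": m.group("csum"),
            "upi": split(m.group("upi")),
            "rep": split(m.group("rep")),
            "auth": m.group("auth"),
            "by": m.group("by")}

line = ("2016-03-09 10:16:45.235 [info] <0.132.0> CONFIRM epoch 1152 "
        "<<173,17,66,225>> upi [f2] rep [f1,f3] auth f2 by f1")
print(parse_confirm(line))
```

A parser like this makes it easy to watch a `console.log` file and chart how the UPI and repair lists evolve during the failure experiments suggested in patch 50.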