Compare commits


No commits in common. "master" and "slf/travis-ci2" have entirely different histories.

97 changed files with 4222 additions and 10367 deletions

.gitignore

@@ -2,32 +2,20 @@ prototype/chain-manager/patch.*
.eqc-info .eqc-info
.eunit .eunit
deps deps
dev *.plt
erl_crash.dump erl_crash.dump
eqc
.concrete/DEV_MODE .concrete/DEV_MODE
.rebar .rebar
edoc edoc
# Dialyzer stuff
.dialyzer-last-run.txt
.ebin.native
.local_dialyzer_plt
dialyzer_unhandled_warnings
dialyzer_warnings
*.plt
# PB artifacts for Erlang # PB artifacts for Erlang
include/machi_pb.hrl include/machi_pb.hrl
# Release packaging # Release packaging
rel/machi rel/machi
rel/vars/dev*vars.config
# Misc Scott cruft # Misc Scott cruft
*.patch *.patch
current_counterexample.eqc current_counterexample.eqc
foo* foo*
RUNLOG*
typescript* typescript*
*.swp

.travis.yml

@@ -4,4 +4,3 @@ notifications:
script: "priv/test-for-gh-pr.sh" script: "priv/test-for-gh-pr.sh"
otp_release: otp_release:
- 17.5 - 17.5
## No, Dialyzer is too different between 17 & 18: - 18.1

FAQ.md

@@ -11,14 +11,14 @@
+ [1 Questions about Machi in general](#n1) + [1 Questions about Machi in general](#n1)
+ [1.1 What is Machi?](#n1.1) + [1.1 What is Machi?](#n1.1)
+ [1.2 What is a Machi chain?](#n1.2) + [1.2 What is a Machi "cluster of clusters"?](#n1.2)
+ [1.3 What is a Machi cluster?](#n1.3) + [1.2.1 This "cluster of clusters" idea needs a better name, don't you agree?](#n1.2.1)
+ [1.4 What is Machi like when operating in "eventually consistent" mode?](#n1.4) + [1.3 What is Machi like when operating in "eventually consistent"/"AP mode"?](#n1.3)
+ [1.5 What is Machi like when operating in "strongly consistent" mode?](#n1.5) + [1.4 What is Machi like when operating in "strongly consistent"/"CP mode"?](#n1.4)
+ [1.6 What does Machi's API look like?](#n1.6) + [1.5 What does Machi's API look like?](#n1.5)
+ [1.7 What licensing terms are used by Machi?](#n1.7) + [1.6 What licensing terms are used by Machi?](#n1.6)
+ [1.8 Where can I find the Machi source code and documentation? Can I contribute?](#n1.8) + [1.7 Where can I find the Machi source code and documentation? Can I contribute?](#n1.7)
+ [1.9 What is Machi's expected release schedule, packaging, and operating system/OS distribution support?](#n1.9) + [1.8 What is Machi's expected release schedule, packaging, and operating system/OS distribution support?](#n1.8)
+ [2 Questions about Machi relative to {{something else}}](#n2) + [2 Questions about Machi relative to {{something else}}](#n2)
+ [2.1 How is Machi better than Hadoop?](#n2.1) + [2.1 How is Machi better than Hadoop?](#n2.1)
+ [2.2 How does Machi differ from HadoopFS/HDFS?](#n2.2) + [2.2 How does Machi differ from HadoopFS/HDFS?](#n2.2)
@@ -28,15 +28,13 @@
+ [3 Machi's specifics](#n3) + [3 Machi's specifics](#n3)
+ [3.1 What technique is used to replicate Machi's files? Can other techniques be used?](#n3.1) + [3.1 What technique is used to replicate Machi's files? Can other techniques be used?](#n3.1)
+ [3.2 Does Machi have a reliance on a coordination service such as ZooKeeper or etcd?](#n3.2) + [3.2 Does Machi have a reliance on a coordination service such as ZooKeeper or etcd?](#n3.2)
+ [3.3 Are there any presentations available about Humming Consensus](#n3.3) + [3.3 Is it true that there's an allegory written to describe humming consensus?](#n3.3)
+ [3.4 Is it true that there's an allegory written to describe Humming Consensus?](#n3.4) + [3.4 How is Machi tested?](#n3.4)
+ [3.5 How is Machi tested?](#n3.5) + [3.5 Does Machi require shared disk storage? e.g. iSCSI, NBD (Network Block Device), Fibre Channel disks](#n3.5)
+ [3.6 Does Machi require shared disk storage? e.g. iSCSI, NBD (Network Block Device), Fibre Channel disks](#n3.6) + [3.6 Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device?](#n3.6)
+ [3.7 Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device?](#n3.7) + [3.7 What language(s) is Machi written in?](#n3.7)
+ [3.8 What language(s) is Machi written in?](#n3.8) + [3.8 Does Machi use the Erlang/OTP network distribution system (aka "disterl")?](#n3.8)
+ [3.9 Can Machi run on Windows? Can Machi run on 32-bit platforms?](#n3.9) + [3.9 Can I use HTTP to write/read stuff into/from Machi?](#n3.9)
+ [3.10 Does Machi use the Erlang/OTP network distribution system (aka "disterl")?](#n3.10)
+ [3.11 Can I use HTTP to write/read stuff into/from Machi?](#n3.11)
<!-- ENDOUTLINE --> <!-- ENDOUTLINE -->
@@ -46,13 +44,13 @@
<a name="n1.1"> <a name="n1.1">
### 1.1. What is Machi? ### 1.1. What is Machi?
Very briefly, Machi is a very simple append-only blob/file store. Very briefly, Machi is a very simple append-only file store.
Machi is Machi is
"dumber" than many other file stores (i.e., lacking many features "dumber" than many other file stores (i.e., lacking many features
found in other file stores) such as HadoopFS or a simple NFS or CIFS file found in other file stores) such as HadoopFS or simple NFS or CIFS file
server. server.
However, Machi is a distributed blob/file store, which makes it different However, Machi is a distributed file store, which makes it different
(and, in some ways, more complicated) than a simple NFS or CIFS file (and, in some ways, more complicated) than a simple NFS or CIFS file
server. server.
@@ -84,39 +82,45 @@ For a much longer answer, please see the
[Machi high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-machi.pdf). [Machi high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-machi.pdf).
<a name="n1.2"> <a name="n1.2">
### 1.2. What is a Machi chain? ### 1.2. What is a Machi "cluster of clusters"?
A Machi chain is a small number of machines that maintain a common set Machi's design is based on using small, well-understood and provable
of replicated files. A typical chain is of length 2 or 3. For (mathematically) techniques to maintain multiple file copies without
critical data that must be available despite several simultaneous data loss or data corruption. At its lowest level, Machi contains no
server failures, a chain length of 6 or 7 might be used. support for distribution/partitioning/sharding of files across many
servers. A typical, fully-functional Machi cluster will likely be two
or three machines.
<a name="n1.3"> However, Machi is designed to be an excellent building block for
### 1.3. What is a Machi cluster? building larger systems. A deployment of Machi "cluster of clusters"
will use the "random slicing" technique for partitioning files across
multiple Machi clusters that, as individuals, are unaware of the
larger cluster-of-clusters scheme.
A Machi cluster is a collection of Machi chains that The cluster-of-clusters management service will be fully decentralized
partitions/shards/distributes files (based on file name) across the
collection of chains. Machi uses the "random slicing" algorithm (a
variation of consistent hashing) to define the mapping of file name to
chain name.
The cluster management service will be fully decentralized
and run as a separate software service installed on each Machi and run as a separate software service installed on each Machi
cluster. This manager will appear to the local Machi server as simply cluster. This manager will appear to the local Machi server as simply
another Machi file client. The cluster managers will take another Machi file client. The cluster-of-clusters managers will take
care of file migration as the cluster grows and shrinks in capacity care of file migration as the cluster grows and shrinks in capacity
and in response to day-to-day changes in workload. and in response to day-to-day changes in workload.
Though the cluster manager has not yet been implemented, Though the cluster-of-clusters manager has not yet been implemented,
its design is fully decentralized and capable of operating despite its design is fully decentralized and capable of operating despite
multiple partial failures of its member chains. We expect this multiple partial failures of its member clusters. We expect this
design to scale easily to at least one thousand servers. design to scale easily to at least one thousand servers.
Please see the Please see the
[Machi source repository's 'doc' directory for more details](https://github.com/basho/machi/tree/master/doc/). [Machi source repository's 'doc' directory for more details](https://github.com/basho/machi/tree/master/doc/).
<a name="n1.4"> <a name="n1.2.1">
### 1.4. What is Machi like when operating in "eventually consistent" mode? #### 1.2.1. This "cluster of clusters" idea needs a better name, don't you agree?
Yes. Please help us: we are bad at naming things.
For proof that naming things is hard, see
[http://martinfowler.com/bliki/TwoHardThings.html](http://martinfowler.com/bliki/TwoHardThings.html)
<a name="n1.3">
### 1.3. What is Machi like when operating in "eventually consistent"/"AP mode"?
Machi's operating mode dictates how a Machi cluster will react to Machi's operating mode dictates how a Machi cluster will react to
network partitions. A network partition may be caused by: network partitions. A network partition may be caused by:
@@ -126,30 +130,37 @@ network partitions. A network partition may be caused by:
* An extreme server software "hang" or "pause", e.g. caused by OS * An extreme server software "hang" or "pause", e.g. caused by OS
scheduling problems such as a failing/stuttering disk device. scheduling problems such as a failing/stuttering disk device.
The consistency semantics of file operations while in eventual "AP mode" refers to the "A" and "P" properties of the "CAP
consistency mode during and after network partitions are: conjecture", meaning that the cluster will be "Available" and
"Partition tolerant".
The consistency semantics of file operations while in "AP mode" are
eventually consistent during and after network partitions:
* File write operations are permitted by any client on the "same side" * File write operations are permitted by any client on the "same side"
of the network partition. of the network partition.
* File read operations are successful for any file contents where the * File read operations are successful for any file contents where the
client & server are on the "same side" of the network partition. client & server are on the "same side" of the network partition.
* File read operations will probably fail for any file contents where the
client & server are on "different sides" of the network partition.
* After the network partition(s) is resolved, files are merged * After the network partition(s) is resolved, files are merged
together from "all sides" of the partition(s). together from "all sides" of the partition(s).
* Unique files are copied in their entirety. * Unique files are copied in their entirety.
* Byte ranges within the same file are merged. This is possible * Byte ranges within the same file are merged. This is possible
due to Machi's restrictions on file naming and file offset due to Machi's restrictions on file naming (file names are
assignment. Both file names and file offsets are always chosen always assigned by Machi servers) and file offset assignments
by Machi servers according to rules which guarantee safe (byte offsets are also always chosen by Machi servers according
mergeability. Server-assigned names are a characteristic of a to rules which guarantee safe mergeability).
"blob store".
<a name="n1.5"> <a name="n1.4">
### 1.5. What is Machi like when operating in "strongly consistent" mode? ### 1.4. What is Machi like when operating in "strongly consistent"/"CP mode"?
The consistency semantics of file operations while in strongly Machi's operating mode dictates how a Machi cluster will react to
consistency mode during and after network partitions are: network partitions.
"CP mode" refers to the "C" and "P" properties of the "CAP
conjecture", meaning that the cluster will be "Consistent" and
"Partition tolerant".
The consistency semantics of file operations while in "CP mode" are
strongly consistent during and after network partitions:
* File write operations are permitted by any client on the "same side" * File write operations are permitted by any client on the "same side"
of the network partition if and only if a quorum majority of Machi servers of the network partition if and only if a quorum majority of Machi servers
@@ -164,19 +175,19 @@ consistency mode during and after network partitions are:
Machi's design can provide the illusion of quorum minority write Machi's design can provide the illusion of quorum minority write
availability if the cluster is configured to operate with "witness availability if the cluster is configured to operate with "witness
servers". (This feaure partially implemented, as of December 2015.) servers". (This feaure is not implemented yet, as of June 2015.)
See Section 11 of See Section 11 of
[Machi chain manager high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-chain-mgr.pdf) [Machi chain manager high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-chain-mgr.pdf)
for more details. for more details.
<a name="n1.6"> <a name="n1.5">
### 1.6. What does Machi's API look like? ### 1.5. What does Machi's API look like?
The Machi API only contains a handful of API operations. The function The Machi API only contains a handful of API operations. The function
arguments shown below (in simplified form) use Erlang-style type annotations. arguments shown below use Erlang-style type annotations.
append_chunk(Prefix:binary(), Chunk:binary(), CheckSum:binary()). append_chunk(Prefix:binary(), Chunk:binary()).
append_chunk_extra(Prefix:binary(), Chunk:binary(), CheckSum:binary(), ExtraSpace:non_neg_integer()). append_chunk_extra(Prefix:binary(), Chunk:binary(), ExtraSpace:non_neg_integer()).
read_chunk(File:binary(), Offset:non_neg_integer(), Size:non_neg_integer()). read_chunk(File:binary(), Offset:non_neg_integer(), Size:non_neg_integer()).
checksum_list(File:binary()). checksum_list(File:binary()).
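For illustration only, a round-trip through these operations might look like the sketch below; the `machi_client` module name, its `connect/2` function, and the exact return shapes are assumptions made for this example, not the actual client library:

```erlang
%% Hedged usage sketch; machi_client and its return values are assumed,
%% and Host/Port are placeholders.
{ok, Conn} = machi_client:connect(Host, Port),
Chunk = <<"hello, machi">>,
%% The server chooses the real file name and byte offset; the client
%% only supplies a file name prefix.
{ok, {File, Offset}} = machi_client:append_chunk(Conn, <<"pre">>, Chunk),
%% Read back exactly the bytes that were appended.
{ok, Chunk} = machi_client:read_chunk(Conn, File, Offset, byte_size(Chunk)).
```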
@@ -194,22 +205,17 @@ Internally, there is a more complex protocol used by individual
cluster members to manage file contents and to repair damaged/missing cluster members to manage file contents and to repair damaged/missing
files. See Figure 3 in files. See Figure 3 in
[Machi high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-machi.pdf) [Machi high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-machi.pdf)
for more description. for more details.
The definitions of both the "high level" external protocol and "low <a name="n1.6">
level" internal protocol are in a ### 1.6. What licensing terms are used by Machi?
[Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview)
definition at [./src/machi.proto](./src/machi.proto).
<a name="n1.7">
### 1.7. What licensing terms are used by Machi?
All Machi source code and documentation is licensed by All Machi source code and documentation is licensed by
[Basho Technologies, Inc.](http://www.basho.com/) [Basho Technologies, Inc.](http://www.basho.com/)
under the [Apache Public License version 2](https://github.com/basho/machi/tree/master/LICENSE). under the [Apache Public License version 2](https://github.com/basho/machi/tree/master/LICENSE).
<a name="n1.8"> <a name="n1.7">
### 1.8. Where can I find the Machi source code and documentation? Can I contribute? ### 1.7. Where can I find the Machi source code and documentation? Can I contribute?
All Machi source code and documentation can be found at GitHub: All Machi source code and documentation can be found at GitHub:
[https://github.com/basho/machi](https://github.com/basho/machi). [https://github.com/basho/machi](https://github.com/basho/machi).
@@ -223,11 +229,11 @@ ideas for improvement, please see our contributing & collaboration
guidelines at guidelines at
[https://github.com/basho/machi/blob/master/CONTRIBUTING.md](https://github.com/basho/machi/blob/master/CONTRIBUTING.md). [https://github.com/basho/machi/blob/master/CONTRIBUTING.md](https://github.com/basho/machi/blob/master/CONTRIBUTING.md).
<a name="n1.9"> <a name="n1.8">
### 1.9. What is Machi's expected release schedule, packaging, and operating system/OS distribution support? ### 1.8. What is Machi's expected release schedule, packaging, and operating system/OS distribution support?
Basho expects that Machi's first major product release will take place Basho expects that Machi's first release will take place near the end
during the 2nd quarter of 2016. of calendar year 2015.
Basho's official support for operating systems (e.g. Linux, FreeBSD), Basho's official support for operating systems (e.g. Linux, FreeBSD),
operating system packaging (e.g. CentOS rpm/yum package management, operating system packaging (e.g. CentOS rpm/yum package management,
@@ -302,15 +308,15 @@ file's writable phase).
<tr> <tr>
<td> Does not have any file distribution/partitioning/sharding across <td> Does not have any file distribution/partitioning/sharding across
Machi chains: in a single Machi chain, all files are replicated by Machi clusters: in a single Machi cluster, all files are replicated by
all servers in the chain. The "random slicing" technique is used all servers in the cluster. The "cluster of clusters" concept is used
to distribute/partition/shard files across multiple Machi clusters. to distribute/partition/shard files across multiple Machi clusters.
<td> File distribution/partitioning/sharding is performed <td> File distribution/partitioning/sharding is performed
automatically by the HDFS "name node". automatically by the HDFS "name node".
<tr> <tr>
<td> Machi requires no central "name node" for single chain use or <td> Machi requires no central "name node" for single cluster use.
for multi-chain cluster use. Machi requires no central "name node" for "cluster of clusters" use
<td> Requires a single "namenode" server to maintain file system contents <td> Requires a single "namenode" server to maintain file system contents
and file content mapping. (May be deployed with a "secondary and file content mapping. (May be deployed with a "secondary
namenode" to reduce unavailability when the primary namenode fails.) namenode" to reduce unavailability when the primary namenode fails.)
@@ -476,8 +482,8 @@ difficult to adapt to Machi's design goals:
* Both protocols use quorum majority consensus, which requires a * Both protocols use quorum majority consensus, which requires a
minimum of *2F + 1* working servers to tolerate *F* failures. For minimum of *2F + 1* working servers to tolerate *F* failures. For
example, to tolerate 2 server failures, quorum majority protocols example, to tolerate 2 server failures, quorum majority protocols
require a minimum of 5 servers. To tolerate the same number of require a minimum of 5 servers. To tolerate the same number of
failures, Chain Replication requires a minimum of only 3 servers. failures, Chain replication requires only 3 servers.
* Machi's use of "humming consensus" to manage internal server * Machi's use of "humming consensus" to manage internal server
metadata state would also (probably) require conversion to Paxos or metadata state would also (probably) require conversion to Paxos or
Raft. (Or "outsourced" to a service such as ZooKeeper.) Raft. (Or "outsourced" to a service such as ZooKeeper.)
@@ -494,17 +500,7 @@ Humming consensus is described in the
[Machi chain manager high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-chain-mgr.pdf). [Machi chain manager high level design doc](https://github.com/basho/machi/tree/master/doc/high-level-chain-mgr.pdf).
<a name="n3.3"> <a name="n3.3">
### 3.3. Are there any presentations available about Humming Consensus? ### 3.3. Is it true that there's an allegory written to describe humming consensus?
Scott recently (November 2015) gave a presentation at the
[RICON 2015 conference](http://ricon.io) about one of the techniques
used by Machi; "Managing Chain Replication Metadata with
Humming Consensus" is available online now.
* [slides (PDF format)](http://ricon.io/speakers/slides/Scott_Fritchie_Ricon_2015.pdf)
* [video](https://www.youtube.com/watch?v=yR5kHL1bu1Q)
<a name="n3.4">
### 3.4. Is it true that there's an allegory written to describe Humming Consensus?
Yes. In homage to Leslie Lamport's original paper about the Paxos Yes. In homage to Leslie Lamport's original paper about the Paxos
protocol, "The Part-time Parliamant", there is an allegorical story protocol, "The Part-time Parliamant", there is an allegorical story
@@ -515,8 +511,8 @@ The full story, full of wonder and mystery, is called
There is also a There is also a
[short followup blog posting](http://www.snookles.com/slf-blog/2015/03/20/on-humming-consensus-an-allegory-part-2/). [short followup blog posting](http://www.snookles.com/slf-blog/2015/03/20/on-humming-consensus-an-allegory-part-2/).
<a name="n3.5"> <a name="n3.4">
### 3.5. How is Machi tested? ### 3.4. How is Machi tested?
While not formally proven yet, Machi's implementations of Chain While not formally proven yet, Machi's implementations of Chain
Replication and of humming consensus have been extensively tested with Replication and of humming consensus have been extensively tested with
@@ -541,20 +537,16 @@ change several times during any single test case) and a random series
of cluster operations, an event trace of all cluster activity is used of cluster operations, an event trace of all cluster activity is used
to verify that no safety-critical rules have been violated. to verify that no safety-critical rules have been violated.
All test code is available in the [./test](./test) subdirectory. <a name="n3.5">
Modules that use QuickCheck will use a file suffix of `_eqc`, for ### 3.5. Does Machi require shared disk storage? e.g. iSCSI, NBD (Network Block Device), Fibre Channel disks
example, [./test/machi_ap_repair_eqc.erl](./test/machi_ap_repair_eqc.erl).
<a name="n3.6">
### 3.6. Does Machi require shared disk storage? e.g. iSCSI, NBD (Network Block Device), Fibre Channel disks
No, Machi's design assumes that each Machi server runs on fully No, Machi's design assumes that each Machi server runs on fully
independent hardware and assumes only standard local disks (Winchester independent hardware and assumes only standard local disks (Winchester
and/or SSD style) with local-only interfaces (e.g. SATA, SCSI, PCI) in and/or SSD style) with local-only interfaces (e.g. SATA, SCSI, PCI) in
each machine. each machine.
<a name="n3.7"> <a name="n3.6">
### 3.7. Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device? ### 3.6. Does Machi require or assume that servers with large numbers of disks must use RAID-0/1/5/6/10/50/60 to create a single block device?
No. When used with servers with multiple disks, the intent is to No. When used with servers with multiple disks, the intent is to
deploy multiple Machi servers per machine: one Machi server per disk. deploy multiple Machi servers per machine: one Machi server per disk.
@@ -572,13 +564,10 @@ deploy multiple Machi servers per machine: one Machi server per disk.
placement relative to 12 servers is smaller than a placement problem placement relative to 12 servers is smaller than a placement problem
of managing 264 separate disks (if each of 12 servers has 22 disks). of managing 264 separate disks (if each of 12 servers has 22 disks).
<a name="n3.8"> <a name="n3.7">
### 3.8. What language(s) is Machi written in? ### 3.7. What language(s) is Machi written in?
So far, Machi is written in Erlang, mostly. Machi uses at least one So far, Machi is written in 100% Erlang.
library, [ELevelDB](https://github.com/basho/eleveldb), that is
implemented both in C++ and in Erlang, using Erlang NIFs (Native
Interface Functions) to allow Erlang code to call C++ functions.
In the event that we encounter a performance problem that cannot be In the event that we encounter a performance problem that cannot be
solved within the Erlang/OTP runtime environment, all of Machi's solved within the Erlang/OTP runtime environment, all of Machi's
@@ -587,16 +576,8 @@ in C, Java, or other "gotta go fast fast FAST!!" programming
language. We expect that the Chain Replication manager and other language. We expect that the Chain Replication manager and other
critical "control plane" software will remain in Erlang. critical "control plane" software will remain in Erlang.
<a name="n3.9"> <a name="n3.8">
### 3.9. Can Machi run on Windows? Can Machi run on 32-bit platforms? ### 3.8. Does Machi use the Erlang/OTP network distribution system (aka "disterl")?
The ELevelDB NIF does not compile or run correctly on Erlang/OTP
Windows platforms, nor does it compile correctly on 32-bit platforms.
Machi should support all 64-bit UNIX-like platforms that are supported
by Erlang/OTP and ELevelDB.
<a name="n3.10">
### 3.10. Does Machi use the Erlang/OTP network distribution system (aka "disterl")?
No, Machi doesn't use Erlang/OTP's built-in distributed message No, Machi doesn't use Erlang/OTP's built-in distributed message
passing system. The code would be *much* simpler if we did use passing system. The code would be *much* simpler if we did use
@@ -607,16 +588,19 @@ bit-twiddling magicSPEED ... without also having to find a replacement
for disterl. (Or without having to re-invent disterl's features in for disterl. (Or without having to re-invent disterl's features in
another language.) another language.)
All wire protocols used by Machi are defined & implemented using <a name="artisanal-protocol">
[Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview). In the first drafts of the Machi code, the inter-node communication
The definition file can be found at [./src/machi.proto](./src/machi.proto). uses a hand-crafted, artisanal, mostly ASCII protocol as part of a
"demo day" quick & dirty prototype. Work is underway (summer of 2015)
to replace that protocol gradually with a well-structured,
well-documented protocol based on Protocol Buffers data serialization.
<a name="n3.11"> <a name="n3.9">
### 3.11. Can I use HTTP to write/read stuff into/from Machi? ### 3.9. Can I use HTTP to write/read stuff into/from Machi?
Short answer: No, not yet. Yes, sort of. For as long as the legacy of
Machi's first internal protocol & code still
Longer answer: No, but it was possible as a hack, many months ago, see survives, it's possible to use a
[primitive/hack'y HTTP interface that is described in this source code commit log](https://github.com/basho/machi/commit/6cebf397232cba8e63c5c9a0a8c02ba391b20fef). [primitive/hack'y HTTP interface that is described in this source code commit log](https://github.com/basho/machi/commit/6cebf397232cba8e63c5c9a0a8c02ba391b20fef).
Please note that commit `6cebf397232cba8e63c5c9a0a8c02ba391b20fef` is Please note that commit `6cebf397232cba8e63c5c9a0a8c02ba391b20fef` is
required to try using this feature: the code has since bit-rotted and required to try using this feature: the code has since bit-rotted and

Makefile

@@ -8,9 +8,8 @@ ifeq ($(REBAR),)
REBAR = $(BASE_DIR)/rebar REBAR = $(BASE_DIR)/rebar
endif endif
OVERLAY_VARS ?= OVERLAY_VARS ?=
EUNIT_OPTS = -v
.PHONY: rel stagedevrel deps package pkgclean edoc .PHONY: rel deps package pkgclean edoc
all: deps compile all: deps compile
@@ -35,6 +34,11 @@ deps:
clean: clean:
$(REBAR) -r clean $(REBAR) -r clean
test: deps compile eunit
eunit:
$(REBAR) -v skip_deps=true eunit
edoc: edoc-clean edoc: edoc-clean
$(REBAR) skip_deps=true doc $(REBAR) skip_deps=true doc
@@ -46,6 +50,28 @@ pulse: compile
#env USE_PULSE=1 $(REBAR) skip_deps=true clean compile #env USE_PULSE=1 $(REBAR) skip_deps=true clean compile
#env USE_PULSE=1 $(REBAR) skip_deps=true -D PULSE eunit -v #env USE_PULSE=1 $(REBAR) skip_deps=true -D PULSE eunit -v
APPS = kernel stdlib sasl erts ssl compiler eunit crypto
PLT = $(HOME)/.machi_dialyzer_plt
build_plt: deps compile
dialyzer --build_plt --output_plt $(PLT) --apps $(APPS) deps/*/ebin
DIALYZER_DEP_APPS = ebin/machi_pb.beam deps/protobuffs/ebin
DIALYZER_FLAGS = -Wno_return -Wrace_conditions -Wunderspecs
dialyzer: deps compile
dialyzer $(DIALYZER_FLAGS) --plt $(PLT) ebin $(DIALYZER_DEP_APPS) | \
egrep -v -f ./filter-dialyzer-dep-warnings
dialyzer-test: deps compile
echo Force rebar to recompile .eunit dir w/o running tests > /dev/null
rebar skip_deps=true eunit suite=lamport_clock
dialyzer $(DIALYZER_FLAGS) --plt $(PLT) .eunit $(DIALYZER_DEP_APPS) | \
egrep -v -f ./filter-dialyzer-dep-warnings
clean_plt:
rm $(PLT)
## ##
## Release targets ## Release targets
## ##
@@ -56,39 +82,3 @@ relclean:
stage : rel stage : rel
$(foreach dep,$(wildcard deps/*), rm -rf rel/$(REPO)/lib/$(shell basename $(dep))* && ln -sf $(abspath $(dep)) rel/$(REPO)/lib;) $(foreach dep,$(wildcard deps/*), rm -rf rel/$(REPO)/lib/$(shell basename $(dep))* && ln -sf $(abspath $(dep)) rel/$(REPO)/lib;)
##
## Developer targets
##
## devN - Make a dev build for node N
## stagedevN - Make a stage dev build for node N (symlink libraries)
## devrel - Make a dev build for 1..$DEVNODES
## stagedevrel Make a stagedev build for 1..$DEVNODES
##
## Example, make a 68 node devrel cluster
## make stagedevrel DEVNODES=68
.PHONY : stagedevrel devrel
DEVNODES ?= 3
# 'seq' is not available on all *BSD, so using an alternate in awk
SEQ = $(shell awk 'BEGIN { for (i = 1; i < '$(DEVNODES)'; i++) printf("%i ", i); print i ;exit(0);}')
$(eval stagedevrel : $(foreach n,$(SEQ),stagedev$(n)))
$(eval devrel : $(foreach n,$(SEQ),dev$(n)))
dev% : all
mkdir -p dev
rel/gen_dev $@ rel/vars/dev_vars.config.src rel/vars/$@_vars.config
(cd rel && ../rebar generate target_dir=../dev/$@ overlay_vars=vars/$@_vars.config)
stagedev% : dev%
$(foreach dep,$(wildcard deps/*), rm -rf dev/$^/lib/$(shell basename $(dep))* && ln -sf $(abspath $(dep)) dev/$^/lib;)
devclean: clean
rm -rf dev
DIALYZER_APPS = kernel stdlib sasl erts ssl compiler eunit crypto public_key syntax_tools
PLT = $(HOME)/.machi_dialyzer_plt
include tools.mk

README.md

@@ -1,136 +1,61 @@
# Machi: a distributed, decentralized blob/large file store # Machi
[Travis-CI](http://travis-ci.org/basho/machi) :: ![Travis-CI](https://secure.travis-ci.org/basho/machi.png) [Travis-CI](http://travis-ci.org/basho/machi) :: ![Travis-CI](https://secure.travis-ci.org/basho/machi.png)
Outline Our goal is a robust & reliable, distributed, highly available(*),
large file store based upon write-once registers, append-only files,
Chain Replication, and client-server style architecture. All members
of the cluster store all of the files. Distributed load
balancing/sharding of files is __outside__ of the scope of this
system. However, it is a high priority that this system be able to
integrate easily into systems that do provide distributed load
balancing, e.g., Riak Core. Although strong consistency is a major
feature of Chain Replication, first use cases will focus mainly on
eventual consistency features --- strong consistency design will be
discussed in a separate design document (read more below).
1. [Why another blob/file store?](#sec1) The ability for Machi to maintain strong consistency will make it
2. [Where to learn more about Machi](#sec2) attractive as a toolkit for building things like CORFU and Tango as
3. [Development status summary](#sec3) well as better-known open source software such as Kafka's file
4. [Contributing to Machi's development](#sec4) replication. (See the bibliography of the [Machi high level design
doc](./doc/high-level-machi.pdf) for further references.)
<a name="sec1"> (*) Capable of operating in "AP mode" or "CP mode" relative to the
## 1. Why another blob/file store? CAP Theorem.
Our goal is a robust & reliable, distributed, highly available, large ## Status: mid-June 2015: work is underway
file and blob store. Such stores already exist, both in the open source world
and in the commercial world. Why reinvent the wheel? We believe
there are three reasons, ordered by decreasing rarity.
1. We want end-to-end checksums for all file data, from the initial The two major design documents for Machi are now ready or nearly ready
file writer to every file reader, anywhere, all the time. for internal Basho and external party review. Please see the
2. We need flexibility to trade consistency for availability: [doc](./doc) directory's [README](./doc) for details
e.g. weak consistency in exchange for being available in cases
of partial system failure.
3. We want to manage file replicas in a way that's provably correct
and also easy to test.
Criterion #3 is difficult to find in the open source world but perhaps * Machi high level design
not impossible. * Machi chain self-management design
If we have app use cases where availability is more important than The work of implementing the first draft of Machi is now underway. The
consistency, then systems that meet criterion #2 are also rare. code from the [prototype/demo-day-hack](prototype/demo-day-hack/) directory is
Most file stores provide only strong consistency and therefore being used as the initial scaffolding.
have unavoidable, unavailable behavior when parts of the system
fail.
What if we want a file store that is always available to write new
file data and attempts best-effort file reads?
If we really do care about data loss and/or data corruption, then we * The chain manager is ready for "AP mode" use in eventual
really want both #3 and #1. Unfortunately, systems that meet consistency use cases.
criterion #1 are _very rare_. (Nonexistent?)
Why? This is 2015. We have decades of research that shows
that computer hardware can (and
indeed does) corrupt data at nearly every level of the modern
client/server application stack. Systems with end-to-end data
corruption detection should be ubiquitous today. Alas, they are not.
Machi is an effort to change the deplorable state of the world, one * All Machi client/server protocols are based on
Erlang function at a time. [Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview).
* The current specification for Machi's protocols can be found at
[https://github.com/basho/machi/blob/master/src/machi.proto](https://github.com/basho/machi/blob/master/src/machi.proto).
* The Machi PB protocol is not yet stable. Expect change!
* The Erlang language client implementation of the high-level
protocol flavor is very brittle (e.g., very little error
handling yet).
* The Erlang language client implementation of the low-level
protocol flavor are still a work-in-progress ... but they are
more robust than the high-level library's implementation.
<a name="sec2"> If you'd like to work on a protocol such as Thrift, UBF,
## 2. Where to learn more about Machi msgpack over UDP, or some other protocol, let us know by
[opening an issue](./issues/new) to discuss it.
The two major design documents for Machi are now mostly stable. ## Contributing to Machi: source code, documentation, etc.
Please see the [doc](./doc) directory's [README](./doc) for details.
We also have a
[Frequently Asked Questions (FAQ) list](./FAQ.md).
Scott recently (November 2015) gave a presentation at the
[RICON 2015 conference](http://ricon.io) about one of the techniques
used by Machi; "Managing Chain Replication Metadata with
Humming Consensus" is available online now.
* [slides (PDF format)](http://ricon.io/speakers/slides/Scott_Fritchie_Ricon_2015.pdf)
* [video](https://www.youtube.com/watch?v=yR5kHL1bu1Q)
See later in this document for how to run the Humming Consensus demos,
including the network partition simulator.
<a name="sec3">
## 3. Development status summary
Mid-March 2016: The Machi development team has been downsized in
recent months, and the pace of development has slowed. Here is a
summary of the status of Machi's major components.
* Humming Consensus and the chain manager
* No new safety bugs have been found by model-checking tests.
* A new document,
[Hands-on experiments with Machi and Humming Consensus](doc/humming-consensus-demo.md)
is now available. It is a tutorial for setting up a 3 virtual
machine Machi cluster and how to demonstrate the chain manager's
reactions to server stops & starts, crashes & restarts, and pauses
(simulated by `SIGSTOP` and `SIGCONT`).
* The chain manager can still make suboptimal-but-safe choices for
chain transitions when a server hangs/pauses temporarily.
* Recent chain manager changes have made the instability window
much shorter when the slow/paused server resumes execution.
* Scott believes that a modest change to the chain manager's
calculation of a new projection can make flapping in this (and
many other cases) less likely. Currently, the new local
projection is calculated using only local state (i.e., the chain
manager's internal state + the fitness server's state).
However, if the "latest" projection read from the public
projection stores were also input to the new projection
calculation function, then many obviously bad projections can be
avoided without needing rounds of Humming Consensus to
demonstrate that a bad projection is bad.
* FLU/data server process
* All known correctness bugs have been fixed.
* Performance has not yet been measured. Performance measurement
and enhancements are scheduled to start in the middle of March 2016.
(This will include a much-needed update to the `basho_bench` driver.)
* Access protocols and client libraries
* The protocol used by both external clients and internally (instead
of using Erlang's native message passing mechanisms) is based on
Protocol Buffers.
* [Machi PB protocol specification: ./src/machi.proto](./src/machi.proto)
* At the moment, the PB specification contains two protocols.
Sometime in the near future, the spec will be split to separate
the external client API (the "high" protocol) from the internal
communication API (the "low" protocol).
* Recent conference talks about Machi
* Erlang Factory San Francisco 2016
[the slides and video recording](http://www.erlang-factory.com/sfbay2016/scott-lystig-fritchie)
will be available a few weeks after the conference ends on March
11, 2016.
* Ricon 2015
* [The slides](http://ricon.io/archive/2015/slides/Scott_Fritchie_Ricon_2015.pdf)
* and the [video recording](https://www.youtube.com/watch?v=yR5kHL1bu1Q&index=13&list=PL9Jh2HsAWHxIc7Tt2M6xez_TOP21GBH6M)
are now available.
* If you would like to run the Humming Consensus code (with or without
the network partition simulator) as described in the RICON 2015
presentation, please see the
[Humming Consensus demo doc](./doc/humming_consensus_demo.md).
<a name="sec4">
## 4. Contributing to Machi's development
### 4.1 License
Basho Technologies, Inc. has committed to licensing all work for Machi Basho Technologies, Inc. has committed to licensing all work for Machi
under the under the
@@ -146,29 +71,57 @@ We invite all contributors to review the
[CONTRIBUTING.md](./CONTRIBUTING.md) document for guidelines for [CONTRIBUTING.md](./CONTRIBUTING.md) document for guidelines for
working with the Basho development team. working with the Basho development team.
### 4.2 Development environment requirements ## A brief survey of the directories in this repository
* A list of Frequently Asked Questions, a.k.a.
[the Machi FAQ](./FAQ.md).
* The [doc](./doc/) directory: home for major documents about Machi:
high level design documents as well as exploration of features still
under design & review within Basho.
* The `ebin` directory: used for compiled application code
* The `include`, `src`, and `test` directories: contain the header
files, source files, and test code for Machi, respectively.
* The [prototype](./prototype/) directory: contains proof of concept
code, scaffolding libraries, and other exploratory code. Curious
readers should see the [prototype/README.md](./prototype/README.md)
file for more explanation of the small sub-projects found here.
## Development environment requirements
All development to date has been done with Erlang/OTP version 17 on OS All development to date has been done with Erlang/OTP version 17 on OS
X. The only known limitations for using R16 are minor type X. The only known limitations for using R16 are minor type
specification differences between R16 and 17, but we strongly suggest specification differences between R16 and 17, but we strongly suggest
continuing development using version 17. continuing development using version 17.
We also assume that you have the standard UNIX/Linux developer We also assume that you have the standard UNIX/Linux developer
tool chain for C and C++ applications. Also, we assume tool chain for C and C++ applications. Specifically, we assume `make`
that Git and GNU Make are available. is available. The utility used to compile the Machi source code,
The utility used to compile the Machi source code,
`rebar`, is pre-compiled and included in the repo. `rebar`, is pre-compiled and included in the repo.
For more details, please see the
[Machi development environment prerequisites doc](./doc/dev-prerequisites.md).
Machi has a dependency on the There are no known OS limits at this time: any platform that supports
[ELevelDB](https://github.com/basho/eleveldb) library. ELevelDB only Erlang/OTP should be sufficient for Machi. This may change over time
supports UNIX/Linux OSes and 64-bit versions of Erlang/OTP only; we (e.g., adding NIFs which can make full portability to Windows OTP
apologize to Windows-based and 32-bit-based Erlang developers for this environments difficult), but it hasn't happened yet.
restriction.
### 4.3 New protocols and features ## Contributions
Basho encourages contributions to Machi from the community. Here's how
to get started.
* Fork the appropriate sub-projects that are affected by your change.
* Create a topic branch for your change and checkout that branch.
git checkout -b some-topic-branch
* Make your changes and run the test suite if one is provided. (see below)
* Commit your changes and push them to your fork.
* Open pull-requests for the appropriate projects.
* Contributors will review your pull request, suggest changes, and merge it when it's ready and/or offer feedback.
* To report a bug or issue, please open a new issue against this repository.
-The Machi team at Basho,
[Scott Lystig Fritchie](mailto:scott@basho.com), technical lead, and
[Matt Brender](mailto:mbrender@basho.com), your developer advocate.
If you'd like to work on a protocol such as Thrift, UBF,
msgpack over UDP, or some other protocol, let us know by
[opening an issue to discuss it](./issues/new).

filter-dialyzer-dep-warnings

@@ -1,15 +0,0 @@
### The auto-generated code of machi_pb.beam has some complaints, not fixed yet.
machi_pb.erl:0:
##################################################
######## Specific types #####################
##################################################
Unknown types:
basho_bench_config:get/2
machi_partition_simulator:get/1
hamcrest:matchspec/0
##################################################
######## Specific messages #####################
##################################################
machi_chain_manager1.erl:2473: The created fun has no local return
machi_chain_manager1.erl:2184: The pattern <_P1, P2, Else = {'expected_author2', UPI1_tail, _}> can never match the type <#projection_v1{epoch_number::'undefined' | non_neg_integer(),epoch_csum::'undefined' | binary(),author_server::atom(),chain_name::atom(),all_members::'undefined' | [atom()],witnesses::[atom()],creation_time::'undefined' | {non_neg_integer(),non_neg_integer(),non_neg_integer()},mode::'ap_mode' | 'cp_mode',upi::'undefined' | [atom()],repairing::'undefined' | [atom()],down::'undefined' | [atom()],dbg::'undefined' | [any()],dbg2::'undefined' | [any()],members_dict::'undefined' | [{_,_}]},#projection_v1{epoch_number::'undefined' | non_neg_integer(),epoch_csum::binary(),author_server::atom(),chain_name::atom(),all_members::'undefined' | [atom()],witnesses::[atom()],creation_time::'undefined' | {non_neg_integer(),non_neg_integer(),non_neg_integer()},mode::'ap_mode' | 'cp_mode',upi::'undefined' | [atom()],repairing::'undefined' | [atom()],down::'undefined' | [atom()],dbg::'undefined' | [any()],dbg2::'undefined' | [any()],members_dict::'undefined' | [{_,_}]},'true'>
machi_chain_manager1.erl:2233: The pattern <_P1 = {'projection_v1', _, _, _, _, _, _, _, 'cp_mode', UPI1, Repairing1, _, _, _, _}, _P2 = {'projection_v1', _, _, _, _, _, _, _, 'cp_mode', UPI2, Repairing2, _, _, _, _}, Else = {'epoch_not_si', EpochX, 'not_gt', EpochY}> can never match the type <#projection_v1{epoch_number::'undefined' | non_neg_integer(),epoch_csum::'undefined' | binary(),author_server::atom(),chain_name::atom(),all_members::'undefined' | [atom()],witnesses::[atom()],creation_time::'undefined' | {non_neg_integer(),non_neg_integer(),non_neg_integer()},mode::'ap_mode' | 'cp_mode',upi::'undefined' | [atom()],repairing::'undefined' | [atom()],down::'undefined' | [atom()],dbg::'undefined' | [any()],dbg2::'undefined' | [any()],members_dict::'undefined' | [{_,_}]},#projection_v1{epoch_number::'undefined' | non_neg_integer(),epoch_csum::binary(),author_server::atom(),chain_name::atom(),all_members::'undefined' | [atom()],witnesses::[atom()],creation_time::'undefined' | {non_neg_integer(),non_neg_integer(),non_neg_integer()},mode::'ap_mode' | 'cp_mode',upi::'undefined' | [atom()],repairing::'undefined' | [atom()],down::'undefined' | [atom()],dbg::'undefined' | [any()],dbg2::'undefined' | [any()],members_dict::'undefined' | [{_,_}]},'true'>

doc/README.md

@@ -6,6 +6,20 @@ Erlang documentation, please use this link:
## Documents in this directory ## Documents in this directory
### chain-self-management-sketch.org
[chain-self-management-sketch.org](chain-self-management-sketch.org)
is a mostly-deprecated draft of
an introduction to the
self-management algorithm proposed for Machi. Most material has been
moved to the [high-level-chain-mgr.pdf](high-level-chain-mgr.pdf) document.
### cluster-of-clusters (directory)
This directory contains the sketch of the "cluster of clusters" design
strawman for partitioning/distributing/sharding files across a large
number of independent Machi clusters.
### high-level-machi.pdf ### high-level-machi.pdf
[high-level-machi.pdf](high-level-machi.pdf) [high-level-machi.pdf](high-level-machi.pdf)
@@ -36,9 +50,9 @@ introduction to the Humming Consensus algorithm. Its abstract:
> of file updates to all replica servers in a Machi cluster. Chain > of file updates to all replica servers in a Machi cluster. Chain
> Replication is a variation of primary/backup replication where the > Replication is a variation of primary/backup replication where the
> order of updates between the primary server and each of the backup > order of updates between the primary server and each of the backup
> servers is strictly ordered into a single "chain". Management of > servers is strictly ordered into a single ``chain''. Management of
> Chain Replication's metadata, e.g., "What is the current order of > Chain Replication's metadata, e.g., ``What is the current order of
> servers in the chain?", remains an open research problem. The > servers in the chain?'', remains an open research problem. The
> current state of the art for Chain Replication metadata management > current state of the art for Chain Replication metadata management
> relies on an external oracle (e.g., ZooKeeper) or the Elastic > relies on an external oracle (e.g., ZooKeeper) or the Elastic
> Replication algorithm. > Replication algorithm.
@@ -46,7 +60,7 @@ introduction to the Humming Consensus algorithm. Its abstract:
> This document describes the Machi chain manager, the component > This document describes the Machi chain manager, the component
> responsible for managing Chain Replication metadata state. The chain > responsible for managing Chain Replication metadata state. The chain
> manager uses a new technique, based on a variation of CORFU, called > manager uses a new technique, based on a variation of CORFU, called
> "humming consensus". > ``humming consensus''.
> Humming consensus does not require active participation by all or even > Humming consensus does not require active participation by all or even
> a majority of participants to make decisions. Machi's chain manager > a majority of participants to make decisions. Machi's chain manager
> bases its logic on humming consensus to make decisions about how to > bases its logic on humming consensus to make decisions about how to
@@ -57,18 +71,3 @@ introduction to the Humming Consensus algorithm. Its abstract:
> decision during that epoch. When a differing decision is discovered, > decision during that epoch. When a differing decision is discovered,
> new time epochs are proposed in which a new consensus is reached and > new time epochs are proposed in which a new consensus is reached and
> disseminated to all available participants. > disseminated to all available participants.
### chain-self-management-sketch.org
[chain-self-management-sketch.org](chain-self-management-sketch.org)
is a mostly-deprecated draft of
an introduction to the
self-management algorithm proposed for Machi. Most material has been
moved to the [high-level-chain-mgr.pdf](high-level-chain-mgr.pdf) document.
### cluster (directory)
This directory contains the sketch of the cluster design
strawman for partitioning/distributing/sharding files across a large
number of independent Machi chains.

[Binary image file added (8.7 KiB); content not shown.]

[Binary image file added (7.8 KiB); content not shown.]

doc/cluster-of-clusters/name-game-sketch.org

@@ -0,0 +1,435 @@
-*- mode: org; -*-
#+TITLE: Machi cluster-of-clusters "name game" sketch
#+AUTHOR: Scott
#+STARTUP: lognotedone hidestars indent showall inlineimages
#+SEQ_TODO: TODO WORKING WAITING DONE
#+COMMENT: M-x visual-line-mode
#+COMMENT: Also, disable auto-fill-mode
* 1. "Name Games" with random-slicing style consistent hashing
Our goal: to distribute lots of files very evenly across a cluster of
Machi clusters (hereafter called a "cluster of clusters" or "CoC").
* 2. Assumptions
** Basic familiarity with Machi high level design and Machi's "projection"
The [[https://github.com/basho/machi/blob/master/doc/high-level-machi.pdf][Machi high level design document]] contains all of the basic
background assumed by the rest of this document.
** Familiarity with the Machi cluster-of-clusters/CoC concept
This isn't yet well-defined (April 2015). However, it's clear from
the [[https://github.com/basho/machi/blob/master/doc/high-level-machi.pdf][Machi high level design document]] that Machi alone does not support
any kind of file partitioning/distribution/sharding across multiple
small Machi clusters. There must be another layer above a Machi cluster to
provide such partitioning services.
The name "cluster of clusters" orignated within Basho to avoid
conflicting use of the word "cluster". A Machi cluster is usually
synonymous with a single Chain Replication chain and a single set of
machines (e.g. 2-5 machines). However, in the not-so-far future, we
expect much more complicated patterns of Chain Replication to be used
in real-world deployments.
"Cluster of clusters" is clunky and long, but we haven't found a good
substitute yet. If you have a good suggestion, please contact us!
~^_^~
Using the [[https://github.com/basho/machi/tree/master/prototype/demo-day-hack][cluster-of-clusters quick-and-dirty prototype]] as an
architecture sketch, let's now assume that we have ~N~ independent Machi
clusters. We wish to provide partitioned/distributed file storage
across all ~N~ clusters. We call the entire collection of ~N~ Machi
clusters a "cluster of clusters", or abbreviated "CoC".
** Continue CoC prototype's assumption: a Machi cluster is unaware of CoC
Let's continue with an assumption that an individual Machi cluster
inside of the cluster-of-clusters is completely unaware of the
cluster-of-clusters layer.
We may need to break this assumption sometime in the future? It isn't
quite clear yet, sorry.
** Analogy: "neighborhood : city :: Machi : cluster-of-clusters"
Analogy: The word "machi" in Japanese means small town or
neighborhood. Just as the Tokyo Metropolitan Area is built from many
machis and smaller cities, a big, partitioned file store can
be built out of many small Machi clusters.
** The reader is familiar with the random slicing technique
I'd done something very-very-nearly-identical for the Hibari database
6 years ago. But the Hibari technique was based on stuff I did at
Sendmail, Inc., so it felt like old news to me. {shrug}
The Hibari documentation has a brief photo illustration of how random
slicing works, see [[http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration][Hibari Sysadmin Guide, chain migration]]
For a comprehensive description, please see these two papers:
#+BEGIN_QUOTE
Reliable and Randomized Data Distribution Strategies for Large Scale Storage Systems
Alberto Miranda et al.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.226.5609
(short version, HIPC'11)
Random Slicing: Efficient and Scalable Data Placement for Large-Scale
Storage Systems
Alberto Miranda et al.
DOI: http://dx.doi.org/10.1145/2632230 (long version, ACM Transactions
on Storage, Vol. 10, No. 3, Article 9, 2014)
#+END_QUOTE
** We use random slicing to map CoC file names -> Machi cluster ID/name
We will use a single random slicing map. This map (called ~Map~ in
the descriptions below), together with the random slicing hash
function (called ~rs_hash()~ below), will be used to map:
#+BEGIN_QUOTE
CoC client-visible file name -> Machi cluster ID/name/thingie
#+END_QUOTE
** Machi cluster ID/name management: TBD, but, really, should be simple
The mapping from:
#+BEGIN_QUOTE
Machi CoC member ID/name/thingie -> ???
#+END_QUOTE
... remains To Be Determined. But, really, this is going to be pretty
simple. The ID/name/thingie will probably be a human-friendly,
printable ASCII string, and the "???" will probably be a single Machi
cluster projection data structure.
The Machi projection is enough information to contact any member of
that cluster and, if necessary, request the most up-to-date projection
information required to use that cluster.
It's likely that the projection given by this map will be out-of-date,
so the client must be ready to use the standard Machi procedure to
request the cluster's current projection, in any case.
* 3. A simple illustration
I'm borrowing an illustration from the HibariDB documentation here,
but it fits my purposes quite well. (And I originally created that
image, and the use license is OK.)
#+CAPTION: Illustration of 'Map', using four Machi clusters
[[./migration-4.png]]
Assume that we have a random slicing map called ~Map~. This particular
~Map~ maps the unit interval onto 4 Machi clusters:
| Hash range | Cluster ID |
|-------------+------------|
| 0.00 - 0.25 | Cluster1 |
| 0.25 - 0.33 | Cluster4 |
| 0.33 - 0.58 | Cluster2 |
| 0.58 - 0.66 | Cluster4 |
| 0.66 - 0.91 | Cluster3 |
| 0.91 - 1.00 | Cluster4 |
Then, if we had CoC file name "~foo~", the hash ~SHA("foo")~ maps to about
0.05 on the unit interval. So, according to ~Map~, the value of
~rs_hash("foo",Map) = Cluster1~. Similarly, ~SHA("hello")~ is about
0.67 on the unit interval, so ~rs_hash("hello",Map) = Cluster3~.
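To make the mapping concrete, here is a sketch in Erlang of how ~rs_hash()~ could work over this particular ~Map~; representing ~Map~ as a list of ~{Low, High, ClusterID}~ triples and using SHA-1 scaled onto the unit interval are assumptions made for illustration only:

#+BEGIN_SRC erlang
%% Sketch only: Map is a list of {Low, High, ClusterID} unit-interval
%% ranges; SHA-1, scaled into [0.0, 1.0), supplies the hash value.
example_map() ->
    [{0.00, 0.25, cluster1}, {0.25, 0.33, cluster4},
     {0.33, 0.58, cluster2}, {0.58, 0.66, cluster4},
     {0.66, 0.91, cluster3}, {0.91, 1.00, cluster4}].

unit_interval(FileName) ->
    <<Int:160/unsigned>> = crypto:hash(sha, FileName),
    Int / math:pow(2, 160).

rs_hash(FileName, Map) ->
    X = unit_interval(FileName),
    hd([ClusterID || {Low, High, ClusterID} <- Map, X >= Low, X < High]).
#+END_SRC

Under these assumptions, ~rs_hash(<<"foo">>, example_map())~ hashes to roughly 0.04 and returns ~cluster1~, consistent with the worked example above.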
* 4. An additional assumption: clients will want some control over file placement
We will continue to use the 4-cluster diagram from the previous
section.
When a client wishes to append data to a Machi file, the Machi server
chooses the file name & byte offset for storing that data. This
feature is why Machi's eventual consistency operating mode is so
nifty: it allows us to merge together files safely at any time because
any two client append operations will always write to different files
& different offsets.
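A toy sketch of that server-side assignment (all details here, including the single-file state, are assumptions for illustration; Machi's real sequencer is more involved):

#+BEGIN_SRC erlang
%% Toy sequencer loop: one process per server-assigned file hands out
%% the file name plus the next free byte offset for each append, so no
%% two append operations can ever collide on the same {File, Offset}.
sequencer_loop(File, NextOffset) ->
    receive
        {append, From, ChunkSize} ->
            From ! {assigned, File, NextOffset},
            sequencer_loop(File, NextOffset + ChunkSize)
    end.
#+END_SRC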
** Our new assumption: client control over initial file placement
The CoC management scheme may decide that files need to migrate to
other clusters. The reason could be storage load or I/O load
balancing. It could be because a cluster is being
decommissioned by its owners. There are many legitimate reasons why a
file that is initially created on cluster ID X has been moved to
cluster ID Y.
However, there are also legitimate reasons why the client would want
control over the choice of Machi cluster when the data is first
written. The single biggest reason is load balancing. Assuming that
the client (or the CoC management layer acting on behalf of the CoC
client) knows the current utilization across the participating Machi
clusters, then it may be very helpful to send new append() requests to
under-utilized clusters.
** Cool! Except for a couple of problems...
If the client wants to store some data
on Cluster2 and therefore sends an ~append("foo",CoolData)~ request to
the head of Cluster2 (which the client magically knows how to
contact), then the result will look something like
~{ok,"foo.s923.z47",ByteOffset}~.
Therefore, the file name "~foo.s923.z47~" must be used by any Machi
CoC client in order to retrieve the CoolData bytes.
*** Problem #1: "foo.s923.z47" doesn't always map via random slicing to Cluster2
... if we ignore the problem of "CoC files may be redistributed in the
future", then we still have a problem.
In fact, the value of ~rs_hash("foo.s923.z47",Map)~ is Cluster1.
*** Problem #2: We want CoC files to move around automatically
If the CoC client stores two pieces of information, the file name
"~foo.s923.z47~" and the Cluster ID Cluster2, then what happens when the
cluster-of-clusters system decides to rebalance files across all
machines? The CoC manager may decide to move our file to Cluster66.
How will a future CoC client retrieve CoolData when Cluster2
no longer stores the required file?
**** When migrating the file, we could put a "pointer" on Cluster2 that points to the new location, Cluster66.
This scheme is a bit brittle, even if all of the pointers are always
created 100% correctly. Also, if Cluster2 is ever unavailable, then
we cannot fetch our CoolData, even though the file moved away from
Cluster2 several years ago.
The scheme would also introduce extra round-trips to the servers
whenever we try to read a file whose most up-to-date cluster ID we
do not know.
**** We could store a pointer to file "foo.s923.z47"'s location in an LDAP database!
Or we could store it in Riak. Or in another, external database. We'd
rather not create such an external dependency, however. Furthermore,
we would also have the same problem of updating this external database
each time that a file is moved/rebalanced across the CoC.
* 5. Proposal: Break the opacity of Machi file names, slightly
Assuming that Machi keeps the scheme of creating file names (in
response to ~append()~ and ~sequencer_new_range()~ calls) based on a
predictable client-supplied prefix and an opaque suffix, e.g.,
~append("foo",CoolData) -> {ok,"foo.s923.z47",ByteOffset}.~
... then we propose that all CoC and Machi parties be aware of this
naming scheme, i.e. that Machi assigns file names based on:
~ClientSuppliedPrefix ++ "." ++ SomeOpaqueFileNameSuffix~
The Machi system doesn't care about the file name -- a Machi server
will treat the entire file name as an opaque thing. But this document
is called the "Name Game" for a reason!
What if the CoC client could peek inside of the opaque file name
suffix in order to remove (or add) the CoC location information that
we need?
** The details: legend
- ~T~ = the target CoC member/Cluster ID chosen at the time of ~append()~
- ~p~ = file prefix, chosen by the CoC client (This is exactly the Machi client-chosen file prefix).
- ~s.z~ = the Machi file server opaque file name suffix (Which we
happen to know is a combination of sequencer ID plus file serial
number. This implementation may change, for example, to use a
standard GUID string (rendered into ASCII hexadecimal digits) instead.)
- ~K~ = the CoC placement key
We use a variation of ~rs_hash()~, called ~rs_hash_with_float()~. The
former uses a string as its 1st argument; the latter uses a floating
point number as its 1st argument. Both return a cluster ID name
thingie.
#+BEGIN_SRC erlang
%% type specs, Erlang style
-spec rs_hash(string(), rs_hash:map()) -> rs_hash:cluster_id().
-spec rs_hash_with_float(float(), rs_hash:map()) -> rs_hash:cluster_id().
#+END_SRC
NOTE: Use of floating point terms is not required. For example,
integer arithmetic could be used, if using a sufficiently large
interval to create an even & smooth distribution of hashes across the
expected maximum number of clusters.
For example, if the maximum CoC cluster size would be 4,000 individual
Machi clusters, then a minimum of 12 bits of integer space is required
to assign one integer per Machi cluster. However, for load balancing
purposes, a finer grain of (for example) 100 integers per Machi
cluster would permit file migration to move increments of
approximately 1% of a single Machi cluster's storage capacity. A
minimum of 19 bits of hash space would be necessary to accommodate
these constraints.
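As a quick check of that arithmetic, here is a tiny Erlang sketch (the
function name is local to this example):
#+BEGIN_SRC erlang
%% Smallest B such that 2^B >= NumSlots.
%% bits_needed(4000)   -> 12   (one integer per Machi cluster)
%% bits_needed(400000) -> 19   (100 integers per Machi cluster)
bits_needed(NumSlots) when NumSlots >= 1 ->
    bits_needed(NumSlots, 1, 0).

bits_needed(NumSlots, Pow, B) when Pow >= NumSlots -> B;
bits_needed(NumSlots, Pow, B) -> bits_needed(NumSlots, Pow * 2, B + 1).
#+END_SRC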
** The details: CoC file write
1. CoC client chooses ~p~ and ~T~ (i.e., the file prefix & target cluster)
2. CoC client knows the CoC ~Map~
3. CoC client calculates a value ~K~ such that ~rs_hash_with_float(K,Map) = T~, using the method described below.
4. CoC client requests @ cluster ~T~: ~append_chunk(p,K,...) -> {ok,p.K.s.z,ByteOffset}~
5. CoC client stores/uses the file name ~p.K.s.z~.
** The details: CoC file read
1. CoC client knows the file name ~p.K.s.z~ and parses it to find
~K~'s value.
2. CoC client knows the CoC ~Map~
3. CoC client calculates ~rs_hash_with_float(K,Map) = T~
4. CoC client requests @ cluster ~T~: ~read_chunk(p.K.s.z,...) ->~ ... success!
** The details: calculating 'K', the CoC placement key
1. We know ~Map~, the current CoC mapping.
2. We look inside of ~Map~, and we find all of the unit interval ranges
that map to our desired target cluster ~T~. Let's call this list
~MapList = [Range1=(start,end],Range2=(start,end],...]~.
3. In our example, ~T=Cluster2~. The example ~Map~ contains a single
unit interval range for ~Cluster2~, ~[(0.33,0.58]]~.
4. Choose a uniformly random number ~r~ on the unit interval.
5. Calculate placement key ~K~ by mapping ~r~ onto the concatenation
of the CoC hash space range intervals in ~MapList~. For example,
if ~r=0.5~, then ~K = 0.33 + 0.5*(0.58-0.33) = 0.455~, which is
exactly in the middle of the ~(0.33,0.58]~ interval.
6. If necessary, encode ~K~ in a file name-friendly manner, e.g., convert it to hexadecimal ASCII digits to create file name ~p.K.s.z~.
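Here is the same procedure as a brief Erlang sketch. The
representation of ~Map~ as ~{Start, End, ClusterID}~ tuples and the use
of OTP's ~rand~ module are assumptions of this example, not Machi's
actual implementation:
#+BEGIN_SRC erlang
%% Sketch: choose K such that rs_hash_with_float(K, Map) = T.
calc_k(T, Map) ->
    %% Steps 1 & 2: all ranges that map to the target cluster T.
    MapList = [{S, E} || {S, E, Cluster} <- Map, Cluster =:= T],
    %% Step 4: uniformly random r on the unit interval.
    %% (OTP 18+'s rand module; older releases would use random:uniform/0.)
    R = rand:uniform(),
    %% Step 5: map r onto the concatenation of those ranges.
    Total = lists:sum([E - S || {S, E} <- MapList]),
    pick(R * Total, MapList).

pick(Offset, [{S, E} | Rest]) ->
    case Offset =< E - S of
        true  -> S + Offset;                  %% K lands in this range
        false -> pick(Offset - (E - S), Rest)
    end.
#+END_SRC
With the example ~Map~ and ~r = 0.5~, ~calc_k(cluster2, Map)~ returns
~0.455~, just as in step 5.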
** The details: calculating 'K', an alternative method
If the Law of Large Numbers and our random number generator do not give us the kind of smooth & even distribution of files across the CoC that we wish, then an alternative method of calculating ~K~ follows.
If each server in each Machi cluster keeps track of the CoC ~Map~ and also of all values of ~K~ for all files that it stores, then we can simply ask a cluster member to recommend a value of ~K~ that is least represented by existing files.
* 6. File migration (aka rebalancing/repartitioning/redistribution)
** What is "file migration"?
As discussed in section 5, the client can have good reason for wanting
to have some control of the initial location of the file within the
cluster. However, the cluster manager has an ongoing interest in
balancing resources throughout the lifetime of the file. Disks will
get full, hardware will change, read workload will fluctuate,
etc etc.
This document uses the word "migration" to describe moving data from
one CoC cluster to another. In other systems, this process is
described with words such as rebalancing, repartitioning, and
resharding. For Riak Core applications, the mechanisms are "handoff"
and "ring resizing". See the [[http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer][Hadoop file balancer]] for another example.
A simple variation of the Random Slicing hash algorithm can easily
accommodate Machi's need to migrate files without interfering with
availability. Machi's migration task is much simpler due to the
immutable nature of Machi file data.
** Change to Random Slicing
The map used by the Random Slicing hash algorithm needs a few simple
changes to make file migration straightforward.
- Add a "generation number", a strictly increasing number (similar to
a Machi cluster's "epoch number") that reflects the history of
changes made to the Random Slicing map
- Use a list of Random Slicing maps instead of a single map, keeping
  one map for each generation whose files may not all have been
  migrated out of it yet.
As an example:
#+CAPTION: Illustration of 'Map', using four Machi clusters
[[./migration-3to4.png]]
And the new Random Slicing map might look like this:
| Generation number | 7 |
|-------------------+------------|
| SubMap | 1 |
|-------------------+------------|
| Hash range | Cluster ID |
|-------------------+------------|
| 0.00 - 0.33 | Cluster1 |
| 0.33 - 0.66 | Cluster2 |
| 0.66 - 1.00 | Cluster3 |
|-------------------+------------|
| SubMap | 2 |
|-------------------+------------|
| Hash range | Cluster ID |
|-------------------+------------|
| 0.00 - 0.25 | Cluster1 |
| 0.25 - 0.33 | Cluster4 |
| 0.33 - 0.58 | Cluster2 |
| 0.58 - 0.66 | Cluster4 |
| 0.66 - 0.91 | Cluster3 |
| 0.91 - 1.00 | Cluster4 |
When a new Random Slicing map contains a single submap, then its use
is identical to the original Random Slicing algorithm. If the map
contains multiple submaps, then the access rules change a bit:
- Write operations always go to the latest/largest submap.
- Read operations attempt to read from all unique submaps.
- Skip searching submaps that refer to the same cluster ID.
- In this example, unit interval value 0.10 is mapped to Cluster1
by both submaps.
- Read from latest/largest submap to oldest/smallest submap.
- If not found in any submap, search a second time (to handle races
with file copying between submaps).
- If the requested data is found, optionally copy it directly to the
  latest submap (a variation of read repair that accelerates the
  migration process and can reduce the number of operations required
  to query servers in multiple submaps).
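A sketch of these read rules in Erlang: ~SubMaps~ (sorted newest
first), the ~{Generation, Map}~ pair representation, and the
caller-supplied ~ReadFun~ are all assumptions of the example, and
~rs_hash_with_float/2~ is the function specified earlier.
#+BEGIN_SRC erlang
%% Sketch.  SubMaps = [{Generation, Map}] sorted latest-first;
%% ReadFun(ClusterID, FileName) stands in for the real chain read.
read_with_submaps(K, FileName, SubMaps, ReadFun) ->
    try_read(FileName, unique_clusters(K, SubMaps), ReadFun).

%% The cluster that K maps to in each submap, newest first, skipping
%% submaps that refer to a cluster ID we already plan to search.
unique_clusters(_K, []) ->
    [];
unique_clusters(K, [{_Gen, Map} | Rest]) ->
    C = rs_hash_with_float(K, Map),
    [C | [C2 || C2 <- unique_clusters(K, Rest), C2 =/= C]].

try_read(_FileName, [], _ReadFun) ->
    %% Caller may retry once, to handle races with file copying.
    {error, not_found};
try_read(FileName, [Cluster | Rest], ReadFun) ->
    case ReadFun(Cluster, FileName) of
        {ok, Chunk}        -> {ok, Cluster, Chunk};
        {error, not_found} -> try_read(FileName, Rest, ReadFun)
    end.
#+END_SRC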
The cluster-of-clusters manager is responsible for:
- Managing the various generations of the CoC Random Slicing maps,
including distributing them to CoC clients.
- Managing the processes that are responsible for copying "cold" data,
i.e., file data that is not regularly accessed, to its new submap
location.
- Deleting a file from its old cluster once its migration to the new
  cluster is confirmed successful.
In example map #7, the CoC manager will copy files with unit interval
assignments in ~(0.25,0.33]~, ~(0.58,0.66]~, and ~(0.91,1.00]~ from their
old locations in cluster IDs Cluster1/2/3 to their new cluster,
Cluster4. When the CoC manager is satisfied that all such files have
been copied to Cluster4, then the CoC manager can create and
distribute a new map, such as:
| Generation number | 8 |
|-------------------+------------|
| SubMap | 1 |
|-------------------+------------|
| Hash range | Cluster ID |
|-------------------+------------|
| 0.00 - 0.25 | Cluster1 |
| 0.25 - 0.33 | Cluster4 |
| 0.33 - 0.58 | Cluster2 |
| 0.58 - 0.66 | Cluster4 |
| 0.66 - 0.91 | Cluster3 |
| 0.91 - 1.00 | Cluster4 |
One limitation of HibariDB that I haven't fixed is that it cannot
perform more than one migration at a time. The trade-off is that such
migration is difficult enough across two submaps; three or more
submaps become even more complicated.
Fortunately for Machi, its file data is immutable, so Machi can
easily manage many migrations in parallel: its submap list may be
several maps long, each one for an in-progress file migration.
* Acknowledgements
The source for the "migration-4.png" and "migration-3to4.png" images
come from the [[http://hibari.github.io/hibari-doc/images/migration-3to4.png][HibariDB documentation]].

View file

@ -1,103 +0,0 @@
#FIG 3.2 Produced by xfig version 3.2.5b
Landscape
Center
Inches
Letter
94.00
Single
-2
1200 2
6 7425 2700 8700 3300
4 0 0 50 -1 2 18 0.0000 4 195 645 7425 2895 After\001
4 0 0 50 -1 2 18 0.0000 4 255 1215 7425 3210 Migration\001
-6
6 7425 450 8700 1050
4 0 0 50 -1 2 18 0.0000 4 195 780 7425 675 Before\001
4 0 0 50 -1 2 18 0.0000 4 255 1215 7425 990 Migration\001
-6
6 75 1425 6900 2325
6 4875 1425 6900 2325
6 5400 1575 6375 2175
4 0 0 50 -1 2 14 0.0000 4 165 390 5400 1800 Not\001
4 0 0 50 -1 2 14 0.0000 4 225 945 5400 2100 migrated\001
-6
2 2 1 2 0 7 50 -1 -1 6.000 0 0 -1 0 0 5
4950 1500 6825 1500 6825 2250 4950 2250 4950 1500
-6
6 2475 1425 4500 2325
6 3000 1575 3975 2175
4 0 0 50 -1 2 14 0.0000 4 165 390 3000 1800 Not\001
4 0 0 50 -1 2 14 0.0000 4 225 945 3000 2100 migrated\001
-6
2 2 1 2 0 7 50 -1 -1 6.000 0 0 -1 0 0 5
2550 1500 4425 1500 4425 2250 2550 2250 2550 1500
-6
6 75 1425 2100 2325
6 600 1575 1575 2175
4 0 0 50 -1 2 14 0.0000 4 165 390 600 1800 Not\001
4 0 0 50 -1 2 14 0.0000 4 225 945 600 2100 migrated\001
-6
2 2 1 2 0 7 50 -1 -1 6.000 0 0 -1 0 0 5
150 1500 2025 1500 2025 2250 150 2250 150 1500
-6
-6
2 1 0 2 0 7 50 -1 -1 6.000 0 0 -1 1 0 2
1 1 3.00 60.00 120.00
150 4200 150 3750
2 1 0 2 0 7 50 -1 -1 6.000 0 0 -1 1 0 2
1 1 3.00 60.00 120.00
3750 4200 3750 3750
2 1 0 2 0 7 50 -1 -1 6.000 0 0 -1 1 0 2
1 1 3.00 60.00 120.00
2025 4200 2025 3750
2 1 0 2 0 7 50 -1 -1 6.000 0 0 -1 1 0 2
1 1 3.00 60.00 120.00
7350 4200 7350 3750
2 1 0 2 0 7 50 -1 -1 6.000 0 0 -1 1 0 2
1 1 3.00 60.00 120.00
5550 4200 5550 3750
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
2550 0 2550 1500 150 1500 150 0 2550 0
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
4950 0 4950 1500 2550 1500 2550 0 4950 0
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
7350 0 7350 1500 4950 1500 4950 0 7350 0
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
150 2250 2025 2250 2025 3750 150 3750 150 2250
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
4425 2250 4950 2250 4950 3750 4425 3750 4425 2250
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
4950 2250 6825 2250 6825 3750 4950 3750 4950 2250
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
6825 2250 7350 2250 7350 3750 6825 3750 6825 2250
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
2025 2250 2550 2250 2550 3750 2025 3750 2025 2250
2 2 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
2550 2250 4425 2250 4425 3750 2550 3750 2550 2250
4 0 0 50 -1 2 18 0.0000 4 195 480 75 4500 0.00\001
4 0 0 50 -1 2 18 0.0000 4 195 480 6825 4500 1.00\001
4 0 0 50 -1 2 18 0.0000 4 195 480 1725 4500 0.25\001
4 0 0 50 -1 2 18 0.0000 4 195 480 3525 4500 0.50\001
4 0 0 50 -1 2 18 0.0000 4 195 480 5250 4500 0.75\001
4 0 0 50 -1 2 14 0.0000 4 240 1710 450 1275 ~33% total keys\001
4 0 0 50 -1 2 14 0.0000 4 240 1710 2925 1275 ~33% total keys\001
4 0 0 50 -1 2 14 0.0000 4 240 1710 5250 1275 ~33% total keys\001
4 0 0 50 -1 2 14 0.0000 4 180 495 2025 3525 ~8%\001
4 0 0 50 -1 2 14 0.0000 4 240 1710 300 3525 ~25% total keys\001
4 0 0 50 -1 2 14 0.0000 4 240 1710 2625 3525 ~25% total keys\001
4 0 0 50 -1 2 14 0.0000 4 180 495 4425 3525 ~8%\001
4 0 0 50 -1 2 14 0.0000 4 240 1710 5025 3525 ~25% total keys\001
4 0 0 50 -1 2 14 0.0000 4 180 495 6825 3525 ~8%\001
4 0 0 50 -1 2 24 0.0000 4 270 195 2175 3075 4\001
4 0 0 50 -1 2 24 0.0000 4 270 195 4575 3075 4\001
4 0 0 50 -1 2 24 0.0000 4 270 195 6975 3075 4\001
4 0 0 50 -1 2 24 0.0000 4 270 1245 600 600 Chain1\001
4 0 0 50 -1 2 24 0.0000 4 270 1245 3000 600 Chain2\001
4 0 0 50 -1 2 24 0.0000 4 270 1245 5400 600 Chain3\001
4 0 0 50 -1 2 24 0.0000 4 270 285 2100 2625 C\001
4 0 0 50 -1 2 24 0.0000 4 270 285 4500 2625 C\001
4 0 0 50 -1 2 24 0.0000 4 270 285 6900 2625 C\001
4 0 0 50 -1 2 24 0.0000 4 270 1245 525 2850 Chain1\001
4 0 0 50 -1 2 24 0.0000 4 270 1245 2925 2850 Chain2\001
4 0 0 50 -1 2 24 0.0000 4 270 1245 5325 2850 Chain3\001
4 0 0 50 -1 2 18 0.0000 4 240 4350 1350 4875 Cluster locator, on the unit interval\001

Binary file not shown.

Before

Width:  |  Height:  |  Size: 7.6 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 7.4 KiB

View file

@ -1,481 +0,0 @@
-*- mode: org; -*-
#+TITLE: Machi cluster "name game" sketch
#+AUTHOR: Scott
#+STARTUP: lognotedone hidestars indent showall inlineimages
#+SEQ_TODO: TODO WORKING WAITING DONE
#+COMMENT: M-x visual-line-mode
#+COMMENT: Also, disable auto-fill-mode
* 1. "Name Games" with random-slicing style consistent hashing
Our goal: to distribute lots of files very evenly across a large
collection of individual, small Machi chains.
* 2. Assumptions
** Basic familiarity with Machi high level design and Machi's "projection"
The [[https://github.com/basho/machi/blob/master/doc/high-level-machi.pdf][Machi high level design document]] contains all of the basic
background assumed by the rest of this document.
** Analogy: "neighborhood : city :: Machi chain : Machi cluster"
Analogy: The word "machi" in Japanese means small town or
neighborhood. As the Tokyo Metropolitan Area is built from many
machis and smaller cities, therefore a big, partitioned file store can
be built out of many small Machi chains.
** Familiarity with the Machi chain concept
It's clear (I hope!) from
the [[https://github.com/basho/machi/blob/master/doc/high-level-machi.pdf][Machi high level design document]] that Machi alone does not support
any kind of file partitioning/distribution/sharding across multiple
small Machi chains. There must be another layer above a Machi chain to
provide such partitioning services.
Using the [[https://github.com/basho/machi/tree/master/prototype/demo-day-hack][cluster quick-and-dirty prototype]] as an
architecture sketch, let's now assume that we have ~n~ independent Machi
chains. We assume that each of these chains has the same
chain length in the nominal case, e.g. chain length of 3.
We wish to provide partitioned/distributed file storage
across all ~n~ chains. We call the entire collection of ~n~ Machi
chains a "cluster".
We may wish to have several types of Machi clusters. For example:
+ Chain length of 1 for "don't care if it gets lost,
store stuff very very cheaply" data.
+ Chain length of 2 for normal data.
+ Equivalent to quorum replication's reliability with 3 copies.
+ Chain length of 7 for critical, unreplaceable data.
+ Equivalent to quorum replication's reliability with 13 copies.
Each of these types of chains will have a name ~N~ in the
namespace. The role of the cluster namespace will be demonstrated in
Section 3 below.
** Continue an early assumption: a Machi chain is unaware of clustering
Let's continue with an assumption that an individual Machi chain
inside of a cluster is completely unaware of the cluster layer.
** The reader is familiar with the random slicing technique
I'd done something very-very-nearly-like-this for the Hibari database
6 years ago. But the Hibari technique was based on stuff I did at
Sendmail, Inc, in 2000, so this technique feels like old news to me.
{shrug}
The following section provides an illustrated example.
Very quickly, the random slicing algorithm is:
1. Hash a string onto the unit interval [0.0, 1.0).
2. Calculate ~h(unit interval point, Map) -> bin~, where ~Map~ divides
   the unit interval into bins (or partitions or shards).
Machi's adaptation is in step 1: we do not hash any strings. Instead, we
simply choose a number on the unit interval. This number is called
the "cluster locator number".
As described later in this doc, Machi file names are structured into
several components. One component of the file name contains the cluster
locator number; we use the number as-is for step 2 above.
*** For more information about Random Slicing
For a comprehensive description of random slicing, please see the
first two papers. For a quicker summary, please see the third
reference.
#+BEGIN_QUOTE
Reliable and Randomized Data Distribution Strategies for Large Scale Storage Systems
Alberto Miranda et al.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.226.5609
(short version, HIPC'11)
Random Slicing: Efficient and Scalable Data Placement for Large-Scale
Storage Systems
Alberto Miranda et al.
DOI: http://dx.doi.org/10.1145/2632230 (long version, ACM Transactions
on Storage, Vol. 10, No. 3, Article 9, 2014)
[[http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration][Hibari Sysadmin Guide, chain migration section]].
http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration
#+END_QUOTE
* 3. A simple illustration
We use a variation of the Random Slicing hash that we will call
~rs_hash_with_float()~. The Erlang-style function type is shown
below.
#+BEGIN_SRC erlang
%% type specs, Erlang-style
-spec rs_hash_with_float(float(), rs_hash:map()) -> rs_hash:chain_id().
#+END_SRC
I'm borrowing an illustration from the HibariDB documentation here,
but it fits my purposes quite well. (I am the original creator of that
image, and also the use license is compatible.)
#+CAPTION: Illustration of 'Map', using four Machi chains
[[./migration-4.png]]
Assume that we have a random slicing map called ~Map~. This particular
~Map~ maps the unit interval onto 4 Machi chains:
| Hash range | Chain ID |
|-------------+----------|
| 0.00 - 0.25 | Chain1 |
| 0.25 - 0.33 | Chain4 |
| 0.33 - 0.58 | Chain2 |
| 0.58 - 0.66 | Chain4 |
| 0.66 - 0.91 | Chain3 |
| 0.91 - 1.00 | Chain4 |
Assume that the system chooses a cluster locator of 0.05.
According to ~Map~, the value of
~rs_hash_with_float(0.05,Map) = Chain1~.
Similarly, ~rs_hash_with_float(0.26,Map) = Chain4~.
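One possible body for ~rs_hash_with_float/2~, assuming (as an
illustration only) that ~Map~ is a list of ~{Start, End, ChainID}~
tuples whose ~(Start, End]~ ranges cover the unit interval; the real
~Map~ data type may differ:
#+BEGIN_SRC erlang
%% Sketch only; walk the ranges until one contains Locator.
rs_hash_with_float(Locator, [{Start, End, ChainID} | _])
  when Locator > Start, Locator =< End ->
    ChainID;
rs_hash_with_float(Locator, [_ | Rest]) ->
    rs_hash_with_float(Locator, Rest).
#+END_SRC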
This example should look very similar to Hibari's technique.
The Hibari documentation has a brief photo illustration of how random
slicing works, see [[http://hibari.github.io/hibari-doc/hibari-sysadmin-guide.en.html#chain-migration][Hibari Sysadmin Guide, chain migration]].
* 4. Use of the cluster namespace: name separation plus chain type
Let us assume that the cluster framework provides several different types
of chains:
| Chain length | Namespace | Consistency Mode | Comment |
|--------------+--------------+------------------+----------------------------------|
| 3 | ~normal~ | eventual | Normal storage redundancy & cost |
| 2 | ~reduced~ | eventual | Reduced cost storage |
| 1 | ~risky~ | eventual | Really, really cheap storage |
| 7 | ~paranoid~ | eventual | Safety-critical storage |
| 3 | ~sequential~ | strong | Strong consistency |
|--------------+--------------+------------------+----------------------------------|
The client may want to choose the amount of redundancy that its
application requires: normal, reduced cost, or perhaps even a single
copy. The cluster namespace is used by the client to signal this
intention.
Further, the cluster administrators may wish to use the namespace to
provide separate storage for different applications. Jane's
application may use the namespace "jane-normal" and Bob's app uses
"bob-reduced". Administrators may definine separate groups of
chains on separate servers to serve these two applications.
* 5. In its lifetime, a file may be moved to different chains
The cluster management scheme may decide that files need to migrate to
other chains -- i.e., a file that was initially created on chain ID ~X~
may later be moved to chain ID ~Y~.
+ For storage load or I/O load balancing reasons.
+ Because a chain is being decommissioned by the sysadmin.
* 6. Floating point is not required ... it is merely convenient for explanation
NOTE: Use of floating point terms is not required. For example,
integer arithmetic could be used, if using a sufficiently large
interval to create an even & smooth distribution of hashes across the
expected maximum number of chains.
For example, if the maximum cluster size would be 4,000 individual
Machi chains, then a minimum of 12 bits of integer space is required
to assign one integer per Machi chain. However, for load balancing
purposes, a finer grain of (for example) 100 integers per Machi
chain would permit file migration to move increments of
approximately 1% of a single Machi chain's storage capacity. A
minimum of 12+7=19 bits of hash space would be necessary to accommodate
these constraints.
It is likely that Machi's final implementation will choose a 24 bit
integer (or perhaps 32 bits) to represent the cluster locator.
* 7. Proposal: Break the opacity of Machi file names, slightly.
Machi assigns file names based on:
~ClientSuppliedPrefix ++ "^" ++ SomeOpaqueFileNameSuffix~
What if some parts of the system could peek inside of the opaque file
name suffix in order to look at the cluster location information that
we might encode in the file name suffix?
We break the system into parts that speak two levels of protocols,
"high" and "low".
+ The high level protocol is used outside of the Machi cluster
+ The low level protocol is used inside of the Machi cluster
Both protocols are based on a Protocol Buffers specification and
implementation. Other protocols, such as HTTP, will be added later.
#+BEGIN_SRC
+-----------------------+
| Machi external client |
| e.g. Riak CS |
+-----------------------+
^
| Machi "high" API
| ProtoBuffs protocol Machi cluster boundary: outside
.........................................................................
| Machi cluster boundary: inside
v
+--------------------------+ +------------------------+
| Machi "high" API service | | Machi HTTP API service |
+--------------------------+ +------------------------+
^ |
| +------------------------+
v v
+------------------------+
| Cluster bridge service |
+------------------------+
^
| Machi "low" API
| ProtoBuffs protocol
+----------------------------------------+----+----+
| | | |
v v v v
+-------------------------+ ... other chains...
| Chain C1 (logical view) |
| +--------------+ |
| | FLU server 1 | |
| | +--------------+ |
| +--| FLU server 2 | |
| +--------------+ | In reality, API bridge talks directly
+-------------------------+ to each FLU server in a chain.
#+END_SRC
** The notation we use
- ~N~ = the cluster namespace, chosen by the client.
- ~p~ = file prefix, chosen by the client.
- ~L~ = the cluster locator (a number, type is implementation-dependent)
- ~Map~ = a mapping of cluster locators to chains
- ~T~ = the target chain ID/name
- ~u~ = a unique opaque file name suffix, e.g. a GUID string
- ~F~ = a Machi file name, i.e., a concatenation of ~p^L^N^u~
** The details: cluster file append
0. Cluster client chooses ~N~ and ~p~ (i.e., cluster namespace and
file prefix) and sends the append request to a Machi cluster member
via the Protocol Buffers "high" API.
1. Cluster bridge chooses ~T~ (i.e., target chain), based on criteria
such as disk utilization percentage.
2. Cluster bridge knows the cluster ~Map~ for namespace ~N~.
3. Cluster bridge chooses some cluster locator value ~L~ such that
~rs_hash_with_float(L,Map) = T~ (see algorithm below).
4. Cluster bridge sends its request to chain
~T~: ~append_chunk(p,L,N,...) -> {ok,p^L^N^u,ByteOffset}~
5. Cluster bridge forwards the reply tuple to the client.
6. Client stores/uses the file name ~F = p^L^N^u~.
** The details: Cluster file read
0. Cluster client sends the read request to a Machi cluster member via
the Protocol Buffers "high" API.
1. Cluster bridge parses the file name ~F~ to find
the values of ~L~ and ~N~ (recall, ~F = p^L^N^u~).
2. Cluster bridge knows the cluster ~Map~ for namespace ~N~.
3. Cluster bridge calculates ~rs_hash_with_float(L,Map) = T~
4. Cluster bridge sends request to chain ~T~:
~read_chunk(F,...) ->~ ... reply
5. Cluster bridge forwards the reply to the client.
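A sketch of composing and parsing ~F = p^L^N^u~, assuming (for
illustration only) that the locator ~L~ is rendered as a decimal
integer and that "^" never appears inside any component:
#+BEGIN_SRC erlang
%% Sketch: compose F = p^L^N^u ...
make_name(Prefix, Locator, Namespace, Unique) ->
    lists:flatten(io_lib:format("~s^~B^~s^~s",
                                [Prefix, Locator, Namespace, Unique])).

%% ... and parse F back into its components.
parse_name(FileName) ->
    [Prefix, LocStr, Namespace, Unique] = string:tokens(FileName, "^"),
    {Prefix, list_to_integer(LocStr), Namespace, Unique}.
#+END_SRC
For example, ~make_name("foo", 455, "normal", "u9j3")~ (all values
invented for illustration) yields ~"foo^455^normal^u9j3"~, and
~parse_name/1~ recovers the ~L~ and ~N~ needed for steps 1-3 of the
read path.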
** The details: calculating 'L' (the cluster locator number) to match a desired target chain
1. We know ~Map~, the current cluster mapping for a cluster namespace ~N~.
2. We look inside of ~Map~, and we find all of the unit interval ranges
that map to our desired target chain ~T~. Let's call this list
~MapList = [Range1=(start,end],Range2=(start,end],...]~.
3. In our example, ~T=Chain2~. The example ~Map~ contains a single
unit interval range for ~Chain2~, ~[(0.33,0.58]]~.
4. Choose a uniformly random number ~r~ on the unit interval.
5. Calculate the cluster locator ~L~ by mapping ~r~ onto the concatenation
of the cluster hash space range intervals in ~MapList~. For example,
if ~r=0.5~, then ~L = 0.33 + 0.5*(0.58-0.33) = 0.455~, which is
exactly in the middle of the ~(0.33,0.58]~ interval.
** A bit more about the cluster namespace's meaning and use
For use by Riak CS, for example, we'd likely start with the following
namespaces ... working our way down the list as we add new features
and/or re-implement existing CS features.
- "standard" = Chain length = 3, eventually consistency mode
- "reduced" = Chain length = 2, eventually consistency mode.
- "stanchion7" = Chain length = 7, strong consistency mode. Perhaps
use this namespace for the metadata required to re-implement the
operations that are performed by today's Stanchion application.
We want the cluster framework to:
- provide means of creating and managing
chains of different types, e.g., chain length, consistency mode.
- manage the mapping of cluster namespace
names to the chains in the system.
- provide query functions to map a cluster
namespace name to a cluster map,
e.g. ~get_cluster_latest_map("reduced") -> Map{generation=7,...}~.
* 8. File migration (a.k.a. rebalancing/repartitioning/resharding/redistribution)
** What is "migration"?
This section describes Machi's file migration. Other storage systems
call this process "rebalancing", "repartitioning", "resharding", or
"redistribution".
For Riak Core applications, it is called "handoff" and "ring resizing"
(depending on the context).
See also the [[http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer][Hadoop file balancer]] for another example of a data
migration process.
As discussed in section 5, the client can have good reason for wanting
to have some control of the initial location of the file within the
chain. However, the chain manager has an ongoing interest in
balancing resources throughout the lifetime of the file. Disks will
get full, hardware will change, read workload will fluctuate,
etc etc.
This document uses the word "migration" to describe moving data from
one Machi chain to another chain within a cluster system.
A simple variation of the Random Slicing hash algorithm can easily
accommodate Machi's need to migrate files without interfering with
availability. Machi's migration task is much simpler due to the
immutable nature of Machi file data.
** Change to Random Slicing
The map used by the Random Slicing hash algorithm needs a few simple
changes to make file migration straightforward.
- Add a "generation number", a strictly increasing number (similar to
a Machi chain's "epoch number") that reflects the history of
changes made to the Random Slicing map
- Use a list of Random Slicing maps instead of a single map, keeping
  one map for each generation whose files may not all have been
  migrated out of it yet.
As an example:
#+CAPTION: Illustration of 'Map', using four Machi chains
[[./migration-3to4.png]]
And the new Random Slicing map for some cluster namespace ~N~ might look
like this:
| Generation number / Namespace | 7 / reduced |
|-------------------------------+-------------|
| SubMap | 1 |
|-------------------------------+-------------|
| Hash range | Chain ID |
|-------------------------------+-------------|
| 0.00 - 0.33 | Chain1 |
| 0.33 - 0.66 | Chain2 |
| 0.66 - 1.00 | Chain3 |
|-------------------------------+-------------|
| SubMap | 2 |
|-------------------------------+-------------|
| Hash range | Chain ID |
|-------------------------------+-------------|
| 0.00 - 0.25 | Chain1 |
| 0.25 - 0.33 | Chain4 |
| 0.33 - 0.58 | Chain2 |
| 0.58 - 0.66 | Chain4 |
| 0.66 - 0.91 | Chain3 |
| 0.91 - 1.00 | Chain4 |
When a new Random Slicing map contains a single submap, then its use
is identical to the original Random Slicing algorithm. If the map
contains multiple submaps, then the access rules change a bit:
- Write operations always go to the newest/largest submap.
- Read operations attempt to read from all unique submaps.
- Skip searching submaps that refer to the same chain ID.
- In this example, unit interval value 0.10 is mapped to Chain1
by both submaps.
- Read from newest/largest submap to oldest/smallest submap.
- If not found in any submap, search a second time (to handle races
with file copying between submaps).
- If the requested data is found, optionally copy it directly to the
newest submap. (This is a variation of read repair (RR). RR here
accelerates the migration process and can reduce the number of
operations required to query servers in multiple submaps).
The cluster manager is responsible for:
- Managing the various generations of the cluster Random Slicing maps for
all namespaces.
- Distributing namespace maps to cluster bridges.
- Managing the processes that are responsible for copying "cold" data,
i.e., file data that is not regularly accessed, to its new submap
location.
- Deleting a file from its old chain once its migration to the new
  chain is confirmed successful.
In example map #7, the cluster manager will copy files with unit interval
assignments in ~(0.25,0.33]~, ~(0.58,0.66]~, and ~(0.91,1.00]~ from their
old locations in chain IDs Chain1/2/3 to their new chain,
Chain4. When the cluster manager is satisfied that all such files have
been copied to Chain4, then the cluster manager can create and
distribute a new map, such as:
| Generation number / Namespace | 8 / reduced |
|-------------------------------+-------------|
| SubMap | 1 |
|-------------------------------+-------------|
| Hash range | Chain ID |
|-------------------------------+-------------|
| 0.00 - 0.25 | Chain1 |
| 0.25 - 0.33 | Chain4 |
| 0.33 - 0.58 | Chain2 |
| 0.58 - 0.66 | Chain4 |
| 0.66 - 0.91 | Chain3 |
| 0.91 - 1.00 | Chain4 |
The HibariDB system performs data migrations in almost exactly this
manner. However, one important limitation of HibariDB is that it
cannot perform more than one migration at a time. HibariDB's data is
mutable, and mutation causes many problems when migrating data across
two submaps; three or more submaps were too complex to implement
quickly and correctly.
Fortunately for Machi, its file data is immutable, so Machi can
easily manage many migrations in parallel: its submap list may be
several maps long, each one for an in-progress file migration.
* 9. Other considerations for FLU/sequencer implementations
** Append to existing file when possible
The sequencer should always assign new offsets to the latest/newest
file for any prefix, as long as all of the following prerequisites are true:
- The epoch has not changed. (In AP mode, epoch change -> mandatory
file name suffix change.)
- The cluster locator number is stable.
- The latest file for prefix ~p~ is smaller than maximum file size for
a FLU's configuration.
The stability of the cluster locator number is an implementation detail that
must be managed by the cluster bridge.
Reuse of the same file is not possible if the bridge always chooses a
different cluster locator number ~L~ or if the client always uses a unique
file prefix ~p~. The latter is a sign of a misbehaved client; the
former is a sign of a poorly-implemented bridge.
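The prerequisite check might look like the following sketch, where the
record and field names are invented for illustration:
#+BEGIN_SRC erlang
%% Sketch; the record and its fields are illustrative assumptions.
-record(latest_file, {epoch, locator, size}).

may_reuse(#latest_file{epoch = E, locator = L, size = Size},
          CurEpoch, CurLocator, MaxFileSize) ->
    E =:= CurEpoch andalso        %% the epoch has not changed
        L =:= CurLocator andalso  %% the cluster locator is stable
        Size < MaxFileSize.       %% below the FLU's maximum file size
#+END_SRC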
* 10. Acknowledgments
The original source for the "migration-4.png" and "migration-3to4.png" images
come from the [[http://hibari.github.io/hibari-doc/images/migration-3to4.png][HibariDB documentation]].

View file

@ -1,30 +0,0 @@
# Clone and compile Machi
Clone the Machi source repo and compile the source and test code. Run
the following commands at your login shell:
cd /tmp
git clone https://github.com/basho/machi.git
cd machi
git checkout master
make # or 'gmake' if GNU make uses an alternate name
Then run the unit test suite. This may take up to two minutes or so
to finish.
make test
At the end, the test suite should report that all tests passed. The
actual number of tests shown in the "All `X` tests passed" line may be
different from the example below.
[... many lines omitted ...]
module 'event_logger'
module 'chain_mgr_legacy'
=======================================================
All 90 tests passed.
If you had a test failure, a likely cause may be a limit on the number
of file descriptors available to your user process. (Recent releases
of OS X have a default limit of 1024 file descriptors, which may be
too low.) The output of `ulimit -n` will tell you your file descriptor limit.

View file

@ -1,38 +0,0 @@
## Machi developer environment prerequisites
1. Machi requires a 64-bit variant of UNIX: an OS X, FreeBSD, Linux, or
   Solaris machine with a standard developer environment for C and C++
   applications (64-bit versions).
2. You'll need the `git` source management utility.
3. You'll need the 64-bit Erlang/OTP 17 runtime environment. Please
don't use earlier or later versions until we have a chance to fix
the compilation warnings that versions R16B and 18 will trigger.
Also, please verify that you are not using a 32-bit Erlang/OTP
runtime package.
For `git` and the Erlang runtime, please use your OS-specific
package manager to install these. If your package manager doesn't
have 64-bit Erlang/OTP version 17 available, then we recommend using the
[precompiled packages available at Erlang Solutions](https://www.erlang-solutions.com/resources/download.html).
Also, please verify that you have enough file descriptors available to
your user processes. The output of `ulimit -n` should report at least
4,000 file descriptors available. If your limit is lower (a frequent
problem for OS X users), please increase it to at least 4,000.
# Using Vagrant to set up a developer environment for Machi
The Machi source directory contains a `Vagrantfile` for creating an
Ubuntu Linux-based virtual machine for compiling and running Machi.
This file is in the
[$SRC_TOP/priv/humming-consensus-demo.vagrant](../priv/humming-consensus-demo.vagrant)
directory.
If used as-is, the virtual machine specification is modest.
* 1 virtual CPU
* 512MB virtual memory
* 768MB swap space
* 79GB sparse virtual disk image. After installing prerequisites and
compiling Machi, the root file system uses approximately 2.7 GBytes.

View file

@ -1,617 +0,0 @@
FLU and Chain Life Cycle Management -*- mode: org; -*-
#+STARTUP: lognotedone hidestars indent showall inlineimages
#+COMMENT: To generate the outline section: egrep '^\*[*]* ' doc/flu-and-chain-lifecycle.org | egrep -v '^\* Outline' | sed -e 's/^\*\*\* / + /' -e 's/^\*\* / + /' -e 's/^\* /+ /'
* FLU and Chain Life Cycle Management
In an ideal world, we (the Machi development team) would have a full
vision of how Machi would be managed, down to the last detail of
beautiful CLI character and network protocol bit. Our vision isn't
complete yet, so we are working one small step at a time.
* Outline
+ FLU and Chain Life Cycle Management
+ Terminology review
+ Terminology: Machi run-time components/services/thingies
+ Terminology: Machi chain data structures
+ Terminology: Machi cluster data structures
+ Overview of administrative life cycles
+ Cluster administrative life cycle
+ Chain administrative life cycle
+ FLU server administrative life cycle
+ Quick admin: declarative management of Machi FLU and chain life cycles
+ Quick admin uses the "rc.d" config scheme for life cycle management
+ Quick admin's declarative "language": an Erlang-flavored AST
+ Term 'host': define a new host for FLU services
+ Term 'flu': define a new FLU
+ Term 'chain': define or reconfigure a chain
+ Executing quick admin AST files via the 'machi-admin' utility
+ Checking the syntax of an AST file
+ Executing an AST file
+ Using quick admin to manage multiple machines
+ The "rc.d" style configuration file scheme
+ Riak had a similar configuration file editing problem (and its solution)
+ Machi's "rc.d" file scheme.
+ FLU life cycle management using "rc.d" style files
+ The key configuration components of a FLU
+ Chain life cycle management using "rc.d" style files
+ The key configuration components of a chain
* Terminology review
** Terminology: Machi run-time components/services/thingies
+ FLU: a basic Machi server, responsible for managing a collection of
files.
+ Chain: a small collection of FLUs that maintain replicas of the same
collection of files. A chain is usually small, 1-3 servers, where
more than 3 would be used only in cases when availability of
certain data is critical despite failures of several machines.
+ The length of a chain is directly proportional to its
replication factor, e.g., a chain length=3 will maintain
(nominally) 3 replicas of each file.
+ To maintain file availability when ~F~ failures have occurred, a
chain must be at least ~F+1~ members long. (In comparison, the
quorum replication technique requires ~2F+1~ members in the
general case.)
+ Cluster: A collection of Machi chains that are used to store files
in a horizontally partitioned/sharded/distributed manner.
** Terminology: Machi chain data structures
+ Projection: used to define a single chain: the chain's consistency
mode (strong or eventual consistency), all members (from an
administrative point of view), all active members (from a runtime,
automatically-managed point of view), repairing/file-syncing members
(also runtime, auto-managed), and so on
+ Epoch: A version number of a projection. The epoch number is used
by both clients & servers to manage transitions from one projection
to another, e.g., when the chain is temporarily shortened by the
failure of a member FLU server.
** Terminology: Machi cluster data structures
+ Namespace: A collection of human-friendly names that are mapped to
groups of Machi chains that provide the same type of storage
service: consistency mode, replication policy, etc.
+ A single namespace name, e.g. ~normal-ec~, is paired with a single
cluster map (see below).
+ Example: ~normal-ec~ might be a collection of Machi chains in
eventually-consistent mode that are of length=3.
+ Example: ~risky-ec~ might be a collection of Machi chains in
eventually-consistent mode that are of length=1.
+ Example: ~mgmt-critical~ might be a collection of Machi chains in
strongly-consistent mode that are of length=7.
+ Cluster map: Encodes the rules which partition/shard/distribute
the files stored in a particular namespace across a group of chains
that collectively store the namespace's files.
+ Chain weight: A value assigned to each chain within a cluster map
structure that defines the relative storage capacity of a chain
within the namespace. For example, a chain weight=150 has 50% more
capacity than a chain weight=100.
+ Cluster map epoch: The version number assigned to a cluster map.
* Overview of administrative life cycles
** Cluster administrative life cycle
+ Cluster is first created
+ Adds namespaces (e.g. consistency policy + chain length policy) to
the cluster
+ Chains are added to/removed from a namespace to increase/decrease the
namespace's storage capacity.
+ Adjust chain weights within a namespace, e.g., to shift files
within the namespace to chains with greater storage capacity
resources and/or runtime I/O resources.
A cluster "file migration" is the process of moving files from one
namespace member chain to another for purposes of shifting &
re-balancing storage capacity and/or runtime I/O capacity.
** Chain administrative life cycle
+ A chain is created with an initial FLU membership list.
+ Chain may be administratively modified zero or more times to
add/remove member FLU servers.
+ A chain may be decommissioned.
See also: http://basho.github.io/machi/edoc/machi_lifecycle_mgr.html
** FLU server administrative life cycle
+ A FLU is created after an administrator chooses the FLU's runtime
  location: which machine/virtual machine, IP address and TCP port
  allocation, etc.
+ An unassigned FLU may be added to a chain by chain administrative
policy.
+ A FLU that is assigned to a chain may be removed from that chain by
chain administrative policy.
+ In the current implementation, the FLU's Erlang processes will be
halted. Then the FLU's data and metadata files will be moved to
another area of the disk for safekeeping. Later, a "garbage
collection" process can be used for reclaiming disk space used by
halted FLU servers.
See also: http://basho.github.io/machi/edoc/machi_lifecycle_mgr.html
* Quick admin: declarative management of Machi FLU and chain life cycles
The "quick admin" scheme is a temporary (?) tool for managing Machi
FLU server and chain life cycles in a declarative manner. The API is
described in this section.
** Quick admin uses the "rc.d" config scheme for life cycle management
As described at the top of
http://basho.github.io/machi/edoc/machi_lifecycle_mgr.html, the "rc.d"
config files do not manage "policy". "Policy" is doing the right
thing with a Machi cluster from a systems administrator's
point of view. The "rc.d" config files can only implement decisions
made according to policy.
The "quick admin" tool is a first attempt at automating policy
decisions in a safe way (we hope) that is also easy to implement (we
hope) with a variety of systems management tools, e.g. Chef, Puppet,
Ansible, Saltstack, or plain-old-human-at-a-keyboard.
** Quick admin's declarative "language": an Erlang-flavored AST
The "language" that an administrator uses to express desired policy
changes is not (yet) a true language. As a quick implementation hack,
the current language is an Erlang-flavored abstract syntax tree
(AST). The tree isn't very deep, either, frequently just one
element tall. (Not much of a tree, is it?)
There are three terms in the language currently:
+ ~host~, define a new host that can execute FLU servers
+ ~flu~, define a new FLU
+ ~chain~, define a new chain or re-configure an existing chain with
the same name
*** Term 'host': define a new host for FLU services
In this context, a host is a machine, virtual machine, or container
that can execute the Machi application and can therefore provide FLU
services, i.e. file service, Humming Consensus management.
Two formats may be used to define a new host:
#+BEGIN_SRC
{host, Name, Props}.
{host, Name, AdminI, ClientI, Props}.
#+END_SRC
The shorter tuple is shorthand notation for the latter. If the
shorthand form is used, then it will be converted automatically to the
long form as:
#+BEGIN_SRC
{host, Name, AdminI=Name, ClientI=Name, Props}.
#+END_SRC
Type information, description, and restrictions:
+ ~Name::string()~ The ~Name~ attribute must be unique. Note that it
  is possible to define the same host twice, once using a DNS hostname
  and once using an IP address. The user must avoid this
  double-definition because it is not enforced by quick admin.
+ The ~Name~ field is used for cross-reference purposes with other
terms, e.g., ~flu~ and ~chain~.
+ There is no syntax yet for removing a host definition.
+ ~AdminI::string()~ A DNS hostname or IP address for cluster
administration purposes, e.g. SSH access.
+ This field is unused at the present time.
+ ~ClientI::string()~ A DNS hostname or IP address for Machi's client
protocol access, e.g., Protocol Buffers network API service.
+ This field is unused at the present time.
+ ~props::proplist()~ is an Erlang-style property list for specifying
additional configuration options, debugging information, sysadmin
comments, etc.
+ A full-featured admin tool should also include managing several
other aspects of configuration related to a "host". For example,
for any single IP address, quick admin assumes that there will be
exactly one Erlang VM that is running the Machi application. Of
course, it is possible to have dozens of Erlang VMs on the same
(let's assume for clarity) hardware machine and all running Machi
... but there are additional aspects of such a machine that quick
admin does not account for
+ multiple IP addresses per machine
+ multiple Machi package installation paths
+ multiple Machi config files (e.g. cuttlefish config, ~etc.conf~,
~vm.args~)
+ multiple data directories/file system mount points
+ It is also a management problem for quick admin when a single
  Machi package on a machine must take advantage of bulk data
  storage spread across multiple file system mount points.
+ multiple Erlang VM host names, required for distributed Erlang,
which is used for communication with ~machi~ and ~machi-admin~
command line utilities.
+ and others....
*** Term 'flu': define a new FLU
A new FLU is defined relative to a previously-defined ~host~ entity;
an exception will be thrown if the ~host~ cannot be cross-referenced.
#+BEGIN_SRC
{flu, Name, HostName, Port, Props}
#+END_SRC
Type information, description, and restrictions:
+ ~Name::atom()~ The name of the FLU, as a human-friendly name and
also for internal management use; please note the ~atom()~ type.
This name must be unique.
+ The ~Name~ field is used for cross-reference purposes with the
~chain~ term.
+ There is no syntax yet for removing a FLU definition.
+ ~Hostname::string()~ The cross-reference name of the ~host~ that
this FLU should run on.
+ ~Port::non_neg_integer()~ The TCP port used by this FLU server's
Protocol Buffers network API listener service
+ ~props::proplist()~ is an Erlang-style property list for specifying
additional configuration options, debugging information, sysadmin
comments, etc.
*** Term 'chain': define or reconfigure a chain
A chain is defined relative to zero or more previously-defined ~flu~
entities; an exception will be thrown if any ~flu~ cannot be
cross-referenced.
Two formats may be used to define/reconfigure a chain:
#+BEGIN_SRC
{chain, Name, FullList, Props}.
{chain, Name, CMode, FullList, Witnesses, Props}.
#+END_SRC
The shorter tuple is shorthand notation for the latter. If the
shorthand form is used, then it will be converted automatically to the
long form as:
#+BEGIN_SRC
{chain, Name, ap_mode, FullList, [], Props}.
#+END_SRC
Type information, description, and restrictions:
+ ~Name::atom()~ The name of the chain, as a human-friendly name and
also for internal management use; please note the ~atom()~ type.
This name must be unique.
+ There is no syntax yet for removing a chain definition.
+ ~CMode::'ap_mode'|'cp_mode'~ Defines the consistency mode of the
chain, either eventual consistency or strong consistency,
respectively.
+ A chain cannot change consistency mode, e.g., from
strong~->~eventual consistency.
+ ~FullList::list(atom())~ Specifies the list of full-service FLU
servers, i.e. servers that provide file data & metadata services as
well as Humming Consensus. Each atom in the list must
cross-reference with a previously defined ~flu~; an exception will
be thrown if any ~flu~ cannot be cross-referenced.
+ ~Witnesses::list(atom())~ Specifies the list of witness-only
servers, i.e. servers that only participate in Humming Consensus.
Each atom in the list must cross-reference with a previously defined
~flu~; an exception will be thrown if any ~flu~ cannot be
cross-referenced.
+ This list must be empty for eventual consistency chains.
+ ~props::proplist()~ is an Erlang-style property list for specifying
additional configuration options, debugging information, sysadmin
comments, etc.
+ If this term specifies a new ~chain~ name, then all of the member
FLU servers (full & witness types) will be bootstrapped to a
starting configuration.
+ If this term specifies a previously-defined ~chain~ name, then all
of the member FLU servers (full & witness types, respectively) will
be adjusted to add or remove members, as appropriate.
+ Any FLU servers added to either list must not be assigned to any
other chain, or they must be a member of this specific chain.
+ Any FLU servers removed from either list will be halted.
(See the "FLU server administrative life cycle" section above.)
** Executing quick admin AST files via the 'machi-admin' utility
Examples of quick admin AST files can be found in the
~priv/quick-admin/examples~ directory. Below is an example that will
define a new host ( ~"localhost"~ ), three new FLU servers ( ~f1~ & ~f2~
and ~f3~ ), and an eventually consistent chain ( ~c1~ ) that uses the new
FLU servers:
#+BEGIN_SRC
{host, "localhost", []}.
{flu,f1,"localhost",20401,[]}.
{flu,f2,"localhost",20402,[]}.
{flu,f3,"localhost",20403,[]}.
{chain,c1,[f1,f2,f3],[]}.
#+END_SRC
*** Checking the syntax of an AST file
Given an AST config file, ~/path/to/ast/file~, its basic syntax and
correctness can be checked without executing it.
#+BEGIN_SRC
./rel/machi/bin/machi-admin quick-admin-check /path/to/ast/file
#+END_SRC
+ The utility will exit with status zero and output ~ok~ if the syntax
and proposed configuration appear to be correct.
+ If there is an error, the utility will exit with status one, and an
error message will be printed.
*** Executing an AST file
Given an AST config file, ~/path/to/ast/file~, it can be executed
using the command:
#+BEGIN_SRC
./rel/machi/bin/machi-admin quick-admin-apply /path/to/ast/file RelativeHost
#+END_SRC
... where the last argument, ~RelativeHost~, should be the exact
spelling of one of the previously defined AST ~host~ entities,
*and also* is the same host that the ~machi-admin~ utility is being
executed on.
Restrictions and warnings:
+ This is alpha quality software.
+ There is no "undo".
+ Of course there is, but you need to resort to doing things like
using ~machi attach~ to attach to the server's CLI to then execute
magic Erlang incantations to stop FLUs, unconfigure chains, etc.
+ Oh, and delete some files with magic paths, also.
** Using quick admin to manage multiple machines
A quick sketch follows:
1. Create the AST file to specify all of the changes that you wish to
make to all hosts, FLUs, and/or chains, e.g., ~/tmp/ast.txt~.
2. Check the basic syntax with the ~quick-admin-check~ argument to
~machi-admin~.
3. If the syntax is good, then copy ~/tmp/ast.txt~ to all hosts in the
cluster, using the same path, ~/tmp/ast.txt~.
4. For each machine in the cluster, run:
#+BEGIN_SRC
./rel/machi/bin/machi-admin quick-admin-apply /tmp/ast.txt RelativeHost
#+END_SRC
... where RelativeHost is the AST ~host~ name of the machine that you
are executing the ~machi-admin~ command on. The command should
succeed, exiting with status 0 and printing the string ~ok~.
Finally, for each machine in the cluster, a listing of all files in
the directory ~rel/machi/etc/quick-admin-archive~ should show exactly
the same files, one for each time that ~quick-admin-apply~ has been
run successfully on that machine.
* The "rc.d" style configuration file scheme
This configuration scheme is inspired by BSD UNIX's ~init(8)~ process
manager's configuration style, called "rc.d" after the name of the
directory where these files are stored, ~/etc/rc.d~. The ~init~
process is responsible for (among other things) starting UNIX
processes at machine boot time and stopping them when the machine is
shut down.
The original scheme used by ~init~ to start processes at boot time was
a single Bourne shell script called ~/etc/rc~. When a new software
package was installed that required a daemon to be started at boot
time, text was added to the ~/etc/rc~ file. Uninstalling packages was
much trickier, because it meant removing lines from a file that
*is a computer program (run by the Bourne shell, a Turing-complete
programming language)*. Error-free editing of the ~/etc/rc~ script
was not possible in all cases.
Later, ~init~'s configuration was split into a few master Bourne shell
scripts and a subdirectory, ~/etc/rc.d~. The subdirectory contained
shell scripts that were responsible for boot time starting of a single
daemon or service, e.g. NFS or an HTTP server. When a new software
package was added, a new file was added to the ~rc.d~ subdirectory.
When a package was removed, the corresponding file in ~rc.d~ was
removed. With this simple scheme, addition & removal of boot time
scripts was vastly simplified.
** Riak had a similar configuration file editing problem (and its solution)
Another software product from Basho Technologies, Riak, had a similar
configuration file editing problem. One file in particular,
~app.config~, had a syntax that made it difficult both for human
systems administrators and also computer programs to edit the file in
a syntactically correct manner.
Later releases of Riak switched to an alternative configuration file
format, one inspired by the BSD UNIX ~sysctl(8)~ utility and
~sysctl.conf(5)~ file syntax. The ~sysctl.conf~ format is much easier
for computer programs to manage when adding items. Removing items is
not 100% simple, however: the correct lines must be identified and then
removed (e.g. with Perl, a text editor, or a combination of ~grep -v~
and ~mv~), and removing any comment lines that "belong" to the removed
config item(s) is not easy for a 1-line shell script to do 100%
correctly.
Machi will use the ~sysctl.conf~ style configuration for some
application configuration variables. However, adding & removing FLUs
and chains will be managed using the "rc.d" style because of the
"rc.d" scheme's simplicity and tolerance of mistakes by administrators
(human or computer).
** Machi's "rc.d" file scheme.
Machi will use a single subdirectory that will contain configuration
files for some life cycle management task, e.g. a single FLU or a
single chain.
The contents of the file should be a single Erlang term, serialized in
ASCII form as an Erlang source code statement, i.e. a single Erlang term
~T~ that is formatted by ~io:format("~w.",[T])~. This file must be
parseable by the Erlang function ~file:consult()~.
Later versions of Machi may change the file format to be more familiar
to administrators who are unaccustomed to Erlang language syntax.
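For example, a round trip through that format might look like this
sketch (the function names, path, and term are arbitrary):
#+BEGIN_SRC
%% Write a single term T in the "~w." format (the trailing newline
%% is cosmetic), then read it back with file:consult/1.
write_rcd(Path, T) ->
    ok = file:write_file(Path, io_lib:format("~w.~n", [T])).

read_rcd(Path) ->
    {ok, [T]} = file:consult(Path),
    T.
#+END_SRC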
** FLU life cycle management using "rc.d" style files
*** The key configuration components of a FLU
1. The machine (or virtual machine) to run it on.
2. The Machi software package's artifacts to execute.
3. The disk device(s) used to store Machi file data & metadata, "rc.d"
style config files, etc.
4. The name, IP address and TCP port assigned to the FLU service.
5. Its chain assignment.
Notes:
+ Items 1-3 are currently outside of the scope of this life cycle
document. We assume that human administrators know how to do these
things.
+ Item 4's properties are explicitly managed by a FLU-defining "rc.d"
style config file.
+ Item 5 is managed by the chain life cycle management system.
Here is an example of a properly formatted FLU config file:
#+BEGIN_SRC
{p_srvr,f1,machi_flu1_client,"192.168.72.23",20401,[]}.
#+END_SRC
... which corresponds to the following Erlang record definition:
#+BEGIN_SRC
-record(p_srvr, {
name :: atom(),
proto_mod = 'machi_flu1_client' :: atom(), % Module name
address :: term(), % Protocol-specific
port :: term(), % Protocol-specific
props = [] :: list() % proplist for other related info
}).
#+END_SRC
+ ~name~ is ~f1~. This is the name of the FLU. This name should be
unique over the lifetime of the administrative domain and thus
managed by external policy. This name must be the same as the name
of the config file that defines the FLU.
+ ~proto_mod~ is used for internal management purposes and should be
considered a mandatory constant.
+ ~address~ is "192.168.72.23". The DNS hostname or IP address used
by other servers to communicate with this FLU. This must be a valid
IP address, previously assigned to this machine/VM using the
appropriate operating system-specific procedure.
+ ~port~ is TCP port 20401. The TCP port number that the FLU listens
  on for incoming Protocol Buffers-serialized communication. This TCP
port must not be in use (now or in the future) by another Machi FLU
or any other process running on this machine/VM.
+ ~props~ is an Erlang-style property list for specifying additional
configuration options, debugging information, sysadmin comments,
etc.
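For illustration, a sanity check of such a file against the rules
above might look like the following sketch. The function name
~check_flu_config~ and its path argument are hypothetical, not part
of Machi's API.
#+BEGIN_SRC erlang
%% Hypothetical sketch: read a FLU "rc.d" style config file and
%% check the rules described above.
check_flu_config(Path) ->
    {ok, [{p_srvr, Name, ProtoMod, Address, Port, Props}]} =
        file:consult(Path),
    %% The FLU name must match the config file's name.
    Name = list_to_atom(filename:basename(Path)),
    machi_flu1_client = ProtoMod,       % mandatory constant
    true = is_list(Address),            % hostname or IP address string
    true = is_integer(Port) andalso Port > 0 andalso Port < 65536,
    true = is_list(Props),              % proplist
    ok.
#+END_SRC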
** Chain life cycle management using "rc.d" style files
Unlike FLUs, chains have a self-management aspect that makes a chain
life cycle different from a single FLU server. Machi's chains are
self-managing, via Humming Consensus; see the
https://github.com/basho/machi/tree/master/doc/ directory for much
more detail about Humming Consensus. After FLUs have received their
initial chain configuration for Humming Consensus, the FLUs will
manage the chain (and each other) by themselves.
However, Humming Consensus does not handle three chain management
problems:
1. Specifying the very first chain configuration,
2. Altering the membership of the chain (i.e. adding/removing FLUs
from the chain),
3. Stopping the chain permanently.
A chain "rc.d" file will only be used to bootstrap a newly-defined FLU
server. It's like a piece of glue information to introduce the new
FLU to the Humming Consensus group that is managing the chain's
dynamic state (e.g. which members are up or down). In all other
respects, chain config files are ignored by life cycle management code.
However, to mimic the life cycle of the FLU server's "rc.d" config
files, a chain's "rc.d" file is not deleted until the chain has been
decommissioned (i.e. defined with length=0).
*** The key configuration components of a chain
1. The name of the chain.
2. Consistency mode: eventually consistent or strongly consistent.
3. The membership list of all FLU servers in the chain.
+ Remember, all servers in a single chain will manage full replicas
of the same collection of Machi files.
4. If the chain is defined to use strongly consistent mode, then a
list of "witness servers" may also be defined. See the
[https://github.com/basho/machi/tree/master/doc/] documentation for
more information on witness servers.
+ The witness list must be empty for all chains in eventual
consistency mode.
Here is an example of a properly formatted chain config file:
#+BEGIN_SRC
{chain_def_v1,c1,ap_mode,
[{p_srvr,f1,machi_flu1_client,"localhost",20401,[]},
{p_srvr,f2,machi_flu1_client,"localhost",20402,[]},
{p_srvr,f3,machi_flu1_client,"localhost",20403,[]}],
[],[],[],
[f1,f2,f3],
[],[]}.
#+END_SRC
... which corresponds to the following Erlang record definition:
#+BEGIN_SRC
-record(chain_def_v1, {
name :: atom(), % chain name
mode :: 'ap_mode' | 'cp_mode',
full = [] :: [p_srvr()],
witnesses = [] :: [p_srvr()],
old_full = [] :: [atom()], % guard against some races
old_witnesses=[] :: [atom()], % guard against some races
local_run = [] :: [atom()], % must be tailored to each machine!
local_stop = [] :: [atom()], % must be tailored to each machine!
props = [] :: list() % proplist for other related info
}).
#+END_SRC
+ ~name~ is ~c1~, the name of the chain. This name should be unique
over the lifetime of the administrative domain and thus managed by
external policy. This name must be the same as the name of the
config file that defines the chain.
+ ~mode~ is ~ap_mode~, an internal code symbol for eventual
consistency mode.
+ ~full~ is a list of Erlang ~#p_srvr{}~ records for full-service
members of the chain, i.e., providing Machi file data & metadata
storage services.
+ ~witnesses~ is a list of Erlang ~#p_srvr{}~ records for witness-only
FLU servers, i.e., providing only Humming Consensus service.
+ The next four fields are used for internal management only.
+ ~props~ is an Erlang-style property list for specifying additional
configuration options, debugging information, sysadmin comments,
etc.
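For illustration, an analogous sanity check for a chain config file
might look like this sketch; again, the function name and path are
hypothetical, not part of Machi's API.
#+BEGIN_SRC erlang
%% Hypothetical sketch: read a chain "rc.d" style config file and
%% check the witness rule described above.
check_chain_config(Path) ->
    {ok, [{chain_def_v1, Name, Mode, Full, Witnesses,
           _OldFull, _OldWitnesses, _LocalRun, _LocalStop, _Props}]} =
        file:consult(Path),
    %% The chain name must match the config file's name.
    Name = list_to_atom(filename:basename(Path)),
    true = lists:all(fun(P) -> element(1, P) =:= p_srvr end, Full),
    case Mode of
        ap_mode -> [] = Witnesses;  % witnesses must be empty in AP mode
        cp_mode -> ok
    end,
    ok.
#+END_SRC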
@ -1,372 +0,0 @@
# Table of contents
* [Hands-on experiments with Machi and Humming Consensus](#hands-on)
* [Using the network partition simulator and convergence demo test code](#partition-simulator)
<a name="hands-on">
# Hands-on experiments with Machi and Humming Consensus
## Prerequisites
Please refer to the
[Machi development environment prerequisites doc](./dev-prerequisites.md)
for Machi developer environment prerequisites.
If you do not have an Erlang/OTP runtime system available, but you do
have the [Vagrant](https://www.vagrantup.com/) virtual machine
manager available, then please refer to the instructions in the
prerequisites doc for using Vagrant.
<a name="clone-compile">
## Clone and compile the code
Please see the
[Machi 'clone and compile' doc](./dev-clone-compile.md)
for the short list of steps required to fetch the Machi source code
from GitHub and to compile &amp; test Machi.
## Running three Machi instances on a single machine
All of the commands that should be run at your login shell (e.g. Bash,
c-shell) can be cut-and-pasted directly from this document into your
login shell prompt.
Run the following command:
make stagedevrel
This will create a directory structure like this:
    dev
    |-- dev1  ... stand-alone Machi app + subdirectories
    |-- dev2  ... stand-alone Machi app + subdirectories
    `-- dev3  ... stand-alone Machi app + subdirectories
Each of the `dev/dev1`, `dev/dev2`, and `dev/dev3` are stand-alone
application instances of Machi and can be run independently of each
other on the same machine. This demo will use all three.
The lifecycle management utilities for Machi are currently a bit
immature. They assume that each Machi server runs on a host with a
unique hostname; there is no built-in flexibility yet for easily
running multiple Machi instances on the same machine. To continue
with the demo, we need to use `sudo` or `su` to obtain superuser
privileges to edit the `/etc/hosts` file.
Please add the following line to `/etc/hosts`, using this command:
sudo sh -c 'echo "127.0.0.1 machi1 machi2 machi3" >> /etc/hosts'
Next, we will use a shell script to finish setting up our cluster. It
will do the following for us:
* Verify that the new line that was added to `/etc/hosts` is correct.
* Modify the `etc/app.config` files so that the Humming Consensus
chain manager's actions are logged to the `log/console.log` file.
* Start the three application instances.
* Verify that the three instances are running correctly.
* Configure a single chain, with one FLU server per application
instance.
Please run this script using this command:
./priv/humming-consensus-demo.setup.sh
If the output looks like this (and exits with status zero), then the
script was successful.
Step: Verify that the required entries in /etc/hosts are present
Step: add a verbose logging option to app.config
Step: start three three Machi application instances
pong
pong
pong
Step: configure one chain to start a Humming Consensus group with three members
Result: ok
Result: ok
Result: ok
We have now created a single replica chain, called `c1`, that has
three file servers participating in the chain. Thanks to the
hostnames that we added to `/etc/hosts`, all are using the localhost
network interface.
| App instance directory | Pseudo hostname | FLU name | TCP port number |
|------------------------|-----------------|----------|-----------------|
| dev1                   | machi1          | flu1     | 20401           |
| dev2                   | machi2          | flu2     | 20402           |
| dev3                   | machi3          | flu3     | 20403           |
The log files for each application instance can be found in the
`./dev/devN/log/console.log` file, where the `N` is the instance
number: 1, 2, or 3.
## Understanding the chain manager's log file output
After running the `./priv/humming-consensus-demo.setup.sh` script,
let's look at the last few lines of the `./dev/dev1/log/console.log`
log file for Erlang VM process #1.
2016-03-09 10:16:35.676 [info] <0.105.0>@machi_lifecycle_mgr:process_pending_flu:422 Started FLU f1 with supervisor pid <0.128.0>
2016-03-09 10:16:35.676 [info] <0.105.0>@machi_lifecycle_mgr:move_to_flu_config:540 Creating FLU config file f1
2016-03-09 10:16:35.790 [info] <0.105.0>@machi_lifecycle_mgr:bootstrap_chain2:312 Configured chain c1 via FLU f1 to mode=ap_mode all=[f1,f2,f3] witnesses=[]
2016-03-09 10:16:35.790 [info] <0.105.0>@machi_lifecycle_mgr:move_to_chain_config:546 Creating chain config file c1
2016-03-09 10:16:44.139 [info] <0.132.0> CONFIRM epoch 1141 <<155,42,7,221>> upi [] rep [] auth f1 by f1
2016-03-09 10:16:44.271 [info] <0.132.0> CONFIRM epoch 1148 <<57,213,154,16>> upi [f1] rep [] auth f1 by f1
2016-03-09 10:16:44.864 [info] <0.132.0> CONFIRM epoch 1151 <<239,29,39,70>> upi [f1] rep [f3] auth f1 by f1
2016-03-09 10:16:45.235 [info] <0.132.0> CONFIRM epoch 1152 <<173,17,66,225>> upi [f2] rep [f1,f3] auth f2 by f1
2016-03-09 10:16:47.343 [info] <0.132.0> CONFIRM epoch 1154 <<154,231,224,149>> upi [f2,f1,f3] rep [] auth f2 by f1
Let's pick apart some of these lines. We have started all three
servers at about the same time. We see some race conditions happen,
and some jostling and readjustment happens pretty quickly in the first
few seconds.
* `Started FLU f1 with supervisor pid <0.128.0>`
* This VM, #1,
started a FLU (Machi data server) with the name `f1`. In the Erlang
process supervisor hierarchy, the process ID of the top supervisor
is `<0.128.0>`.
* `Configured chain c1 via FLU f1 to mode=ap_mode all=[f1,f2,f3] witnesses=[]`
* A bootstrap configuration for a chain named `c1` has been created.
* The FLUs/data servers that are eligible for participation in the
chain have names `f1`, `f2`, and `f3`.
* The chain will operate in eventual consistency mode (`ap_mode`)
* The witness server list is empty. Witness servers are never used
in eventual consistency mode.
* `CONFIRM epoch 1141 <<155,42,7,221>> upi [] rep [] auth f1 by f1`
* All participants in epoch 1141 are unanimous in adopting epoch
1141's projection. All active membership lists are empty, so
there is no functional chain replication yet, at least as far as
server `f1` knows.
* The epoch's abbreviated checksum is `<<155,42,7,221>>`.
* The UPI list, i.e. the replicas whose data is 100% in sync is
`[]`, the empty list. (UPI = Update Propagation Invariant)
* The list of servers that are under data repair (`rep`) is also
empty, `[]`.
* This projection was authored by server `f1`.
* The log message was generated by server `f1`.
* `CONFIRM epoch 1148 <<57,213,154,16>> upi [f1] rep [] auth f1 by f1`
* Now the server `f1` has created a chain of length 1, `[f1]`.
* Chain repair/file re-sync is not required when the UPI server list
changes from length 0 -> 1.
* `CONFIRM epoch 1151 <<239,29,39,70>> upi [f1] rep [f3] auth f1 by f1`
* Server `f1` has noticed that server `f3` is alive. Apparently it
has not yet noticed that server `f2` is also running.
* Server `f3` is in the repair list.
* `CONFIRM epoch 1152 <<173,17,66,225>> upi [f2] rep [f1,f3] auth f2 by f1`
* Server `f2` is apparently now aware that all three servers are running.
* The previous configuration used by `f2` was `upi [f2]`, i.e., `f2`
was running in a chain of one. `f2` noticed that `f1` and `f3`
were now available and has started adding them to the chain.
* All new servers are always added to the tail of the chain in the
repair list.
* In eventual consistency mode, a UPI change like this is OK.
* When performing a read, a client must read from both the tail of
the UPI list and also from all repairing servers.
* When performing a write, the client writes to both the UPI
server list and also the repairing list, in that order.
* I.e., the client concatenates both lists,
`UPI ++ Repairing`, for its chain configuration for the write
(see the sketch after this list).
* Server `f2` will trigger file repair/re-sync shortly.
* The waiting time for starting repair has been configured to be
extremely short, 1 second. The default waiting time is 10
seconds, in case Humming Consensus remains unstable.
* `CONFIRM epoch 1154 <<154,231,224,149>> upi [f2,f1,f3] rep [] auth f2 by f1`
* File repair/re-sync has finished. All file data on all servers
are now in sync.
* The UPI/in-sync part of the chain is now `[f2,f1,f3]`, and there
are no servers under repair.
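As a minimal sketch (values borrowed from the epoch 1152 example
above; this is illustrative Erlang, not actual Machi client code),
the client derives its two server lists like this:

    %% Illustrative only: chain state from epoch 1152 above.
    UPI       = [f2],
    Repairing = [f1, f3],
    %% Writes go to the UPI servers, then the repairing servers:
    WriteList = UPI ++ Repairing,              % [f2,f1,f3]
    %% Reads consult the UPI tail plus all repairing servers:
    ReadList  = [lists:last(UPI) | Repairing]. % [f2,f1,f3]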
## Let's create some failures
Here are some suggestions for creating failures.
* Use the `./dev/devN/bin/machi stop` and `./dev/devN/bin/machi start`
commands to stop & start VM #`N`.
* Stop a VM abnormally by using `kill`. The OS process name to look
for is `beam.smp`.
* Suspend and resume a VM, using the `SIGSTOP` and `SIGCONT` signals.
* E.g. `kill -STOP 9823` and `kill -CONT 9823`
The network partition simulator is not (yet) available when running
Machi in this mode. Please see the next section for instructions on
how to use the partition simulator.
<a name="partition-simulator">
# Using the network partition simulator and convergence demo test code
This is the demo code mentioned in the presentation that Scott Lystig
Fritchie gave at the
[RICON 2015 conference](http://ricon.io).
* [slides (PDF format)](http://ricon.io/speakers/slides/Scott_Fritchie_Ricon_2015.pdf)
* [video](https://www.youtube.com/watch?v=yR5kHL1bu1Q)
## A complete example of all input and output
If you don't have an Erlang/OTP 17 runtime environment available,
please see this file for full input and output of a strong consistency
length=3 chain test:
https://gist.github.com/slfritchie/8352efc88cc18e62c72c
This file contains all commands input and all simulator output from a
sample run of the simulator.
To help interpret the output of the test, please skip ahead to the
"The test output is very verbose" section.
## Prerequisites
If you don't have `git` and/or the Erlang 17 runtime system available
on your OS X, FreeBSD, Linux, or Solaris machine, please take a look
at the [Prerequisites section](#prerequisites) first. When you have
installed the prerequisite software, please return back here.
## Clone and compile the code
Please briefly visit the [Clone and compile the code](#clone-compile)
section. When finished, please return back here.
## Run an interactive Erlang CLI shell
Run the following command at your login shell:
erl -pz .eunit ebin deps/*/ebin
If you are using Erlang/OTP version 17, you should see some CLI output
that looks like this:
Erlang/OTP 17 [erts-6.4] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]
Eshell V6.4 (abort with ^G)
1>
## The test output is very verbose ... what are the important parts?
The output of the Erlang command
`machi_chain_manager1_converge_demo:help()` will display the following
guide to the output of the tests.
A visualization of the convergence behavior of the chain self-management
algorithm for Machi.
1. Set up some server and chain manager pairs.
2. Create a number of different network partition scenarios, where
(simulated) partitions may be symmetric or asymmetric. Then stop changing
the partitions and keep the simulated network stable (and perhaps broken).
3. Run a number of iterations of the algorithm in parallel by poking each
of the manager processes on a random'ish basis.
4. Afterward, fetch the chain transition changes made by each FLU and
verify that no transition was unsafe.
During the iteration periods, the following is a cheatsheet for the output.
See the internal source for interpreting the rest of the output.
'SET partitions = '
A pair-wise list of actors which cannot send messages. The
list is uni-directional. If there are three servers (a,b,c),
and if the partitions list is '[{a,b},{b,c}]' then all
messages from a->b and b->c will be dropped, but any other
sender->recipient messages will be delivered successfully.
'x uses:'
The FLU x has made an internal state transition and is using
this epoch's projection as operating chain configuration. The
rest of the line is a summary of the projection.
'CONFIRM epoch {N}'
This message confirms that all of the servers listed in the
UPI and repairing lists of the projection at epoch {N} have
agreed to use this projection because they all have written
this projection to their respective private projection stores.
The chain is now usable by/available to all clients.
'Sweet, private projections are stable'
This report announces that this iteration of the test cycle
has passed successfully. The report that follows briefly
summarizes the latest private projection used by each
participating server. For example, when in strong consistency
mode with 'a' as a witness and 'b' and 'c' as real servers:
%% Legend:
%% server name, epoch ID, UPI list, repairing list, down list, ...
%% ... witness list, 'false' (a constant value)
[{a,{{1116,<<23,143,246,55>>},[a,b],[],[c],[a],false}},
{b,{{1116,<<23,143,246,55>>},[a,b],[],[c],[a],false}}]
Both servers 'a' and 'b' agree on epoch 1116 with epoch ID
{1116,<<23,143,246,55>>} where UPI=[a,b], repairing=[],
down=[c], and witnesses=[a].
Server 'c' is not shown because 'c' has wedged itself OOS (out
of service) by configuring a chain length of zero.
If no servers are listed in the report (i.e. only '[]' is
displayed), then all servers have wedged themselves OOS, and
the chain is unavailable.
'DoIt,'
This marks a group of tick events which trigger the manager
processes to evaluate their environment and perhaps make a
state transition.
A long chain of 'DoIt,DoIt,DoIt,' means that the chain state has
(probably) settled to a stable configuration, which is the goal of the
algorithm.
Press control-c to interrupt the test....".
## Run a test in eventual consistency mode
Run the following command at the Erlang CLI prompt:
machi_chain_manager1_converge_demo:t(3, [{private_write_verbose,true}]).
The first argument, `3`, is the number of servers to participate in
the chain. Please note:
* Chain lengths as short as 1 or 2 are valid, but the results are a
bit boring.
* Chain lengths as long as 7 or 9 can be used, but they may
suffer from longer periods of churn/instability before all chain
managers reach agreement via humming consensus. (It is future work
to shorten the worst of the unstable churn latencies.)
* In eventual consistency mode, chain lengths may be even numbers,
e.g. 2, 4, or 6.
* The simulator will choose partition events from the permutations of
all 1, 2, and 3 node partition pairs; a rough counting sketch follows
this list. The total runtime will increase *dramatically* with chain
length.
* Chain length 2: about 3 partition cases
* Chain length 3: about 35 partition cases
* Chain length 4: about 230 partition cases
* Chain length 5: about 1100 partition cases
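As a rough counting sketch, the number of simulated cases grows like
the number of 1-, 2-, and 3-element subsets of the `N*(N-1)`
uni-directional partition pairs. The Erlang below is illustrative
only and computes a naive upper bound; the simulator skips some
equivalent scenarios, so the actual counts above are somewhat lower.

    %% Naive upper bound on partition cases for N servers.
    choose(N, K) when K > N -> 0;
    choose(N, K) -> fact(N) div (fact(K) * fact(N - K)).

    fact(0) -> 1;
    fact(N) -> N * fact(N - 1).

    partition_cases(NumServers) ->
        Pairs = NumServers * (NumServers - 1),   % uni-directional pairs
        choose(Pairs, 1) + choose(Pairs, 2) + choose(Pairs, 3).

    %% partition_cases(2) -> 3, partition_cases(3) -> 41, ...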
## Run a test in strong consistency mode (with witnesses):
*NOTE:* Due to a bug in the test code, please do not run the
convergence test in strong consistency mode without the correct
minority number of witness servers! If in doubt, please run the
commands shown below exactly.
Run the following command at the Erlang CLI prompt:
machi_chain_manager1_converge_demo:t(3, [{private_write_verbose,true}, {consistency_mode, cp_mode}, {witnesses, [a]}]).
The first argument, `3`, is the number of servers to participate in
the chain. Chain lengths as long as 7 or 9 can be used, but they may
suffer from longer periods of churn/instability before all chain
managers reach agreement via humming consensus.
Due to the bug mentioned above, please use the following
commands when running with chain lengths of 5 or 7, respectively.
machi_chain_manager1_converge_demo:t(5, [{private_write_verbose,true}, {consistency_mode, cp_mode}, {witnesses, [a,b]}]).
machi_chain_manager1_converge_demo:t(7, [{private_write_verbose,true}, {consistency_mode, cp_mode}, {witnesses, [a,b,c]}]).
doc/overview.edoc Normal file
@ -0,0 +1,170 @@
@title Machi: a small village of replicated files
@doc
== About This EDoc Documentation ==
This EDoc-style documentation will concern itself only with Erlang
function APIs and function &amp; data types. Higher-level design and
commentary will remain outside of the Erlang EDoc system; please see
the "Pointers to Other Machi Documentation" section below for more
details.
Readers should beware that this documentation may be out-of-sync with
the source code. When in doubt, use the `make edoc' command to
regenerate all HTML pages.
It is the developer's responsibility to re-generate the documentation
periodically and commit it to the Git repo.
== Machi Code Overview ==
=== Chain Manager ===
The Chain Manager is responsible for managing the state of Machi's
"Chain Replication" protocol. This role is roughly analogous to the
"Riak Core" application inside of Riak, which takes care of
coordinating replica placement and replica repair.
For each primitive data server in the cluster, a Machi FLU, there is a
Chain Manager process that manages its FLU's role within the Machi
cluster's Chain Replication scheme. Each Chain Manager process
executes locally and independently to manage the distributed state of
a single Machi Chain Replication chain.
<ul>
<li> To contrast with Riak Core ... Riak Core's claimant process is
solely responsible for managing certain critical aspects of
Riak Core distributed state. Machi's Chain Manager process
performs similar tasks as Riak Core's claimant. However, Machi
has several active Chain Manager processes, one per FLU server,
instead of a single active process like Core's claimant. Each
Chain Manager process acts independently; each is constrained
so that it will reach consensus via independent computation
&amp; action.
Full discussion of this distributed consensus is outside the
scope of this document; see the "Pointers to Other Machi
Documentation" section below for more information.
</li>
<li> Machi differs from a Riak Core application because Machi's
replica placement policy is simply, "All Machi servers store
replicas of all Machi files".
Machi is intended to be a primitive building block for creating larger
cluster-of-clusters where files are
distributed/fragmented/sharded across a large pool of
independent Machi clusters.
</li>
<li> See
[https://www.usenix.org/legacy/events/osdi04/tech/renesse.html]
for a copy of the paper, "Chain Replication for Supporting High
Throughput and Availability" by Robbert van Renesse and Fred
B. Schneider.
</li>
</ul>
=== FLU ===
The FLU is the basic storage server for Machi.
<ul>
<li> The name FLU is taken from "flash storage unit" from the paper
"CORFU: A Shared Log Design for Flash Clusters" by
Balakrishnan, Malkhi, Prabhakaran, and Wobber. See
[https://www.usenix.org/conference/nsdi12/technical-sessions/presentation/balakrishnan]
</li>
<li> In CORFU, the sequencer step is a prerequisite step that is
performed by a separate component, the Sequencer.
In Machi, the `append_chunk()' protocol message has
an implicit "sequencer" operation applied by the "head" of the
Machi Chain Replication chain. If a client wishes to write
data that has already been assigned a sequencer position, then
the `write_chunk()' API function is used (see the sketch below).
</li>
</ul>
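To make the sequencer distinction concrete, here is a small
hypothetical sketch. The exact arities and return values below are
illustrative only; the authoritative API is in
`machi_flu1_client.erl'.

```
%% Hypothetical sketch, for illustration only.
%% append_chunk(): the chain's head assigns the offset
%% (the implicit "sequencer" step).
{ok, {Offset, Size, File}} =
    append_chunk(Sock, EpochID, Prefix, Chunk),
%% write_chunk(): the client supplies an offset that was
%% already assigned by an earlier append.
ok = write_chunk(Sock, EpochID, File, Offset, Chunk).
'''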
For each FLU, there are three independent tasks that are implemented
using three different Erlang processes:
<ul>
<li> A FLU server, implemented primarily by `machi_flu.erl'.
</li>
<li> A projection store server, implemented primarily by
`machi_projection_store.erl'.
</li>
<li> A chain state manager server, implemented primarily by
`machi_chain_manager1.erl'.
</li>
</ul>
From the perspective of failure detection, it is very convenient that
all three FLU-related services (file server, sequencer server, and
projection server) are accessed using the same single TCP port.
=== Projection (data structure) ===
The projection is a data structure that specifies the current state
of the Machi cluster: all FLUs, which FLUs are considered
up/running or down/crashed/stopped, which FLUs are active
participants in the Chain Replication protocol, and which FLUs are
under "repair" (i.e., having their data resynchronized when
newly-added to a cluster or when restarting after a crash).
=== Projection Store (server) ===
The projection store is a storage service that is implemented by an
Erlang/OTP `gen_server' process that is associated with each
FLU. Conceptually, the projection store is an array of
write-once registers. For each projection store register, the
key is a 2-tuple of an epoch number (`non_neg_integer()' type)
and a projection type (`public' or `private' type); the value is
a projection data structure (`projection_v1()' type).
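As a rough sketch, the projection store's keyspace can be described
with Erlang type declarations like the following. These type names
are illustrative; they are not necessarily the names used in the
source code.

```
-type epoch_num()      :: non_neg_integer().
-type proj_type()      :: 'public' | 'private'.
-type proj_store_key() :: {epoch_num(), proj_type()}.
%% A register's value is either the special unwritten value or an
%% immutable, serialized projection_v1() data structure.
'''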
=== Client and Proxy Client ===
Machi is intentionally avoiding using distributed Erlang for Machi's
communication. This design decision makes Erlang-side code more
difficult &amp; complex but allows us the freedom of implementing
parts of Machi in other languages without major
protocol&amp;API&amp;glue code changes later in the product's
lifetime.
There are two layers of interface for Machi clients.
<ul>
<li> The `machi_flu1_client' module implements an API that uses a
TCP socket directly.
</li>
<li> The `machi_proxy_flu1_client' module implements an API that
uses a local, long-lived `gen_server' process as a proxy for
the remote, perhaps disconnected-or-crashed Machi FLU server.
</li>
</ul>
The types for both modules ought to be the same. However, due to
rapid code churn, some differences might exist. Any major difference
is (almost by definition) a bug: please open a GitHub issue to request
a correction.
== TODO notes ==
Any use of the string "TODO" in upper/lower/mixed case, anywhere in
the code, is a reminder signal of unfinished work.
== Pointers to Other Machi Documentation ==
<ul>
<li> If you are viewing this document locally, please look in the
`../doc/' directory,
</li>
<li> If you are viewing this document via the Web, please find the
documentation via this link:
[http://github.com/basho/machi/tree/master/doc/]
Please be aware that this link points to the `master' branch
of the Machi source repository and therefore may be
out-of-sync with non-`master' branch code.
</li>
</ul>
@ -23,8 +23,8 @@
 \copyrightdata{978-1-nnnn-nnnn-n/yy/mm}
 \doi{nnnnnnn.nnnnnnn}
-\titlebanner{Draft \#0.92, October 2015}
-\preprintfooter{Draft \#0.92, October 2015}
+\titlebanner{Draft \#0.91, June 2015}
+\preprintfooter{Draft \#0.91, June 2015}
 \title{Chain Replication metadata management in Machi, an immutable
 file store}
@ -50,23 +50,19 @@ For an overview of the design of the larger Machi system, please see
 TODO Fix, after all of the recent changes to this document.
 Machi is an immutable file store, now in active development by Basho
-Japan KK. Machi uses Chain Replication\footnote{Chain
+Japan KK. Machi uses Chain Replication to maintain strong consistency
+of file updates to all replica servers in a Machi cluster. Chain
 Replication is a variation of primary/backup replication where the
 order of updates between the primary server and each of the backup
-servers is strictly ordered into a single ``chain''.}
-to maintain strong consistency
-of file updates to all replica servers in a Machi cluster.
-This document describes the Machi chain manager, the component
-responsible for managing Chain Replication metadata state.
-Management of
-chain metadata, e.g., ``What is the current order of
+servers is strictly ordered into a single ``chain''. Management of
+Chain Replication's metadata, e.g., ``What is the current order of
 servers in the chain?'', remains an open research problem. The
 current state of the art for Chain Replication metadata management
-relies on an external oracle (e.g., based on ZooKeeper) or the Elastic
-Replication \cite{elastic-chain-replication} algorithm.
-The chain
+relies on an external oracle (e.g., ZooKeeper) or the Elastic
+Replication algorithm.
+This document describes the Machi chain manager, the component
+responsible for managing Chain Replication metadata state. The chain
 manager uses a new technique, based on a variation of CORFU, called
 ``humming consensus''.
 Humming consensus does not require active participation by all or even
@ -93,18 +89,20 @@ to perform these management tasks. Chain metadata state and state
 management tasks include:
 \begin{itemize}
+\item Preserving data integrity of all metadata and data stored within
+the chain. Data loss is not an option.
 \item Preserving stable knowledge of chain membership (i.e. all nodes in
-the chain, regardless of operational status). We expect that a systems
-administrator will make all ``permanent'' decisions about
+the chain, regardless of operational status). A systems
+administrator is expected to make ``permanent'' decisions about
 chain membership.
 \item Using passive and/or active techniques to track operational
-state/status, e.g., up, down, restarting, full data sync in progress, partial
-data sync in progress, etc.
+state/status, e.g., up, down, restarting, full data sync, partial
+data sync, etc.
 \item Choosing the run-time replica ordering/state of the chain, based on
 current member status and past operational history. All chain
 state transitions must be done safely and without data loss or
 corruption.
-\item When a new node is added to the chain administratively or old node is
+\item As a new node is added to the chain administratively or old node is
 restarted, adding the node to the chain safely and perform any data
 synchronization/repair required to bring the node's data into
 full synchronization with the other nodes.
@ -113,27 +111,39 @@ management tasks include:
 \subsection{Ultimate goal: Preserve data integrity of Chain Replicated data}
 Preservation of data integrity is paramount to any chain state
-management technique for Machi. Loss or corruption of chain data must
-be avoided.
+management technique for Machi. Even when operating in an eventually
+consistent mode, Machi must not lose data without cause outside of all
+design, e.g., all particpants crash permanently.
 \subsection{Goal: Contribute to Chain Replication metadata management research}
 We believe that this new self-management algorithm, humming consensus,
 contributes a novel approach to Chain Replication metadata management.
-The ``monitor
-and mangage your neighbor'' technique proposed in Elastic Replication
-(Section \ref{ssec:elastic-replication}) appears to be the current
-state of the art in the distributed systems research community.
 Typical practice in the IT industry appears to favor using an external
-oracle, e.g., built on top of ZooKeeper as a trusted coordinator.
-See Section~\ref{sec:cr-management-review} for a brief review of
-techniques used today.
+oracle, e.g., using ZooKeeper as a trusted coordinator.
+See Section~\ref{sec:cr-management-review} for a brief review.
 \subsection{Goal: Support both eventually consistent \& strongly consistent modes of operation}
-Chain Replication was originally designed by van Renesse and Schneider
-\cite{chain-replication} for applications that require strong
-consistency, e.g. sequential consistency. However, Machi has use
-cases where more relaxed eventual consistency semantics are
-sufficient. We wish to use the same Chain Replication management
-technique for both strong and eventual consistency environments.
+Machi's first use cases are all for use as a file store in an eventually
+consistent environment.
+In eventually consistent mode, humming consensus
+allows a Machi cluster to fragment into
+arbitrary islands of network partition, all the way down to 100\% of
+members running in complete network isolation from each other.
+Furthermore, it provides enough agreement to allow
+formerly-partitioned members to coordinate the reintegration and
+reconciliation of their data when partitions are healed.
+Later, we wish the option of supporting strong consistency
+applications such as CORFU-style logging while reusing all (or most)
+of Machi's infrastructure. Such strongly consistent operation is the
+main focus of this document.
 \subsection{Anti-goal: Minimize churn}
@ -194,18 +204,6 @@ would probably be preferable to add the feature to Riak Ensemble
 rather than to use ZooKeeper (and for Basho to document ZK, package
 ZK, provide commercial ZK support, etc.).
-\subsection{An external management oracle, implemented by
-active/standby application failover}
-This technique has been used in production of HibariDB. The customer
-very carefully deployed the oracle using the Erlang/OTP ``application
-controller'' on two machines to provide active/standby failover of the
-management oracle. The customer was willing to monitor this service
-very closely and was prepared to intervene manually during network
-partitions. (This controller is very susceptible to ``split brain
-syndrome''.) While this feature of Erlang/OTP is useful in other
-environments, we believe is it not sufficient for Machi's needs.
 \section{Assumptions}
 \label{sec:assumptions}
@ -214,8 +212,8 @@ Paxos, Raft, et al.), why bother with a slightly different set of
 assumptions and a slightly different protocol?
 The answer lies in one of our explicit goals: to have an option of
-running in an ``eventually consistent'' manner. We wish to be
-remain available, even if we are
+running in an ``eventually consistent'' manner. We wish to be able to
+make progress, i.e., remain available in the CAP sense, even if we are
 partitioned down to a single isolated node. VR, Paxos, and Raft
 alone are not sufficient to coordinate service availability at such
 small scale. The humming consensus algorithm can manage
@ -249,15 +247,13 @@ synchronized by NTP.
 The protocol and algorithm presented here do not specify or require any
 timestamps, physical or logical. Any mention of time inside of data
-structures are for human and/or diagnostic purposes only.
+structures are for human/historic/diagnostic purposes only.
-Having said that, some notion of physical time is suggested
-occasionally for
-purposes of efficiency. For example, some ``sleep
+Having said that, some notion of physical time is suggested for
+purposes of efficiency. It's recommended that there be some ``sleep
 time'' between iterations of the algorithm: there is no need to ``busy
-wait'' by executing the algorithm as many times per minute as
-possible.
-See also Section~\ref{ssub:when-to-calc}.
+wait'' by executing the algorithm as quickly as possible. See also
+Section~\ref{ssub:when-to-calc}.
 \subsection{Failure detector model}
@ -280,73 +276,55 @@ eventual consistency. Discussion of strongly consistent CP
 mode is always the default; exploration of AP mode features in this document
 will always be explictly noted.
-%%\subsection{Use of the ``wedge state''}
-%%
-%%A participant in Chain Replication will enter ``wedge state'', as
-%%described by the Machi high level design \cite{machi-design} and by CORFU,
-%%when it receives information that
-%%a newer projection (i.e., run-time chain state reconfiguration) is
-%%available. The new projection may be created by a system
-%%administrator or calculated by the self-management algorithm.
-%%Notification may arrive via the projection store API or via the file
-%%I/O API.
-%%
-%%When in wedge state, the server will refuse all file write I/O API
-%%requests until the self-management algorithm has determined that
-%%humming consensus has been decided (see next bullet item). The server
-%%may also refuse file read I/O API requests, depending on its CP/AP
-%%operation mode.
-%%
-%%\subsection{Use of ``humming consensus''}
-%%
-%%CS literature uses the word ``consensus'' in the context of the problem
-%%description at \cite{wikipedia-consensus}
-%%.
-%%This traditional definition differs from what is described here as
-%%``humming consensus''.
-%%
-%%``Humming consensus'' describes
-%%consensus that is derived only from data that is visible/known at the current
-%%time.
-%%The algorithm will calculate
-%%a rough consensus despite not having input from a quorum majority
-%%of chain members. Humming consensus may proceed to make a
-%%decision based on data from only a single participant, i.e., only the local
-%%node.
-%%
-%%See Section~\ref{sec:humming-consensus} for detailed discussion.
-%%\subsection{Concurrent chain managers execute humming consensus independently}
-%%
-%%Each Machi file server has its own concurrent chain manager
-%%process embedded within it. Each chain manager process will
-%%execute the humming consensus algorithm using only local state (e.g.,
-%%the $P_{current}$ projection currently used by the local server) and
-%%values observed in everyone's projection stores
-%%(Section~\ref{sec:projection-store}).
-%%
-%%The chain manager communicates with the local Machi
-%%file server using the wedge and un-wedge request API. When humming
-%%consensus has chosen a projection $P_{new}$ to replace $P_{current}$,
-%%the value of $P_{new}$ is included in the un-wedge request.
-\subsection{The reader is familiar with CORFU}
-Machi borrows heavily from the techniques and data structures used by
-CORFU \cite[corfu1],\cite[corfu2]. We hope that the reader is
-familiar with CORFU's features, including:
-\begin{itemize}
-\item write-once registers for log data storage,
-\item the epoch, which defines a period of time when a cluster's configuration
-is stable,
-\item strictly increasing epoch numbers, which are identifiers
-for particular epochs,
-\item projections, which define the chain order and other details of
-data replication within the cluster, and
-\item the wedge state, used by servers to coordinate cluster changes
-during epoch transitions.
-\end{itemize}
+\subsection{Use of the ``wedge state''}
+A participant in Chain Replication will enter ``wedge state'', as
+described by the Machi high level design \cite{machi-design} and by CORFU,
+when it receives information that
+a newer projection (i.e., run-time chain state reconfiguration) is
+available. The new projection may be created by a system
+administrator or calculated by the self-management algorithm.
+Notification may arrive via the projection store API or via the file
+I/O API.
+When in wedge state, the server will refuse all file write I/O API
+requests until the self-management algorithm has determined that
+humming consensus has been decided (see next bullet item). The server
+may also refuse file read I/O API requests, depending on its CP/AP
+operation mode.
+\subsection{Use of ``humming consensus''}
+CS literature uses the word ``consensus'' in the context of the problem
+description at \cite{wikipedia-consensus}
+.
+This traditional definition differs from what is described here as
+``humming consensus''.
+``Humming consensus'' describes
+consensus that is derived only from data that is visible/known at the current
+time.
+The algorithm will calculate
+a rough consensus despite not having input from all/majority
+of chain members. Humming consensus may proceed to make a
+decision based on data from only a single participant, i.e., only the local
+node.
+See Section~\ref{sec:humming-consensus} for detailed discussion.
+\subsection{Concurrent chain managers execute humming consensus independently}
+Each Machi file server has its own concurrent chain manager
+process embedded within it. Each chain manager process will
+execute the humming consensus algorithm using only local state (e.g.,
+the $P_{current}$ projection currently used by the local server) and
+values observed in everyone's projection stores
+(Section~\ref{sec:projection-store}).
+The chain manager communicates with the local Machi
+file server using the wedge and un-wedge request API. When humming
+consensus has chosen a projection $P_{new}$ to replace $P_{current}$,
+the value of $P_{new}$ is included in the un-wedge request.
 \section{The projection store}
 \label{sec:projection-store}
@ -365,15 +343,19 @@ this key. The
 store's value is either the special `unwritten' value\footnote{We use
 $\bot$ to denote the unwritten value.} or else a binary blob that is
 immutable thereafter; the projection data structure is
-serialized and stored in this binary blob. See
-\ref{sub:the-projection} for more detail.
+serialized and stored in this binary blob.
+The projection store is vital for the correct implementation of humming
+consensus (Section~\ref{sec:humming-consensus}). The write-once
+register primitive allows us to reason about the store's behavior
+using the same logical tools and techniques as the CORFU ordered log.
 \subsection{The publicly-writable half of the projection store}
 The publicly-writable projection store is used to share information
 during the first half of humming consensus algorithm. Projections
 in the public half of the store form a log of
-suggestions\footnote{I hesitate to use the words ``propose'' or ``proposal''
+suggestions\footnote{I hesitate to use the word ``propose'' or ``proposal''
 anywhere in this document \ldots until I've done a more formal
 analysis of the protocol. Those words have too many connotations in
 the context of consensus protocols such as Paxos and Raft.}
@ -387,9 +369,8 @@ Any chain member may read from the public half of the store.
 The privately-writable projection store is used to store the
 Chain Replication metadata state (as chosen by humming consensus)
-that is in use now by the local Machi server. Earlier projections
-remain in the private half to keep a historical
-record of chain state transitions by the local server.
+that is in use now by the local Machi server as well as previous
+operation states.
 Only the local server may write values into the private half of store.
 Any chain member may read from the private half of the store.
@ -405,30 +386,35 @@ The private projection store serves multiple purposes, including:
 its sequence of $P_{current}$ projection changes.
 \end{itemize}
-The private half of the projection store is not replicated.
 \section{Projections: calculation, storage, and use}
 \label{sec:projections}
 Machi uses a ``projection'' to determine how its Chain Replication replicas
-should operate; see \cite{machi-design} and \cite{corfu1}.
+should operate; see \cite{machi-design} and
+\cite{corfu1}. At runtime, a cluster must be able to respond both to
+administrative changes (e.g., substituting a failed server with
+replacement hardware) as well as local network conditions (e.g., is
+there a network partition?).
+The projection defines the operational state of Chain Replication's
+chain order as well the (re-)synchronization of data managed by by
+newly-added/failed-and-now-recovering members of the chain. This
+chain metadata, together with computational processes that manage the
+chain, must be managed in a safe manner in order to avoid unintended
+data loss of data managed by the chain.
 The concept of a projection is borrowed
 from CORFU but has a longer history, e.g., the Hibari key-value store
 \cite{cr-theory-and-practice} and goes back in research for decades,
 e.g., Porcupine \cite{porcupine}.
-The projection defines the operational state of Chain Replication's
-chain order as well the (re-)synchronization of data managed by by
-newly-added/failed-and-now-recovering members of the chain.
-At runtime, a cluster must be able to respond both to
-administrative changes (e.g., substituting a failed server with
-replacement hardware) as well as local network conditions (e.g., is
-there a network partition?).
 \subsection{The projection data structure}
 \label{sub:the-projection}
 {\bf NOTE:} This section is a duplicate of the ``The Projection and
-the Projection Epoch Number'' section of the ``Machi: an immutable
-file store'' design doc \cite{machi-design}.
+the Projection Epoch Number'' section of \cite{machi-design}.
 The projection data
 structure defines the current administration \& operational/runtime
@ -459,7 +445,6 @@ Figure~\ref{fig:projection}. To summarize the major components:
     active_upi :: [m_server()],
     repairing :: [m_server()],
     down_members :: [m_server()],
-    witness_servers :: [m_server()],
     dbg_annotations :: proplist()
 }).
 \end{verbatim}
@ -469,12 +454,13 @@ Figure~\ref{fig:projection}. To summarize the major components:
 \begin{itemize}
 \item {\tt epoch\_number} and {\tt epoch\_csum} The epoch number and
-projection checksum together form the unique identifier for this projection.
+projection checksum are unique identifiers for this projection.
 \item {\tt creation\_time} Wall-clock time, useful for humans and
 general debugging effort.
 \item {\tt author\_server} Name of the server that calculated the projection.
 \item {\tt all\_members} All servers in the chain, regardless of current
-operation status.
+operation status. If all operating conditions are perfect, the
+chain should operate in the order specified here.
 \item {\tt active\_upi} All active chain members that we know are
 fully repaired/in-sync with each other and therefore the Update
 Propagation Invariant (Section~\ref{sub:upi}) is always true.
@ -482,10 +468,7 @@ Figure~\ref{fig:projection}. To summarize the major components:
 are in active data repair procedures.
 \item {\tt down\_members} All members that the {\tt author\_server}
 believes are currently down or partitioned.
-\item {\tt witness\_servers} If witness servers (Section~\ref{zzz})
-are used in strong consistency mode, then they are listed here. The
-set of {\tt witness\_servers} is a subset of {\tt all\_members}.
-\item {\tt dbg\_annotations} A ``kitchen sink'' property list, for code to
+\item {\tt dbg\_annotations} A ``kitchen sink'' proplist, for code to
 add any hints for why the projection change was made, delay/retry
 information, etc.
 \end{itemize}
@ -495,8 +478,7 @@ Figure~\ref{fig:projection}. To summarize the major components:
 According to the CORFU research papers, if a server node $S$ or client
 node $C$ believes that epoch $E$ is the latest epoch, then any information
 that $S$ or $C$ receives from any source that an epoch $E+\delta$ (where
-$\delta > 0$) exists will push $S$ into the ``wedge'' state
-and force $C$ into a mode
+$\delta > 0$) exists will push $S$ into the ``wedge'' state and $C$ into a mode
 of searching for the projection definition for the newest epoch.
 In the humming consensus description in
@ -524,7 +506,7 @@ Humming consensus requires that any projection be identified by both
 the epoch number and the projection checksum, as described in
 Section~\ref{sub:the-projection}.
-\section{Managing projection store replicas}
+\section{Managing multiple projection store replicas}
 \label{sec:managing-multiple-projection-stores}
 An independent replica management technique very similar to the style
@ -533,63 +515,11 @@ replicas of Machi's projection data structures.
 The major difference is that humming consensus
 {\em does not necessarily require}
 successful return status from a minimum number of participants (e.g.,
-a majority quorum).
-\subsection{Writing to public projection stores}
-\label{sub:proj-store-writing}
-Writing replicas of a projection $P_{new}$ to the cluster's public
-projection stores is similar to writing in a Dynamo-like system.
-The significant difference with Chain Replication is how we interpret
-the return status of each write operation.
-In cases of {\tt error\_written} status,
-the process may be aborted and read repair
-triggered. The most common reason for {\tt error\_written} status
-is that another actor in the system has concurrently
-already calculated another
-(perhaps different\footnote{The {\tt error\_written} may also
-indicate that another server has performed read repair on the exact
-projection $P_{new}$ that the local server is trying to write!})
-projection using the same projection epoch number.
-\subsection{Writing to private projection stores}
-Only the local server/owner may write to the private half of a
-projection store. Private projection store values are never subject
-to read repair.
-\subsection{Reading from public projection stores}
-\label{sub:proj-store-reading}
-A read is simple: for an epoch $E$, send a public projection read API
-operation to all participants. Usually, the ``get latest epoch''
-variety is used.
-The minimum number of non-error responses is only one.\footnote{The local
-projection store should always be available, even if no other remote
-replica projection stores are available.} If all available servers
-return a single, unanimous value $V_u, V_u \ne \bot$, then $V_u$ is
-the final result for epoch $E$.
-Any non-unanimous values are considered unresolvable for the
-epoch. This disagreement is resolved by newer
-writes to the public projection stores during subsequent iterations of
-humming consensus.
-Unavailable servers may not necessarily interfere with making a decision.
-Humming consensus
-only uses as many public projections as are available at the present
-moment of time. Assume that some server $S$ is unavailable at time $t$ and
-becomes available at some later $t+\delta$.
-If at $t+\delta$ we
-discover that $S$'s public projection store for key $E$
-contains some disagreeing value $V_{weird}$, then the disagreement
-will be resolved in the exact same manner that would have been used as if we
-had seen the disagreeing values at the earlier time $t$.
+a quorum).
 \subsection{Read repair: repair only unwritten values}
-The ``read repair'' concept is also shared with Riak Core and Dynamo
+The idea of ``read repair'' is also shared with Riak Core and Dynamo
 systems. However, Machi has situations where read repair cannot truly
 ``fix'' a key because two different values have been written by two
 different replicas.
@ -600,24 +530,85 @@ values, all participants in humming consensus merely agree that there
were multiple suggestions at that epoch which must be resolved by the were multiple suggestions at that epoch which must be resolved by the
creation and writing of newer projections with later epoch numbers.} creation and writing of newer projections with later epoch numbers.}
Machi's projection store read repair can only repair values that are
unwritten, i.e., currently storing $\bot$.

The value used to repair unwritten $\bot$ values is the ``best'' projection that
is currently available for the current epoch $E$. If there is a single,
unanimous value $V_{u}$ for the projection at epoch $E$, then $V_{u}$
is used to repair all projection stores at $E$ that contain $\bot$
values. If the value of $K$ is not unanimous, then the ``highest
ranked value'' $V_{best}$ is used for the repair; see
Section~\ref{sub:ranking-projections} for a description of projection
ranking.
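A sketch of this repair rule, reusing the hypothetical
{\tt store\_read/2} and {\tt store\_write/3} calls from the earlier
sketches plus a hypothetical {\tt rank/1} ranking function:

\begin{verbatim}
%% Sketch: repair only stores that currently hold bottom (unwritten).
%% Non-bottom values are write-once and are never overwritten.
read_repair(Stores, Epoch) ->
    Reads = [{S, catch store_read(S, Epoch)} || S <- Stores],
    %% Assumes at least one non-bottom value was read.
    Vals = lists:usort([V || {_, {ok, V}} <- Reads]),
    Best = case Vals of
               [V] -> V;                               % unanimous
               _   -> hd(lists:sort(
                           fun(A, B) -> rank(A) >= rank(B) end, Vals))
           end,
    [store_write(S, Epoch, Best)
     || {S, {error, not_written}} <- Reads],           % bottoms only
    Best.
\end{verbatim}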
If a non-$\bot$ value exists, then by definition\footnote{Definition
of a write-once register} this value is immutable. The only
conflict resolution path is to write a new projection with a newer and
larger epoch number. Once a public projection with epoch number $E$ is
written, projections with epochs smaller than $E$ are ignored by
humming consensus.

\subsection{Writing to public projection stores}
\label{sub:proj-store-writing}

Writing replicas of a projection $P_{new}$ to the cluster's public
projection stores is similar, in principle, to writing in a Chain
Replication-managed system or Dynamo-like system. But unlike Chain
Replication, the order doesn't really matter.
In fact, the two steps below may be performed in parallel.
The significant difference with Chain Replication is how we interpret
the return status of each write operation.

\begin{enumerate}
\item Write $P_{new}$ to the local server's public projection store
using $P_{new}$'s epoch number $E$ as the key.
As a side effect, a successful write will trigger
``wedge'' status in the local server, which will then cascade to other
projection-related activity by the local chain manager.
\item Write $P_{new}$ to key $E$ of each remote public projection store of
all participants in the chain.
\end{enumerate}

In cases of {\tt error\_written} status,
the process may be aborted and read repair
triggered. The most common reason for {\tt error\_written} status
is that another actor in the system has
already calculated another (perhaps different) projection using the
same projection epoch number and that
read repair is necessary. The {\tt error\_written} status may also
indicate that another server has performed read repair on the exact
projection $P_{new}$ that the local server is trying to write!
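The two steps above might be fanned out in parallel roughly as in this
sketch; {\tt local\_store/0}, {\tt remote\_stores/0}, and
{\tt store\_write/3} are illustrative assumptions:

\begin{verbatim}
%% Sketch: write P_new to the local and all remote public projection
%% stores in parallel; collect whatever statuses arrive in time.
write_everywhere(Epoch, PNew) ->
    Stores = [local_store() | remote_stores()],
    Parent = self(),
    [spawn(fun() ->
                   Res = (catch store_write(S, Epoch, PNew)),
                   Parent ! {wrote, S, Res}
           end) || S <- Stores],
    %% Order does not matter; unavailable stores never reply.
    [receive {wrote, S, R} -> {S, R}
     after 500 -> {S, timeout}
     end || S <- Stores].
\end{verbatim}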
\subsection{Writing to private projection stores}
Only the local server/owner may write to the private half of a
projection store. Also, the private projection store is not replicated.
\subsection{Reading from public projection stores}
\label{sub:proj-store-reading}
A read is simple: for an epoch $E$, send a public projection read API
request to all participants. As when writing to the public projection
stores, we can ignore any timeout/unavailable return
status.\footnote{The success/failure status of projection reads and
writes is {\em not} ignored with respect to the chain manager's
internal liveness tracker. However, the liveness tracker's state is
typically only used when calculating new projections.} If we
discover any unwritten values $\bot$, the read repair protocol is
followed.
The minimum number of non-error responses is only one.\footnote{The local
projection store should always be available, even if no other remote
replica projection stores are available.} If all available servers
return a single, unanimous value $V_u, V_u \ne \bot$, then $V_u$ is
the final result for epoch $E$.
Any non-unanimous values are considered complete disagreement for the
epoch. This disagreement is resolved by later
writes to the public projection stores during subsequent iterations of
humming consensus.
We are not concerned with unavailable servers. Humming consensus
only uses as many public projections as are available at the present
moment of time. If some server $S$ is unavailable at time $t$ and
becomes available at some later $t+\delta$, and if at $t+\delta$ we
discover that $S$'s public projection store for key $E$
contains some disagreeing value $V_{weird}$, then the disagreement
will be resolved in the exact same manner that would be used as if we
had found the disagreeing values at the earlier time $t$.
\section{Phases of projection change, a prelude to Humming Consensus}
\label{sec:phases-of-projection-change}
@ -680,7 +671,7 @@ straightforward; see
Section~\ref{sub:proj-store-writing} for the technique for writing
projections to all participating servers' projection stores.
Humming Consensus does not care
if the writes succeed or not. The next phase, adopting a
new projection, will determine which write operations are usable.
\subsection{Adopting a new projection}
@ -694,8 +685,8 @@ to avoid direct parallels with protocols such as Raft and Paxos.)
In general, a projection $P_{new}$ at epoch $E_{new}$ is adopted by a
server only if
the change in state from the local server's current projection to new
projection, $P_{current} \rightarrow P_{new}$, will not cause data loss:
the Update Propagation Invariant and all other safety checks
required by chain repair in Section~\ref{sec:repair-entire-files}
are correct. For example, any new epoch must be strictly larger than
the current epoch, i.e., $E_{new} > E_{current}$.
@ -705,12 +696,16 @@ available public projection stores. If the result is not a single
unanimous projection, then we return to the step in
Section~\ref{sub:projection-calculation}. If the result is a {\em
unanimous} projection $P_{new}$ in epoch $E_{new}$, and if $P_{new}$
does not violate chain safety checks, then the local node will:

\begin{itemize}
\item write $P_{current}$ to the local private projection store, and
\item set its local operating state $P_{current} \leftarrow P_{new}$.
\end{itemize}

does not violate chain safety checks, then the local node may
replace its local $P_{current}$ projection with $P_{new}$.

Not all safe projection transitions are useful, however. For example,
it's trivially safe to suggest projection $P_{zero}$, where the chain
length is zero. In an eventual consistency environment, projection
$P_{one}$ where the chain length is exactly one is also trivially
safe.\footnote{Although, if the total number of participants is more
than one, eventual consistency would demand that $P_{self}$ cannot
be used forever.}
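A sketch of the adoption decision; {\tt unanimous\_latest/1} and
{\tt safe\_transition/2} are hypothetical stand-ins for the unanimity
check and the safety checks described above:

\begin{verbatim}
%% Sketch: adopt P_latest only if it is unanimous and the transition
%% from P_current is safe (e.g., E_new > E_current, Update Propagation
%% Invariant preserved, etc.).
maybe_adopt(PCurrent, PublicStores) ->
    case unanimous_latest(PublicStores) of
        {unanimous, PLatest} ->
            case safe_transition(PCurrent, PLatest) of
                true  -> {adopt, PLatest};   % also written privately
                false -> {keep, PCurrent}
            end;
        _NotUnanimous ->
            recalculate                      % suggest a new projection
    end.
\end{verbatim}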
\section{Humming Consensus}
\label{sec:humming-consensus}
@ -719,11 +714,13 @@ Humming consensus describes consensus that is derived only from data
that is visible/available at the current time. It's OK if a network
partition is in effect and not all chain members are available;
the algorithm will calculate a rough consensus despite not
having input from all chain members. Humming consensus
may proceed to make a decision based on data from only one
participant, i.e., only the local node.
\begin{itemize}
\item When operating in AP mode, i.e., in eventual consistency mode, humming
consensus may reconfigure a chain of length $N$ into $N$
independent chains of length 1. When a network partition heals, the
humming consensus is sufficient to manage the chain so that each
@ -731,12 +728,11 @@ replica's data can be repaired/merged/reconciled safely.
Other features of the Machi system are designed to assist such
repair safely.

\item When operating in CP mode, i.e., in strong consistency mode,
humming consensus requires additional restrictions: any
chain shorter than the quorum majority of
all members is invalid and therefore cannot be used. Any server with
a too-short chain cannot move itself out
of wedged state and is therefore unavailable for general file service.
In very general terms, this requirement for a quorum
majority of surviving participants is also a requirement for Paxos,
Raft, and ZAB. See Section~\ref{sec:split-brain-management} for a
proposal to handle ``split brain'' scenarios while in CP mode.
@ -756,6 +752,8 @@ Section~\ref{sec:phases-of-projection-change}: network monitoring,
calculating new projections, writing projections, then perhaps
adopting the newest projection (which may or may not be the projection
that we just wrote).
Beginning with Section~\ref{sub:flapping-state}, we provide
additional detail to the rough outline of humming consensus.
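As a compact summary of that outline, the sketch below shows one
iteration of the loop in Erlang form; every function name is a
placeholder for the corresponding flowchart activity, not a real
Machi function:

\begin{verbatim}
%% Sketch: one humming consensus iteration.
iterate(PCurrent) ->
    UpDown  = monitor_peers(),                    % network monitoring
    PNew    = calc_projection(PCurrent, UpDown),  % calculate suggestion
    _       = write_public_everywhere(PNew),      % success not required
    PLatest = read_latest_public(),
    case best_of(PCurrent, PNew, PLatest) of      % rank & safety checks
        PCurrent -> PCurrent;                     % best action: nothing
        Best     -> adopt(Best)                   % new P_current
    end.
\end{verbatim}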
\begin{figure*}[htp]
\resizebox{\textwidth}{!}{
@ -803,15 +801,15 @@ is used by the flowchart and throughout this section.
\item[$\mathbf{P_{current}}$] The projection actively used by the local
node right now. It is also the projection with largest
epoch number in the local node's {\em private} projection store.

\item[$\mathbf{P_{newprop}}$] A new projection suggestion, as
calculated by the local server
(Section~\ref{sub:humming-projection-calculation}).

\item[$\mathbf{P_{latest}}$] The highest-ranked projection with the largest
single epoch number that has been read from all available {\em public}
projection stores, including the local node's public projection store.

\item[Unanimous] The $P_{latest}$ projection is unanimous if all
replicas in all accessible public projection stores are effectively
@ -830,7 +828,7 @@ is used by the flowchart and throughout this section.
The flowchart has three columns, from left to right:

\begin{description}
\item[Column A] Is there any reason to change?
\item[Column B] Do I act?
\item[Column C] How do I act?
\begin{description}
@ -865,12 +863,12 @@ In today's implementation, there is only a single criterion for
determining the alive/perhaps-not-alive status of a remote server $S$:
is $S$'s projection store available now? This question is answered by
attempting to read the projection store on server $S$.
If successful, then we assume that $S$ and all of $S$'s network services
are available. If $S$'s projection store is not available for any
reason (including timeout), we inform the local ``fitness server''
that we have had a problem querying $S$. The fitness service may then
take additional monitoring/querying actions before informing us (in a
later iteration) that $S$ should be considered down. This simple single
criterion appears to be sufficient for humming consensus, according to
simulations of arbitrary network partitions.
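A sketch of the probe itself; {\tt store\_read\_latest/1} is again a
hypothetical projection store call:

\begin{verbatim}
%% Sketch: probe server S by reading its public projection store.
%% Any error, including timeout, counts as a problem with S.
probe(S) ->
    case catch store_read_latest(S) of
        {ok, _Proj} -> up;
        _AnyError   -> down  % or: report the problem to the fitness server
    end.
\end{verbatim}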
%% {\bf NOTE:} The projection store API is accessed via TCP. The network
%% partition simulator, mentioned above and described at
@ -885,10 +883,64 @@ Column~A of Figure~\ref{fig:flowchart}.
See also, Section~\ref{sub:projection-calculation}.
Execution starts at ``Start'' state of Column~A of
Figure~\ref{fig:flowchart}. Rule $A20$ uses judgement from the
local ``fitness server'' to select a definite
boolean up/down status for each participating server.
\subsubsection{Calculating flapping state}
Also at this stage, the chain manager calculates its local
``flapping'' state. The name ``flapping'' is borrowed from IP network
engineer jargon ``route flapping'':
\begin{quotation}
``Route flapping is caused by pathological conditions
(hardware errors, software errors, configuration errors, intermittent
errors in communications links, unreliable connections, etc.) within
the network which cause certain reachability information to be
repeatedly advertised and withdrawn.'' \cite{wikipedia-route-flapping}
\end{quotation}
\paragraph{Flapping due to constantly changing network partitions and/or server crashes and restarts}
Currently, Machi does not attempt to dampen, smooth, or ignore recent
history of constantly flapping peer servers. If necessary, a failure
detector such as the $\phi$ accrual failure detector
\cite{phi-accrual-failure-detector} can be used to help manage such
situations.
\paragraph{Flapping due to asymmetric network partitions}
The simulator's behavior during stable periods where at least one node
is the victim of an asymmetric network partition is \ldots weird,
wonderful, and something I don't completely understand yet. This is
another place where we need more eyes reviewing and trying to poke
holes in the algorithm.
In cases where any node is a victim of an asymmetric network
partition, the algorithm oscillates in a very predictable way: each
server $S$ makes the same $P_{new}$ projection at epoch $E$ that $S$ made
during a previous recent epoch $E-\delta$ (where $\delta$ is small, usually
much less than 10). However, at least one node makes a suggestion that
makes rough consensus impossible. When any epoch $E$ is not
acceptable (because some node disagrees about something, e.g.,
which nodes are down),
the result is more new rounds of suggestions that create a repeating
loop that lasts as long as the asymmetric partition lasts.
From the perspective of $S$'s chain manager, the pattern of this
infinite loop is easy to detect: $S$ inspects the pattern of the last
$L$ projections that it has suggested, e.g., the last 10.
Tiny details such as the epoch number and creation timestamp will
differ, but the major details such as UPI list and repairing list are
the same.
If the major details of the last $L$ projections authored and
suggested by $S$ are the same, then $S$ unilaterally decides that it
is ``flapping'' and enters flapping state. See
Section~\ref{sub:flapping-state} for additional discussion of the
flapping state.
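A sketch of that detection rule; {\tt major\_details/1} is a
hypothetical helper that strips the epoch number, timestamps, and
other minor fields from a projection:

\begin{verbatim}
%% Sketch: S is flapping if the major details (UPI list, repairing
%% list, etc.) of its last L suggestions are all the same.
flapping_p(RecentProjs, L) when length(RecentProjs) >= L ->
    LastL = lists:sublist(RecentProjs, L),
    case lists:usort([major_details(P) || P <- LastL]) of
        [_SamePattern] -> true;
        _Differing     -> false
    end;
flapping_p(_RecentProjs, _L) ->
    false.
\end{verbatim}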
\subsubsection{When to calculate a new projection}
\label{ssub:when-to-calc}
@ -897,7 +949,7 @@ calculate a new projection. The timer interval is typically
0.5--2.0 seconds, if the cluster has been stable. A client may use an
external API call to trigger a new projection, e.g., if that client
knows that an environment change has happened and wishes to trigger a
response prior to the next timer firing (e.g.~at state $C200$).
It's recommended that the timer interval be staggered according to the
participant ranking rules in Section~\ref{sub:ranking-projections};
@ -918,14 +970,15 @@ done by state $C110$ and that writing a public projection is done by
states $C300$ and $C310$.

Broadly speaking, there are a number of decisions made in all three
columns of Figure~\ref{fig:flowchart} to decide if and when any type
of projection should be written at all. Sometimes, the best action is
to do nothing.
\subsubsection{Column A: Is there any reason to change?}

The main task of the flowchart states in Column~A is to calculate a
new projection $P_{new}$ and perhaps also the inner projection
$P_{new2}$ if we're in flapping mode. Then we try to figure out which
projection has the greatest merit: our current projection
$P_{current}$, the new projection $P_{new}$, or the latest epoch
$P_{latest}$. If our local $P_{current}$ projection is best, then
@ -958,7 +1011,7 @@ The main decisions that states in Column B need to make are:
It's notable that if $P_{new}$ is truly the best projection available
at the moment, it must always first be written to everyone's
public projection stores and only afterward processed through another
monitor \& calculate loop through the flowchart.
\subsubsection{Column C: How do I act?}
@ -1000,14 +1053,14 @@ therefore the suggested projections at epoch $E$ are not unanimous.
\paragraph{\#2: The transition from current $\rightarrow$ new projection is
safe}

Given the current projection
$P_{current}$, the projection $P_{latest}$ is evaluated by numerous
rules and invariants, relative to $P_{current}$.
If any such rule or invariant is
violated/false, then the local server will discard $P_{latest}$.

The transition from $P_{current} \rightarrow P_{latest}$ is protected
by rules and invariants that include:
\begin{enumerate}
\item The Erlang data types of all record members are correct.
@ -1020,13 +1073,161 @@ by rules and invariants that include:
The same re-reordering restriction applies to all
servers in $P_{latest}$'s repairing list relative to
$P_{current}$'s repairing list.
\item Any server $S$ that is newly-added to $P_{latest}$'s UPI list must
appear at the tail of the UPI list. Furthermore, $S$ must have been in
$P_{current}$'s repairing list and had successfully completed file
repair prior to $S$'s promotion from the repairing list to the tail
of the UPI list.
\end{enumerate}
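As one concrete illustration, the UPI-related checks in the list above
might look like this sketch (plain list manipulation; no real Machi
record definitions are used):

\begin{verbatim}
%% Sketch: servers newly added to the UPI list must (a) be appended at
%% the tail and (b) come from the old repairing list, with no
%% reordering of the previously existing UPI members.
upi_transition_ok(OldUPI, OldRepairing, NewUPI) ->
    Added = NewUPI -- OldUPI,
    lists:prefix(OldUPI, NewUPI)                   % no reordering
        andalso lists:suffix(Added, NewUPI)        % (a) tail append
        andalso (Added -- OldRepairing) =:= [].    % (b) repaired first
\end{verbatim}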
\subsection{Additional discussion of flapping state}
\label{sub:flapping-state}
All $P_{new}$ projections
calculated while in flapping state have additional diagnostic
information added, including:
\begin{itemize}
\item Flag: server $S$ is in flapping state.
\item Epoch number \& wall clock timestamp when $S$ entered flapping state.
\item The collection of all other known participants who are also
flapping (with respective starting epoch numbers).
\item A list of nodes that are suspected of being partitioned, called the
``hosed list''. The hosed list is a union of all other hosed list
members that are ever witnessed, directly or indirectly, by a server
while in flapping state.
\end{itemize}
\subsubsection{Flapping diagnostic data accumulates}
While in flapping state, this diagnostic data is gathered from
all available participants and merged together in a CRDT-like manner.
Once added to the diagnostic data list, a datum remains until
$S$ drops out of flapping state. When flapping state stops, all
accumulated diagnostic data is discarded.
This accumulation of diagnostic data in the projection data
structure acts in part as a substitute for a separate gossip protocol.
However, since all participants are already communicating with each
other via read \& writes to each others' projection stores, the diagnostic
data can propagate in a gossip-like manner via the projection stores.
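The merge behaves like a grow-only set union, as this sketch suggests
(the record and field names are invented for illustration):

\begin{verbatim}
%% Sketch: CRDT-like merge of flapping diagnostic data. The data only
%% grows while a server remains in flapping state, and all of it is
%% discarded when the server leaves flapping state.
-record(flap, {start_epochs = #{},    % server => epoch of flap start
               hosed = []}).          % suspected-partitioned servers

merge_flap(#flap{start_epochs = E1, hosed = H1},
           #flap{start_epochs = E2, hosed = H2}) ->
    #flap{start_epochs = maps:merge(E2, E1),
          hosed = lists:usort(H1 ++ H2)}.
\end{verbatim}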
\subsubsection{Flapping example (part 1)}
\label{ssec:flapping-example}
Any server listed in the ``hosed list'' is suspected of having some
kind of network communication problem with some other server. For
example, let's examine a scenario involving a Machi cluster of servers
$a$, $b$, $c$, $d$, and $e$. Assume there exists an asymmetric network
partition such that messages from $a \rightarrow b$ are dropped, but
messages from $b \rightarrow a$ are delivered.\footnote{If this
partition were happening at or below the level of a reliable
delivery network protocol like TCP, then communication in {\em both}
directions would be affected by an asymmetric partition.
However, in this model, we are
assuming that a ``message'' lost during a network partition is a
uni-directional projection API call or its response.}
Once a participant $S$ enters flapping state, it starts gathering the
flapping starting epochs and hosed lists from all of the other
projection stores that are available. The sum of this info is added
to all projections calculated by $S$.
For example, projections authored by $a$ will say that $a$ believes
that $b$ is down.
Likewise, projections authored by $b$ will say that $b$ believes
that $a$ is down.
\subsubsection{The inner projection (flapping example, part 2)}
\label{ssec:inner-projection}
\ldots We continue the example started in the previous subsection\ldots
Eventually, in a gossip-like manner, all other participants will
find that their hosed list is equal to $[a,b]$. Any other
server, for example server $c$, will then calculate another
projection, $P_{new2}$, using the assumption that both $a$ and $b$
are down in addition to all other known unavailable servers.
\begin{itemize}
\item If operating in the default CP mode, both $a$ and $b$ are down
and therefore not eligible to participate in Chain Replication.
%% The chain may continue service if a $c$, $d$, $e$ and/or witness
%% servers can try to form a correct UPI list for the chain.
This may cause an availability problem for the chain: we may not
have a quorum of participants (real or witness-only) to form a
correct UPI chain.
\item If operating in AP mode, $a$ and $b$ can still form two separate
chains of length one, using UPI lists of $[a]$ and $[b]$, respectively.
\end{itemize}
This re-calculation, $P_{new2}$, of the new projection is called an
``inner projection''. The inner projection definition is nested
inside of its parent projection, using the same flapping diagnostic
data used for other flapping status tracking.
When humming consensus has determined that a projection state change
is necessary and is also safe (relative to both the outer and inner
projections), then the outer projection\footnote{With the inner
projection $P_{new2}$ nested inside of it.} is written to
the local private projection store.
With respect to future iterations of
humming consensus, the inner projection is ignored.
However, with respect to Chain Replication, the server's subsequent
behavior
{\em will consider the inner projection only}. The inner projection
is used to order the UPI and repairing parts of the chain and trigger
wedge/un-wedge behavior. The inner projection is also
advertised to Machi clients.
The epoch of the inner projection, $E^{inner}$, is always less than or
equal to the epoch of the outer projection, $E$. The $E^{inner}$
epoch typically only changes when new servers are added to the hosed
list.
To attempt a rough analogy, the outer projection is the carrier wave
that is used to transmit the inner projection and its accompanying
gossip of diagnostic data.
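In record form, the nesting might look like this sketch; the real
projection record has many more fields, and these names are
placeholders only:

\begin{verbatim}
%% Sketch: an outer projection carrying an optional inner projection.
%% Chain Replication behavior follows the inner projection when it is
%% defined; humming consensus looks only at the outer fields.
-record(proj, {epoch,             % E (outer); E_inner =< E
               upi = [],
               repairing = [],
               flap,              % flapping diagnostic data | undefined
               inner}).           % nested #proj{} | undefined

effective_projection(#proj{inner = undefined} = P) -> P;
effective_projection(#proj{inner = Inner})         -> Inner.
\end{verbatim}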
\subsubsection{Outer projection churn, inner projection stability}
One of the intriguing features of humming consensus's reaction to
asymmetric partition is that flapping behavior continues for as long as
any asymmetric partition exists.
\subsubsection{Stability in symmetric partition cases}
Although humming consensus hasn't been formally proven to handle all
asymmetric and symmetric partition cases, the current implementation
appears to converge rapidly to a single chain state in all symmetric
partition cases. This is in contrast to asymmetric partition cases,
where ``flapping'' will continue on every humming consensus iteration
until all asymmetric partitions disappear. A formal proof is an area of
future work.
\subsubsection{Leaving flapping state and discarding inner projection}
There are two events that can trigger leaving flapping state.
\begin{itemize}
\item A server $S$ in flapping state notices that its long history of
repeatedly suggesting the same projection will be broken:
$S$ calculates some differing projection instead.
This change in projection history happens whenever a perceived network
partition changes in any way.
\item Server $S$ reads a public projection suggestion, $P_{noflap}$, that is
authored by another server $S'$, and that $P_{noflap}$ no longer
contains the flapping start epoch for $S'$ that is present in the
history that $S$ has maintained while $S$ has been in
flapping state.
\end{itemize}
When either trigger event happens, server $S$ will exit flapping state. All
new projections authored by $S$ will have all flapping diagnostic data
removed. This includes stopping use of the inner projection: the UPI
list of the inner projection is copied to the outer projection's UPI
list, to avoid a drastic change in UPI membership.
\subsection{Ranking projections}
\label{sub:ranking-projections}
@ -1279,7 +1480,7 @@ as the foundation for Machi's data loss prevention techniques.
\begin{figure}
\centering
$
[\overbrace{\underbrace{H_1}_\textbf{Head}, M_{11}, \ldots, T_1,
H_2, M_{21},
\ldots
\underbrace{T_2}_\textbf{Tail}}^\textbf{Chain (U.P.~Invariant preserving)}
@ -1588,7 +1789,7 @@ Manageability, availability and performance in Porcupine: a highly scalable, clu
{\tt http://homes.cs.washington.edu/\%7Elevy/ porcupine.pdf}

\bibitem{chain-replication}
van Renesse, Robbert and Schneider, Fred.
Chain Replication for Supporting High Throughput and Availability.
Proceedings of the 6th Conference on Symposium on Operating Systems
Design \& Implementation (OSDI'04) - Volume 6, 2004.
View file
@ -1489,7 +1489,7 @@ In Usenix ATC 2009.
{\tt https://www.usenix.org/legacy/event/usenix09/ tech/full\_papers/terrace/terrace.pdf}

\bibitem{chain-replication}
van Renesse, Robbert and Schneider, Fred.
Chain Replication for Supporting High Throughput and Availability.
Proceedings of the 6th Conference on Symposium on Operating Systems
Design \& Implementation (OSDI'04) - Volume 6, 2004.
View file
@ -0,0 +1,12 @@
####################### patterns for general errors in dep modules:
^protobuffs\.erl:
^protobuffs_[a-z_]*\.erl:
^leexinc\.hrl:[0-9][0-9]*:
^machi_chain_manager1.erl:[0-9][0-9]*: Guard test RetrospectiveP::'false' =:= 'true' can never succeed
^machi_pb\.erl:[0-9][0-9]*:
^pokemon_pb\.erl:[0-9][0-9]*:
####################### patterns for unknown functions:
^ basho_bench_config:get/2
^ erl_prettypr:format/1
^ erl_syntax:form_list/1
^ machi_partition_simulator:get/1
View file
@ -18,9 +18,10 @@
%%
%% -------------------------------------------------------------------
%% @doc Now 4GiBytes, could be up to 64bit due to PB message limit of
%% chunk size
-define(DEFAULT_MAX_FILE_SIZE, ((1 bsl 32) - 1)).
-define(MAX_FILE_SIZE, 256*1024*1024). % 256 MBytes
-define(MAX_CHUNK_SIZE, ((1 bsl 32) - 1)).
%% -define(DATA_DIR, "/Volumes/SAM1/seq-tests/data").
-define(DATA_DIR, "./data").
-define(MINIMUM_OFFSET, 1024).

%% 0th draft of checksum typing with 1st byte.
@ -29,35 +30,8 @@
-define(CSUM_TAG_SERVER_SHA, 2). % Server-generated SHA1
-define(CSUM_TAG_SERVER_REGEN_SHA, 3). % Server-regenerated SHA1
-define(CSUM_TAG_NONE_ATOM, none).
-define(CSUM_TAG_CLIENT_SHA_ATOM, client_sha).
-define(CSUM_TAG_SERVER_SHA_ATOM, server_sha).
-define(CSUM_TAG_SERVER_REGEN_SHA_ATOM, server_regen_sha).
%% Protocol Buffers goop
-define(PB_MAX_MSG_SIZE, (33*1024*1024)).
-define(PB_PACKET_OPTS, [{packet, 4}, {packet_size, ?PB_MAX_MSG_SIZE}]).
%% TODO: it's used in flu_sup and elsewhere, change this to suitable name
-define(TEST_ETS_TABLE, test_ets_table).
-define(DEFAULT_COC_NAMESPACE, "").
-define(DEFAULT_COC_LOCATOR, 0).
-record(ns_info, {
version = 0 :: machi_dt:namespace_version(),
name = <<>> :: machi_dt:namespace(),
locator = 0 :: machi_dt:locator()
}).
-record(append_opts, {
chunk_extra = 0 :: machi_dt:chunk_size(),
preferred_file_name :: 'undefined' | machi_dt:file_name_s(),
flag_fail_preferred = false :: boolean()
}).
-record(read_opts, {
no_checksum = false :: boolean(),
no_chunk = false :: boolean(),
needs_trimmed = false :: boolean()
}).
View file
@ -1,20 +0,0 @@
%% machi merkle tree records
-record(naive, {
chunk_size = 1048576 :: pos_integer(), %% default 1 MB
recalc = true :: boolean(),
root :: 'undefined' | binary(),
lvl1 = [] :: [ binary() ],
lvl2 = [] :: [ binary() ],
lvl3 = [] :: [ binary() ],
leaves = [] :: [ { Offset :: pos_integer(),
Size :: pos_integer(),
Csum :: binary()} ]
}).
-record(mt, {
filename :: string(),
tree :: #naive{},
backend = 'naive' :: 'naive'
}).
View file
@ -1,6 +1,6 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
@ -22,11 +22,10 @@
-define(MACHI_PROJECTION_HRL, true).

-type pv1_consistency_mode() :: 'ap_mode' | 'cp_mode'.
-type pv1_chain_name():: atom().
-type pv1_csum() :: binary().
-type pv1_epoch() :: {pv1_epoch_n(), pv1_csum()}.
-type pv1_epoch_n() :: non_neg_integer().
-type pv1_server() :: atom().
-type pv1_server() :: atom() | binary().
-type pv1_timestamp() :: {non_neg_integer(), non_neg_integer(), non_neg_integer()}.

-record(p_srvr, {
@ -56,7 +55,6 @@
epoch_number :: pv1_epoch_n() | ?SPAM_PROJ_EPOCH,
epoch_csum :: pv1_csum(),
author_server :: pv1_server(),
chain_name = ch_not_def_yet :: pv1_chain_name(),
all_members :: [pv1_server()],
witnesses = [] :: [pv1_server()],
creation_time :: pv1_timestamp(),
@ -77,16 +75,4 @@
%% create a consistent projection ranking score.
-define(MAX_CHAIN_LENGTH, 64).
-record(chain_def_v1, {
name :: atom(), % chain name
mode :: pv1_consistency_mode(),
full = [] :: [p_srvr()],
witnesses = [] :: [p_srvr()],
old_full = [] :: [pv1_server()], % guard against some races
old_witnesses=[] :: [pv1_server()], % guard against some races
local_run = [] :: [pv1_server()], % must be tailored to each machine!
local_stop = [] :: [pv1_server()], % must be tailored to each machine!
props = [] :: list() % proplist for other related info
}).
-endif. % !MACHI_PROJECTION_HRL
View file
@ -1,56 +0,0 @@
#!/bin/sh
echo "Step: Verify that the required entries in /etc/hosts are present"
for i in 1 2 3; do
grep machi$i /etc/hosts | egrep -s '^127.0.0.1' > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo ""
echo "'grep -s machi$i' failed. Aborting, sorry."
exit 1
fi
ping -c 1 machi$i > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo ""
echo "Ping attempt on host machi$i failed. Aborting."
echo ""
ping -c 1 machi$i
exit 1
fi
done
echo "Step: add a verbose logging option to app.config"
for i in 1 2 3; do
ed ./dev/dev$i/etc/app.config <<EOF > /dev/null 2>&1
/verbose_confirm
a
{chain_manager_opts, [{private_write_verbose_confirm,true}]},
{stability_time, 1},
.
w
q
EOF
done
echo "Step: start three three Machi application instances"
for i in 1 2 3; do
./dev/dev$i/bin/machi start
./dev/dev$i/bin/machi ping
if [ $? -ne 0 ]; then
echo "Sorry, a 'ping' check for instance dev$i failed. Aborting."
exit 1
fi
done
echo "Step: configure one chain to start a Humming Consensus group with three members"
# Note: $CWD of each Machi proc is two levels below the source code root dir.
LIFECYCLE000=../../priv/quick-admin-examples/demo-000
for i in 3 2 1; do
./dev/dev$i/bin/machi-admin quick-admin-apply $LIFECYCLE000 machi$i
if [ $? -ne 0 ]; then
echo "Sorry, 'machi-admin quick-admin-apply failed' on machi$i. Aborting."
exit 1
fi
done
exit 0
View file
@ -1,93 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
# If this Vagrant box has not been downloaded before (e.g. using "vagrant box add"),
# then Vagrant will automatically download the VM image from HashiCorp.
config.vm.box = "hashicorp/precise64"
# If using a FreeBSD box, Bash may not be installed.
# Use the config.ssh.shell setting to specify an alternate shell.
# Note, however, that any code in the 'config.vm.provision' section
# would then have to use this shell's syntax!
# config.ssh.shell = "/bin/csh -l"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matches a bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |vb|
# Display the VirtualBox GUI when booting the machine
# vb.gui = true
# Customize the amount of memory on the VM:
vb.memory = "512"
end
#
# View the documentation for the provider you are using for more
# information on available options.
# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision "shell", inline: <<-SHELL
# Install prerequisites
# Support here for FreeBSD is experimental
apt-get update ; sudo apt-get install -y git sudo rsync ; # Ubuntu Linux
env ASSUME_ALWAYS_YES=yes pkg install -f git sudo rsync ; # FreeBSD 10
# Install dependent packages, using slf-configurator
git clone https://github.com/slfritchie/slf-configurator.git
chown -R vagrant ./slf-configurator
(cd slf-configurator ; sudo sh -x ./ALL.sh)
echo 'export PATH=${PATH}:/usr/local/erlang/17.5/bin' >> ~vagrant/.bashrc
export PATH=${PATH}:/usr/local/erlang/17.5/bin
## echo 'set path = ( $path /usr/local/erlang/17.5/bin )' >> ~vagrant/.cshrc
## setenv PATH /usr/local/erlang/17.5/bin:$PATH
git clone https://github.com/basho/machi.git
(cd machi ; git checkout master ; make && make test )
chown -R vagrant ./machi
SHELL
end
View file
@ -36,7 +36,7 @@ while (<I>) {
$indent = " " x ($count * 4);
s/^#*\s*[0-9. ]*//;
$anchor = "n$label";
printf T1 "%s+ [%s. %s](#%s)\n", $indent, $label, $_, $anchor;
printf T1 "%s+ [%s %s](#%s)\n", $indent, $label, $_, $anchor;
printf T2 "<a name=\"%s\">\n", $anchor;
$line =~ s/(#+)\s*[0-9. ]*/$1 $label. /;
print T2 $line;
View file
@ -1 +0,0 @@
{host, "localhost", []}.
View file
@ -1,4 +0,0 @@
{flu,f1,"localhost",20401,[]}.
{flu,f2,"localhost",20402,[]}.
{flu,f3,"localhost",20403,[]}.
{chain,c1,[f1,f2,f3],[]}.
View file
@ -1,4 +0,0 @@
{flu,f4,"localhost",20404,[]}.
{flu,f5,"localhost",20405,[]}.
{flu,f6,"localhost",20406,[]}.
{chain,c2,[f4,f5,f6],[]}.
View file
@ -1,7 +0,0 @@
{host, "machi1", []}.
{host, "machi2", []}.
{host, "machi3", []}.
{flu,f1,"machi1",20401,[]}.
{flu,f2,"machi2",20402,[]}.
{flu,f3,"machi3",20403,[]}.
{chain,c1,[f1,f2,f3],[]}.
View file
@ -4,7 +4,6 @@ if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then
echo '$TRAVIS_PULL_REQUEST is false, skipping tests'
exit 0
else
echo '$TRAVIS_PULL_REQUEST is not false ($TRAVIS_PULL_REQUEST), running tests'
echo '$TRAVIS_PULL_REQUEST is not false, running tests'
make test
make dialyzer
fi
View file
@ -1,20 +1,15 @@
{require_otp_vsn, "17|18"}.
{require_otp_vsn, "17"}.
%%% {erl_opts, [warnings_as_errors, {parse_transform, lager_transform}, debug_info]}.
{erl_opts, [{parse_transform, lager_transform}, debug_info]}.
{edoc_opts, [{dir, "./edoc"}]}.

{deps, [
{cuttlefish, ".*", {git, "git://github.com/basho/cuttlefish.git", {branch, "develop"}}},
{sext, ".*", {git, "git://github.com/basho/sext.git", {branch, "master"}}},
{eleveldb, ".*", {git, "git://github.com/basho/eleveldb.git", {branch, "develop"}}},
{lager, ".*", {git, "git://github.com/basho/lager.git", {tag, "2.2.0"}}},
{protobuffs, "0.8.*", {git, "git://github.com/basho/erlang_protobuffs.git", {tag, "0.8.1p4"}}},
{riak_dt, ".*", {git, "git://github.com/basho/riak_dt.git", {branch, "develop"}}},
{ranch, ".*", {git, "git://github.com/ninenines/ranch.git", {branch, "master"}}},
{node_package, ".*", {git, "git://github.com/basho/node_package.git", {branch, "develop"}}},
{eper, ".*", {git, "git://github.com/basho/eper.git", {tag, "0.92-basho1"}}},
{eper, ".*", {git, "git://github.com/basho/eper.git", {tag, "0.78"}}}
{cluster_info, ".*", {git, "git://github.com/basho/cluster_info", {branch, "develop"}}}
]}.

{sub_dirs, ["rel", "apps/machi"]}.
View file
@ -1,35 +1,25 @@
[
{machi, [
%% Data directory for all FLUs.
{flu_data_dir, "{{platform_data_dir}}/flu"},
{flu_data_dir, "{{platform_data_dir}}"},
%% FLU config directory
{flu_config_dir, "{{platform_etc_dir}}/flu-config"},
%% Chain config directory
{chain_config_dir, "{{platform_etc_dir}}/chain-config"},
%% FLUs to start at app start.
%% This task has moved to machi_flu_sup and machi_lifecycle_mgr.
{initial_flus, [
%% Remember, this is a list, so separate all tuples
%% with a comma.
%%
%% {Name::atom(), Port::pos_integer(), proplist()}
%%
%% For example: {my_name_is_a, 12500, []}
]},
%% Number of metadata manager processes to run per FLU.
%% Default = 10
%% {metadata_manager_count, 2},
%% Default options for chain manager processes.
%% {chain_manager_opts, [{private_write_verbose,true},
%% {private_write_verbose_confirm,true}]},
%% Platform vars (mirror of reltool packaging)
{platform_data_dir, "{{platform_data_dir}}"},
{platform_etc_dir, "{{platform_etc_dir}}"},
%% Do not delete, do not put Machi config items after this line.
{final_comma_stopper, do_not_delete}
]
},
{lager, [
{error_logger_hwm, 5000} % lager's default of 50/sec is too low
]
}
].
View file
@ -22,41 +22,23 @@ cd $RUNNER_BASE_DIR
SCRIPT=`basename $0`

usage() {
    echo "Usage: $SCRIPT { quick-admin-check | quick-admin-apply | "
    echo "Usage: $SCRIPT { test | "
    echo "               top }"
}

case "$1" in
    quick-admin-check)
    test)
        # Make sure the local node IS running
        node_up_check
        shift
        NODE_NAME=${NAME_ARG#* } # target machi server node name
        IN_FILE="$1"
        # Parse out the node name to pass to the client
        NODE_NAME=${NAME_ARG#* }
        $ERTS_PATH/erl -noshell -noinput $NAME_PARAM machi_test$NAME_HOST $COOKIE_ARG \
        $ERTS_PATH/erl -noshell $NAME_PARAM machi_test$NAME_HOST $COOKIE_ARG \
            -remsh $NODE_NAME \
            -pa $RUNNER_LIB_DIR/basho-patches \
            -eval "Me = self(), spawn('"$NODE_NAME"', fun() -> X = (catch(machi_lifecycle_mgr:quick_admin_sanity_check(\"$IN_FILE\"))), Me ! {res, X} end), XX = receive {res, Res} -> Res after 10*1000 -> timeout end, io:format(user, \"Result: ~p\n\", [XX]), case XX of \
            -eval "case catch(machi:client_test(\"$NODE_NAME\")) of \
            ok -> init:stop(); \
            _ -> init:stop(1) \
            end."
        ;;
    quick-admin-apply)
        # Make sure the local node IS running
        node_up_check
        shift
        NODE_NAME=${NAME_ARG#* } # target machi server node name
        IN_FILE="$1"
        RELATIVE_HOST="$2"
        $ERTS_PATH/erl -noshell -noinput $NAME_PARAM machi_test$NAME_HOST $COOKIE_ARG \
            -remsh $NODE_NAME \
            -eval "Me = self(), spawn('"$NODE_NAME"', fun() -> X = (catch(machi_lifecycle_mgr:quick_admin_apply(\"$IN_FILE\", \"$RELATIVE_HOST\"))), Me ! {res, X} end), XX = receive {res, Res} -> Res after 10*1000 -> timeout end, io:format(user, \"Result: ~p\n\", [XX]), case XX of \
            ok -> init:stop(); \
            _ -> init:stop(1) \
            end."
View file

@ -1,16 +0,0 @@
#! /bin/sh
#
# Example usage: gen_dev dev4 vars.src vars
#
# Generate an overlay config for devNNN from vars.src and write to vars
#
NAME=$1
TEMPLATE=$2
VARFILE=$3
NODE="$NAME@127.0.0.1"
echo "Generating $NAME - node='$NODE'"
sed -e "s/@NODE@/$NODE/" \
< $TEMPLATE > $VARFILE
View file
@ -47,7 +47,6 @@
{overlay, [
{mkdir, "data"},
{mkdir, "data/^PRESERVE"},
{mkdir, "log"}, {mkdir, "log"},
%% Copy base files for starting and interacting w/ node %% Copy base files for starting and interacting w/ node
@ -94,20 +93,6 @@
{template, "files/vm.args", "etc/vm.args"}, {template, "files/vm.args", "etc/vm.args"},
{template, "files/app.config", "etc/app.config"}, {template, "files/app.config", "etc/app.config"},
{mkdir, "etc/chain-config"},
{mkdir, "etc/flu-config"},
{mkdir, "etc/pending"},
{mkdir, "etc/rejected"},
%% Experiment: quick-admin
{mkdir, "etc/quick-admin-archive"},
{mkdir, "priv"},
{mkdir, "priv/quick-admin-examples"},
{copy, "../priv/quick-admin-examples/000", "priv/quick-admin-examples"},
{copy, "../priv/quick-admin-examples/001", "priv/quick-admin-examples"},
{copy, "../priv/quick-admin-examples/002", "priv/quick-admin-examples"},
{copy, "../priv/quick-admin-examples/demo-000", "priv/quick-admin-examples/demo-000"},
{mkdir, "lib/basho-patches"} {mkdir, "lib/basho-patches"}
%% {copy, "../apps/machi/ebin/etop_txt.beam", "lib/basho-patches"} %% {copy, "../apps/machi/ebin/etop_txt.beam", "lib/basho-patches"}
]}. ]}.
View file
@ -1,9 +1,6 @@
%% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*-
%% ex: ft=erlang ts=4 sw=4 et
%% NOTE: When modifying this file, also keep its near cousin
%% config file rel/vars/dev_vars.config.src in sync!
%% Platform-specific installation paths
{platform_bin_dir, "./bin"}.
{platform_data_dir, "./data"}.
View file
@ -1,48 +0,0 @@
%% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*-
%% ex: ft=erlang ts=4 sw=4 et
%% NOTE: When modifying this file, also keep its near cousin
%% config file rel/vars/dev_vars.config.src in sync!
%% Platform-specific installation paths
{platform_bin_dir, "./bin"}.
{platform_data_dir, "./data"}.
{platform_etc_dir, "./etc"}.
{platform_lib_dir, "./lib"}.
{platform_log_dir, "./log"}.
%%
%% etc/app.config
%%
{sasl_error_log, "{{platform_log_dir}}/sasl-error.log"}.
{sasl_log_dir, "{{platform_log_dir}}/sasl"}.
%% lager
{console_log_default, file}.
%%
%% etc/vm.args
%%
{node, "@NODE@"}.
{crash_dump, "{{platform_log_dir}}/erl_crash.dump"}.
%%
%% bin/machi
%%
{runner_script_dir, "\`cd \\`dirname $0\\` 1>/dev/null && /bin/pwd\`"}.
{runner_base_dir, "{{runner_script_dir}}/.."}.
{runner_etc_dir, "$RUNNER_BASE_DIR/etc"}.
{runner_log_dir, "$RUNNER_BASE_DIR/log"}.
{runner_lib_dir, "$RUNNER_BASE_DIR/lib"}.
{runner_patch_dir, "$RUNNER_BASE_DIR/lib/basho-patches"}.
{pipe_dir, "/tmp/$RUNNER_BASE_DIR/"}.
{runner_user, ""}.
{runner_wait_process, "machi_flu_sup"}.
{runner_ulimit_warn, 65536}.
%%
%% cuttlefish
%%
{cuttlefish, ""}. % blank = off
{cuttlefish_conf, "machi.conf"}.
View file
@ -1,10 +1,13 @@
{application, machi, [
{description, "A village of write-once files."},
{vsn, "0.0.1"},
{vsn, "0.0.0"},
{applications, [kernel, stdlib, crypto, cluster_info, ranch]},
{applications, [kernel, stdlib, crypto]},
{mod,{machi_app,[]}},
{registered, []},
{env, [
%% Don't use this static env for defaults, or we will fall into config hell.
{flu_list,
[
%%%%%% {flu_a, 32900, "./data.flu_a"}
]}
]}
]}.
View file
@ -48,10 +48,9 @@ enum Mpb_GeneralStatusCode {
PARTITION = 4;
NOT_WRITTEN = 5;
WRITTEN = 6;
TRIMMED = 7; // The whole file was trimmed
NO_SUCH_FILE = 8;
PARTIAL_READ = 9;
BAD_EPOCH = 10;
NO_SUCH_FILE = 7;
PARTIAL_READ = 8;
BAD_EPOCH = 9;
BAD_JOSS = 255; // Only for testing by the Taipan
}
@ -87,14 +86,6 @@ message Mpb_ChunkCSum {
optional bytes csum = 2;
}
message Mpb_Chunk {
required uint64 offset = 1;
required string file_name = 2;
required bytes chunk = 3;
// TODO: must be required, in future?
optional Mpb_ChunkCSum csum = 4;
}
// epoch_id() type
message Mpb_EpochID {
required uint32 epoch_number = 1;
@ -170,18 +161,11 @@ message Mpb_AuthResp {
// High level API: append_chunk() request & response
message Mpb_AppendChunkReq {
// General namespace arguments
/* In single chain/non-clustered environment, use namespace="" */
required string namespace = 1;

required string prefix = 10;
required bytes chunk = 11;
required Mpb_ChunkCSum csum = 12;

optional uint32 chunk_extra = 20;
optional string preferred_file_name = 21;
/* Fail the operation if our preferred file name is not available */
optional bool flag_fail_preferred = 22 [default=false];

optional bytes placement_key = 1;
required string prefix = 2;
required bytes chunk = 3;
required Mpb_ChunkCSum csum = 4;
optional uint32 chunk_extra = 5;
}
message Mpb_AppendChunkResp {
@ -193,7 +177,10 @@ message Mpb_AppendChunkResp {
// High level API: write_chunk() request & response
message Mpb_WriteChunkReq {
required Mpb_Chunk chunk = 10;
required string file = 1;
required uint64 offset = 2;
required bytes chunk = 3;
required Mpb_ChunkCSum csum = 4;
}

message Mpb_WriteChunkResp {
@ -203,34 +190,33 @@ message Mpb_WriteChunkResp {
// High level API: read_chunk() request & response
message Mpb_ReadChunkReq {
// No namespace arguments are required because NS is embedded
// inside of the file name.
required Mpb_ChunkPos chunk_pos = 10;
required string file = 1;
required uint64 offset = 2;
required uint32 size = 3;

// Use flag_no_checksum=non-zero to skip returning the chunk's checksum.
// TODO: not implemented yet.
optional bool flag_no_checksum = 20 [default=false];
optional uint32 flag_no_checksum = 4 [default=0];

// Use flag_no_chunk=non-zero to skip returning the chunk (which
// only makes sense if flag_no_checksum is not set).
// TODO: not implemented yet.
optional bool flag_no_chunk = 21 [default=false];
optional uint32 flag_no_chunk = 5 [default=0];

// TODO: not implemented yet.
optional bool flag_needs_trimmed = 22 [default=false];
}
message Mpb_ReadChunkResp {
required Mpb_GeneralStatusCode status = 1;
repeated Mpb_Chunk chunks = 2;
repeated Mpb_ChunkPos trimmed = 3;
optional bytes chunk = 2;
optional Mpb_ChunkCSum csum = 3;
}
// High level API: trim_chunk() request & response
message Mpb_TrimChunkReq {
required Mpb_ChunkPos chunk_pos = 1;
required string file = 1;
required uint64 offset = 2;
required uint32 size = 3;
}

message Mpb_TrimChunkResp {
@ -254,8 +240,6 @@ message Mpb_ChecksumListResp {
// High level API: list_files() request & response
message Mpb_ListFilesReq {
// TODO: Add flag for file glob/regexp/other filter type
// TODO: What else could go wrong?
}

message Mpb_ListFilesResp {
@ -342,17 +326,18 @@ message Mpb_ProjectionV1 {
required uint32 epoch_number = 1; required uint32 epoch_number = 1;
required bytes epoch_csum = 2; required bytes epoch_csum = 2;
required string author_server = 3; required string author_server = 3;
required string chain_name = 4; repeated string all_members = 4;
repeated string all_members = 5; repeated string witnesses = 5;
repeated string witnesses = 6; required Mpb_Now creation_time = 6;
required Mpb_Now creation_time = 7; required Mpb_Mode mode = 7;
required Mpb_Mode mode = 8; repeated string upi = 8;
repeated string upi = 9; repeated string repairing = 9;
repeated string repairing = 10; repeated string down = 10;
repeated string down = 11; optional bytes opaque_flap = 11;
required bytes opaque_dbg = 12; optional bytes opaque_inner = 12;
required bytes opaque_dbg2 = 13; required bytes opaque_dbg = 13;
repeated Mpb_MembersDictEntry members_dict = 14; required bytes opaque_dbg2 = 14;
repeated Mpb_MembersDictEntry members_dict = 15;
} }
////////////////////////////////////////// //////////////////////////////////////////
@ -367,7 +352,6 @@ message Mpb_ProjectionV1 {
// append_chunk() // append_chunk()
// write_chunk() // write_chunk()
// read_chunk() // read_chunk()
// trim_chunk()
// checksum_list() // checksum_list()
// list_files() // list_files()
// wedge_status() // wedge_status()
@ -388,20 +372,12 @@ message Mpb_ProjectionV1 {
// Low level API: append_chunk() // Low level API: append_chunk()
message Mpb_LL_AppendChunkReq { message Mpb_LL_AppendChunkReq {
// General namespace arguments required Mpb_EpochID epoch_id = 1;
required uint32 namespace_version = 1; optional bytes placement_key = 2;
required string namespace = 2; required string prefix = 3;
required uint32 locator = 3; required bytes chunk = 4;
required Mpb_ChunkCSum csum = 5;
required Mpb_EpochID epoch_id = 10; optional uint32 chunk_extra = 6;
required string prefix = 11;
required bytes chunk = 12;
required Mpb_ChunkCSum csum = 13;
optional uint32 chunk_extra = 20;
optional string preferred_file_name = 21;
/* Fail the operation if our preferred file name is not available */
optional bool flag_fail_preferred = 22 [default=false];
} }
message Mpb_LL_AppendChunkResp { message Mpb_LL_AppendChunkResp {
@ -413,12 +389,11 @@ message Mpb_LL_AppendChunkResp {
// Low level API: write_chunk() // Low level API: write_chunk()
message Mpb_LL_WriteChunkReq { message Mpb_LL_WriteChunkReq {
// General namespace arguments required Mpb_EpochID epoch_id = 1;
required uint32 namespace_version = 1; required string file = 2;
required string namespace = 2; required uint64 offset = 3;
required bytes chunk = 4;
required Mpb_EpochID epoch_id = 10; required Mpb_ChunkCSum csum = 5;
required Mpb_Chunk chunk = 11;
} }
message Mpb_LL_WriteChunkResp { message Mpb_LL_WriteChunkResp {
@ -428,54 +403,32 @@ message Mpb_LL_WriteChunkResp {
// Low level API: read_chunk() // Low level API: read_chunk()
message Mpb_LL_ReadChunkReq { message Mpb_LL_ReadChunkReq {
// General namespace arguments required Mpb_EpochID epoch_id = 1;
required uint32 namespace_version = 1; required string file = 2;
required string namespace = 2; required uint64 offset = 3;
required uint32 size = 4;
required Mpb_EpochID epoch_id = 10;
required Mpb_ChunkPos chunk_pos = 11;
// Use flag_no_checksum=non-zero to skip returning the chunk's checksum. // Use flag_no_checksum=non-zero to skip returning the chunk's checksum.
// TODO: not implemented yet. // TODO: not implemented yet.
optional bool flag_no_checksum = 20 [default=false]; optional uint32 flag_no_checksum = 5 [default=0];
// Use flag_no_chunk=non-zero to skip returning the chunk (which // Use flag_no_chunk=non-zero to skip returning the chunk (which
// only makes sense if flag_checksum is not set). // only makes sense if flag_checksum is not set).
// TODO: not implemented yet. // TODO: not implemented yet.
optional bool flag_no_chunk = 21 [default=false]; optional uint32 flag_no_chunk = 6 [default=0];
optional bool flag_needs_trimmed = 22 [default=false];
} }
message Mpb_LL_ReadChunkResp { message Mpb_LL_ReadChunkResp {
required Mpb_GeneralStatusCode status = 1; required Mpb_GeneralStatusCode status = 1;
repeated Mpb_Chunk chunks = 2; optional bytes chunk = 2;
repeated Mpb_ChunkPos trimmed = 3; optional Mpb_ChunkCSum csum = 3;
}
// Low level API: trim_chunk()
message Mpb_LL_TrimChunkReq {
// General namespace arguments
required uint32 namespace_version = 1;
required string namespace = 2;
required Mpb_EpochID epoch_id = 10;
required string file = 11;
required uint64 offset = 12;
required uint32 size = 13;
optional bool trigger_gc = 20 [default=false];
}
message Mpb_LL_TrimChunkResp {
required Mpb_GeneralStatusCode status = 1;
} }
// Low level API: checksum_list() // Low level API: checksum_list()
message Mpb_LL_ChecksumListReq { message Mpb_LL_ChecksumListReq {
required string file = 1; required Mpb_EpochID epoch_id = 1;
required string file = 2;
} }
message Mpb_LL_ChecksumListResp { message Mpb_LL_ChecksumListResp {
@ -506,9 +459,7 @@ message Mpb_LL_WedgeStatusReq {
message Mpb_LL_WedgeStatusResp { message Mpb_LL_WedgeStatusResp {
required Mpb_GeneralStatusCode status = 1; required Mpb_GeneralStatusCode status = 1;
optional Mpb_EpochID epoch_id = 2; optional Mpb_EpochID epoch_id = 2;
optional bool wedged_flag = 3; optional uint32 wedged_flag = 3;
optional uint32 namespace_version = 4;
optional string namespace = 5;
} }
// Low level API: delete_migration() // Low level API: delete_migration()
@ -637,12 +588,11 @@ message Mpb_LL_Request {
optional Mpb_LL_AppendChunkReq append_chunk = 30; optional Mpb_LL_AppendChunkReq append_chunk = 30;
optional Mpb_LL_WriteChunkReq write_chunk = 31; optional Mpb_LL_WriteChunkReq write_chunk = 31;
optional Mpb_LL_ReadChunkReq read_chunk = 32; optional Mpb_LL_ReadChunkReq read_chunk = 32;
optional Mpb_LL_TrimChunkReq trim_chunk = 33; optional Mpb_LL_ChecksumListReq checksum_list = 33;
optional Mpb_LL_ChecksumListReq checksum_list = 34; optional Mpb_LL_ListFilesReq list_files = 34;
optional Mpb_LL_ListFilesReq list_files = 35; optional Mpb_LL_WedgeStatusReq wedge_status = 35;
optional Mpb_LL_WedgeStatusReq wedge_status = 36; optional Mpb_LL_DeleteMigrationReq delete_migration = 36;
optional Mpb_LL_DeleteMigrationReq delete_migration = 37; optional Mpb_LL_TruncHackReq trunc_hack = 37;
optional Mpb_LL_TruncHackReq trunc_hack = 38;
} }
message Mpb_LL_Response { message Mpb_LL_Response {
@ -672,10 +622,9 @@ message Mpb_LL_Response {
optional Mpb_LL_AppendChunkResp append_chunk = 30; optional Mpb_LL_AppendChunkResp append_chunk = 30;
optional Mpb_LL_WriteChunkResp write_chunk = 31; optional Mpb_LL_WriteChunkResp write_chunk = 31;
optional Mpb_LL_ReadChunkResp read_chunk = 32; optional Mpb_LL_ReadChunkResp read_chunk = 32;
optional Mpb_LL_TrimChunkResp trim_chunk = 33; optional Mpb_LL_ChecksumListResp checksum_list = 33;
optional Mpb_LL_ChecksumListResp checksum_list = 34; optional Mpb_LL_ListFilesResp list_files = 34;
optional Mpb_LL_ListFilesResp list_files = 35; optional Mpb_LL_WedgeStatusResp wedge_status = 35;
optional Mpb_LL_WedgeStatusResp wedge_status = 36; optional Mpb_LL_DeleteMigrationResp delete_migration = 36;
optional Mpb_LL_DeleteMigrationResp delete_migration = 37; optional Mpb_LL_TruncHackResp trunc_hack = 37;
optional Mpb_LL_TruncHackResp trunc_hack = 38;
} }

View file

@ -73,13 +73,8 @@ verify_file_checksums_local2(Sock1, EpochID, Path0) ->
{ok, FH} -> {ok, FH} ->
File = re:replace(Path, ".*/", "", [{return, binary}]), File = re:replace(Path, ".*/", "", [{return, binary}]),
try try
ReadChunk = fun(F, Offset, Size) -> ReadChunk = fun(_File, Offset, Size) ->
case file:pread(FH, Offset, Size) of file:pread(FH, Offset, Size)
{ok, Bin} ->
{ok, {[{F, Offset, Bin, undefined}], []}};
Err ->
Err
end
end, end,
verify_file_checksums_common(Sock1, EpochID, File, ReadChunk) verify_file_checksums_common(Sock1, EpochID, File, ReadChunk)
after after
@ -90,18 +85,17 @@ verify_file_checksums_local2(Sock1, EpochID, Path0) ->
end. end.
verify_file_checksums_remote2(Sock1, EpochID, File) -> verify_file_checksums_remote2(Sock1, EpochID, File) ->
NSInfo = undefined,
ReadChunk = fun(File_name, Offset, Size) -> ReadChunk = fun(File_name, Offset, Size) ->
?FLU_C:read_chunk(Sock1, NSInfo, EpochID, ?FLU_C:read_chunk(Sock1, EpochID,
File_name, Offset, Size, undefined) File_name, Offset, Size)
end, end,
verify_file_checksums_common(Sock1, EpochID, File, ReadChunk). verify_file_checksums_common(Sock1, EpochID, File, ReadChunk).
verify_file_checksums_common(Sock1, _EpochID, File, ReadChunk) -> verify_file_checksums_common(Sock1, EpochID, File, ReadChunk) ->
try try
case ?FLU_C:checksum_list(Sock1, File) of case ?FLU_C:checksum_list(Sock1, EpochID, File) of
{ok, InfoBin} -> {ok, InfoBin} ->
Info = machi_csum_table:split_checksum_list_blob_decode(InfoBin), {Info, _} = machi_csum_table:split_checksum_list_blob_decode(InfoBin),
Res = lists:foldl(verify_chunk_checksum(File, ReadChunk), Res = lists:foldl(verify_chunk_checksum(File, ReadChunk),
[], Info), [], Info),
{ok, Res}; {ok, Res};
@ -116,11 +110,9 @@ verify_file_checksums_common(Sock1, _EpochID, File, ReadChunk) ->
end. end.
verify_chunk_checksum(File, ReadChunk) -> verify_chunk_checksum(File, ReadChunk) ->
fun({0, ?MINIMUM_OFFSET, none}, []) -> fun({Offset, Size, <<_Tag:1/binary, CSum/binary>>}, Acc) ->
[];
({Offset, Size, <<_Tag:1/binary, CSum/binary>>}, Acc) ->
case ReadChunk(File, Offset, Size) of case ReadChunk(File, Offset, Size) of
{ok, {[{_, Offset, Chunk, _}], _}} -> {ok, Chunk} ->
CSum2 = machi_util:checksum_chunk(Chunk), CSum2 = machi_util:checksum_chunk(Chunk),
if CSum == CSum2 -> if CSum == CSum2 ->
Acc; Acc;
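
The fold above recomputes each chunk's checksum and accumulates mismatches. The same shape as a standalone sketch, assuming a ReadChunk fun that returns the new {ok, {[{File, Offset, Bin, CSumMeta}], Trimmed}} form:

    verify_one(File, ReadChunk, {Offset, Size, <<_Tag:1/binary, CSum/binary>>}, Acc) ->
        case ReadChunk(File, Offset, Size) of
            {ok, {[{_, Offset, Chunk, _}], _}} ->
                case machi_util:checksum_chunk(Chunk) of
                    CSum -> Acc;                               % checksum matches
                    _    -> [{bad_checksum, File, Offset}|Acc]
                end;
            Err ->
                [{Err, File, Offset, Size}|Acc]
        end.
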

View file

@ -36,8 +36,12 @@
-export([start/2, stop/1]). -export([start/2, stop/1]).
start(_StartType, _StartArgs) -> start(_StartType, _StartArgs) ->
machi_cinfo:register(), case machi_sup:start_link() of
machi_sup:start_link(). {ok, Pid} ->
{ok, Pid};
Error ->
Error
end.
stop(_State) -> stop(_State) ->
ok. ok.

View file

@ -1,6 +1,6 @@
%% ------------------------------------------------------------------- %% -------------------------------------------------------------------
%% %%
%% Copyright (c) 2007-2016 Basho Technologies, Inc. All Rights Reserved. %% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%% %%
%% This file is provided to you under the Apache License, %% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file %% Version 2.0 (the "License"); you may not use this file
@ -43,25 +43,23 @@
%% could add new entries to this ETS table. %% could add new entries to this ETS table.
%% %%
%% Now we can use various integer-centric key generators that are %% Now we can use various integer-centric key generators that are
%% already bundled with basho_bench. NOTE: this scheme does not allow %% already bundled with basho_bench.
%% mixing of 'append' and 'read' operations in the same config. Basho
%% Bench does not support different key generators for different
%% operations, unfortunately. The work-around is to run two different
%% Basho Bench instances: on for 'append' ops with a key generator for
%% the desired prefix(es), and the other for 'read' ops with an
%% integer key generator.
%% %%
%% TODO: The 'read' operator will always read chunks at exactly the %% TODO: Add CRC checking, when feasible and when supported on the
%% byte offset & size as the original append/write ops. If reads are %% server side.
%% desired at any arbitrary offset & size, then a new strategy is %%
%% required. %% TODO: As an alternate idea, if we know that the chunks written are
%% always the same size, and if we don't care about CRC checking, then
%% all we need to know are the file names & file sizes on the server: %% all we need to know are the file names & file sizes on the server:
%% we can then pick any valid offset within that file. That would
%% certainly be more scalable than the zillion-row-ETS-table, which is
%% definitely RAM-hungry.
-module(machi_basho_bench_driver). -module(machi_basho_bench_driver).
-export([new/1, run/4]). -export([new/1, run/4]).
-record(m, { -record(m, {
id,
conn, conn,
max_key max_key
}). }).
@ -83,7 +81,7 @@ new(Id) ->
{read_concurrency, true}]), {read_concurrency, true}]),
ets:insert(ETS, {max_key, 0}), ets:insert(ETS, {max_key, 0}),
ets:insert(ETS, {total_bytes, 0}), ets:insert(ETS, {total_bytes, 0}),
MaxKeys = load_ets_table_maybe(Conn, ETS), MaxKeys = load_ets_table(Conn, ETS),
?INFO("Key preload: finished, ~w keys loaded", [MaxKeys]), ?INFO("Key preload: finished, ~w keys loaded", [MaxKeys]),
Bytes = ets:lookup_element(ETS, total_bytes, 2), Bytes = ets:lookup_element(ETS, total_bytes, 2),
?INFO("Key preload: finished, chunk list specifies ~s MBytes of chunks", ?INFO("Key preload: finished, chunk list specifies ~s MBytes of chunks",
@ -92,14 +90,12 @@ new(Id) ->
true -> true ->
ok ok
end, end,
{ok, #m{id=Id, conn=Conn}}. {ok, #m{conn=Conn}}.
run(append, KeyGen, ValueGen, #m{conn=Conn}=S) -> run(append, KeyGen, ValueGen, #m{conn=Conn}=S) ->
Prefix = KeyGen(), Prefix = KeyGen(),
Value = ValueGen(), Value = ValueGen(),
CSum = machi_util:make_client_csum(Value), case machi_cr_client:append_chunk(Conn, Prefix, Value, ?THE_TIMEOUT) of
AppendOpts = {append_opts,0,undefined,false}, % HACK FIXME
case machi_cr_client:append_chunk(Conn, undefined, Prefix, Value, CSum, AppendOpts, ?THE_TIMEOUT) of
{ok, Pos} -> {ok, Pos} ->
EtsKey = ets:update_counter(?ETS_TAB, max_key, 1), EtsKey = ets:update_counter(?ETS_TAB, max_key, 1),
true = ets:insert(?ETS_TAB, {EtsKey, Pos}), true = ets:insert(?ETS_TAB, {EtsKey, Pos}),
@ -116,26 +112,9 @@ run(read, KeyGen, _ValueGen, #m{conn=Conn, max_key=MaxKey}=S) ->
Idx = KeyGen() rem MaxKey, Idx = KeyGen() rem MaxKey,
%% {File, Offset, Size, _CSum} = ets:lookup_element(?ETS_TAB, Idx, 2), %% {File, Offset, Size, _CSum} = ets:lookup_element(?ETS_TAB, Idx, 2),
{File, Offset, Size} = ets:lookup_element(?ETS_TAB, Idx, 2), {File, Offset, Size} = ets:lookup_element(?ETS_TAB, Idx, 2),
ReadOpts = {read_opts,false,false,false}, % HACK FIXME case machi_cr_client:read_chunk(Conn, File, Offset, Size, ?THE_TIMEOUT) of
case machi_cr_client:read_chunk(Conn, undefined, File, Offset, Size, ReadOpts, ?THE_TIMEOUT) of {ok, _Chunk} ->
{ok, {Chunks, _Trimmed}} ->
%% io:format(user, "Chunks ~P\n", [Chunks, 15]),
%% {ok, S};
case lists:all(fun({File2, Offset2, Chunk, CSum}) ->
{_Tag, CS} = machi_util:unmake_tagged_csum(CSum),
CS2 = machi_util:checksum_chunk(Chunk),
if CS == CS2 ->
true;
CS /= CS2 ->
?ERROR("Client-side checksum error for file ~p offset ~p expected ~p got ~p\n", [File2, Offset2, CS, CS2]),
false
end
end, Chunks) of
true ->
{ok, S}; {ok, S};
false ->
{error, bad_checksum, S}
end;
{error, _}=Err -> {error, _}=Err ->
?ERROR("read file ~p offset ~w size ~w: ~w\n", ?ERROR("read file ~p offset ~w size ~w: ~w\n",
[File, Offset, Size, Err]), [File, Offset, Size, Err]),
@ -153,40 +132,21 @@ find_server_info(_Id) ->
Ps Ps
end. end.
load_ets_table_maybe(Conn, ETS) ->
case basho_bench_config:get(operations, undefined) of
undefined ->
?ERROR("The 'operations' key is missing from the config file, aborting", []),
exit(bad_config);
Ops when is_list(Ops) ->
case lists:keyfind(read, 1, Ops) of
{read,_} ->
load_ets_table(Conn, ETS);
false ->
?INFO("No 'read' op in the 'operations' list ~p, skipping ETS table load.", [Ops]),
0
end
end.
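
Since the driver now exits unless the config file has an 'operations' key, and since (per the NOTE comment near the top of this file) 'append' and 'read' cannot be mixed in one config, a read-only basho_bench config fragment along these lines would trigger the ETS preload above; all values are illustrative.

    %% Illustrative basho_bench config fragment (read-only workload).
    {driver, machi_basho_bench_driver}.
    {operations, [{read, 1}]}.
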
load_ets_table(Conn, ETS) -> load_ets_table(Conn, ETS) ->
{ok, Fs} = machi_cr_client:list_files(Conn), {ok, Fs} = machi_cr_client:list_files(Conn),
[begin [begin
{ok, InfoBin} = machi_cr_client:checksum_list(Conn, File, ?THE_TIMEOUT), {ok, InfoBin} = machi_cr_client:checksum_list(Conn, File),
PosList = machi_csum_table:split_checksum_list_blob_decode(InfoBin), {PosList, _} = machi_flu1:split_checksum_list_blob_decode(InfoBin),
?INFO("File ~s len PosList ~p\n", [File, length(PosList)]),
StartKey = ets:update_counter(ETS, max_key, 0), StartKey = ets:update_counter(ETS, max_key, 0),
{_, C, Bytes} = lists:foldl(fun({_Off,0,_CSum}, {_K, _C, _Bs}=Acc) -> %% _EndKey = lists:foldl(fun({Off,Sz,CSum}, K) ->
Acc; %% V = {File, Off, Sz, CSum},
({0,_Sz,_CSum}, {_K, _C, _Bs}=Acc) -> {_, Bytes} = lists:foldl(fun({Off,Sz,_CSum}, {K, Bs}) ->
Acc;
({Off,Sz,_CSum}, {K, C, Bs}) ->
V = {File, Off, Sz}, V = {File, Off, Sz},
ets:insert(ETS, {K, V}), ets:insert(ETS, {K, V}),
{K + 1, C + 1, Bs + Sz} {K + 1, Bs + Sz}
end, {StartKey, 0, 0}, PosList), end, {StartKey, 0}, PosList),
_ = ets:update_counter(ETS, max_key, C), ets:update_counter(ETS, max_key, length(PosList)),
_ = ets:update_counter(ETS, total_bytes, Bytes), ets:update_counter(ETS, total_bytes, Bytes)
ok
end || {_Size, File} <- Fs], end || {_Size, File} <- Fs],
ets:update_counter(?ETS_TAB, max_key, 0). ets:update_counter(?ETS_TAB, max_key, 0).
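
The fold above now skips entries with size 0 and entries at offset 0, which presumably mark trimmed ranges and file headers rather than readable user chunks; an equivalent filter, as a sketch:

    %% Sketch: keep only entries that describe readable user chunks.
    Readable = [T || {Off, Sz, _CSum} = T <- PosList, Off =/= 0, Sz =/= 0],
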

View file

@ -92,11 +92,8 @@
-define(REPAIR_START_STABILITY_TIME, 10). -define(REPAIR_START_STABILITY_TIME, 10).
-endif. % TEST -endif. % TEST
%% Maximum length of the history of adopted projections (via C120). %% Magic constant for looping "too frequently" breaker. TODO revisit & revise.
-define(MAX_HISTORY_LENGTH, 8). -define(TOO_FREQUENT_BREAKER, 10).
%% Magic constant for looping "too frequently" breaker.
-define(TOO_FREQUENT_BREAKER, (?MAX_HISTORY_LENGTH+5)).
-define(RETURN2(X), begin (catch put(why2, [?LINE|get(why2)])), X end). -define(RETURN2(X), begin (catch put(why2, [?LINE|get(why2)])), X end).
@ -106,19 +103,20 @@
%% Amount of epoch number skip-ahead for set_chain_members call %% Amount of epoch number skip-ahead for set_chain_members call
-define(SET_CHAIN_MEMBERS_EPOCH_SKIP, 1111). -define(SET_CHAIN_MEMBERS_EPOCH_SKIP, 1111).
%% Maximum length of the history of adopted projections (via C120).
-define(MAX_HISTORY_LENGTH, 30).
%% API %% API
-export([start_link/2, start_link/3, stop/1, ping/1, -export([start_link/2, start_link/3, stop/1, ping/1,
set_chain_members/2, set_chain_members/6, set_active/2, set_chain_members/2, set_chain_members/3, set_active/2,
trigger_react_to_env/1]). trigger_react_to_env/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
terminate/2, format_status/2, code_change/3]). terminate/2, code_change/3]).
-export([make_chmgr_regname/1, projection_transitions_are_sane/2, -export([make_chmgr_regname/1, projection_transitions_are_sane/2,
simple_chain_state_transition_is_sane/3, simple_chain_state_transition_is_sane/3,
simple_chain_state_transition_is_sane/5, simple_chain_state_transition_is_sane/5,
chain_state_transition_is_sane/6]). chain_state_transition_is_sane/6]).
-export([perhaps_call/5, % for partition simulator use w/machi_fitness
init_remember_down_list/0]).
%% Exports so that EDoc docs generated for these internal funcs. %% Exports so that EDoc docs generated for these internal funcs.
-export([mk/3]). -export([mk/3]).
@ -131,7 +129,8 @@
-export([test_calc_projection/2, -export([test_calc_projection/2,
test_write_public_projection/2, test_write_public_projection/2,
test_read_latest_public_projection/2]). test_read_latest_public_projection/2]).
-export([update_remember_down_list/1, -export([perhaps_call/5, % for partition simulator use w/machi_fitness
init_remember_down_list/0, update_remember_down_list/1,
get_remember_down_list/0]). get_remember_down_list/0]).
-ifdef(EQC). -ifdef(EQC).
@ -168,22 +167,13 @@ ping(Pid) ->
%% with lowest rank, i.e. name z* first, name a* last. %% with lowest rank, i.e. name z* first, name a* last.
set_chain_members(Pid, MembersDict) -> set_chain_members(Pid, MembersDict) ->
set_chain_members(Pid, ch0_name, 0, ap_mode, MembersDict, []). set_chain_members(Pid, MembersDict, []).
set_chain_members(Pid, ChainName, OldEpoch, CMode, MembersDict, Witness_list) set_chain_members(Pid, MembersDict, Witness_list) ->
when is_atom(ChainName) andalso case lists:all(fun(Witness) -> orddict:is_key(Witness, MembersDict) end,
is_integer(OldEpoch) andalso OldEpoch >= 0 andalso
(CMode == ap_mode orelse CMode == cp_mode) andalso
is_list(MembersDict) andalso
is_list(Witness_list) ->
case lists:all(fun({X, #p_srvr{name=X}}) -> true;
(_) -> false
end, MembersDict)
andalso
lists:all(fun(Witness) -> orddict:is_key(Witness, MembersDict) end,
Witness_list) of Witness_list) of
true -> true ->
Cmd = {set_chain_members, ChainName, OldEpoch, CMode, MembersDict, Witness_list}, Cmd = {set_chain_members, MembersDict, Witness_list},
gen_server:call(Pid, Cmd, infinity); gen_server:call(Pid, Cmd, infinity);
false -> false ->
{error, bad_arg} {error, bad_arg}
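
A usage sketch for the widened API above; the chain name and consistency mode are illustrative, and MembersDict must be an orddict of {Name, #p_srvr{name=Name}} entries, as the new guard requires.

    set_members_example(Pid, MembersDict) ->
        machi_chain_manager1:set_chain_members(Pid, ch_demo, 0, ap_mode,
                                               MembersDict, []).
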
@ -234,13 +224,11 @@ test_read_latest_public_projection(Pid, ReadRepairP) ->
%% manager's pid in MgrOpts and use direct gen_server calls to the %% manager's pid in MgrOpts and use direct gen_server calls to the
%% local projection store. %% local projection store.
init({MyName, InitMembersDict, MgrOpts0}) -> init({MyName, InitMembersDict, MgrOpts}) ->
put(ttt, [?LINE]), put(ttt, [?LINE]),
_ = random:seed(now()), random:seed(now()),
init_remember_down_list(), init_remember_down_list(),
MgrOpts = MgrOpts0 ++ application:get_env(machi, chain_manager_opts, []),
Opt = fun(Key, Default) -> proplists:get_value(Key, MgrOpts, Default) end, Opt = fun(Key, Default) -> proplists:get_value(Key, MgrOpts, Default) end,
InitWitness_list = Opt(witnesses, []), InitWitness_list = Opt(witnesses, []),
ZeroAll_list = [P#p_srvr.name || {_,P} <- orddict:to_list(InitMembersDict)], ZeroAll_list = [P#p_srvr.name || {_,P} <- orddict:to_list(InitMembersDict)],
ZeroProj = make_none_projection(0, MyName, ZeroAll_list, ZeroProj = make_none_projection(0, MyName, ZeroAll_list,
@ -292,7 +280,7 @@ init({MyName, InitMembersDict, MgrOpts0}) ->
last_down=[no_such_server_initial_value_only], last_down=[no_such_server_initial_value_only],
fitness_svr=machi_flu_psup:make_fitness_regname(MyName) fitness_svr=machi_flu_psup:make_fitness_regname(MyName)
}, Proj), }, Proj),
S2 = do_set_chain_members_dict(MembersDict, S), {_, S2} = do_set_chain_members_dict(MembersDict, S),
S3 = if ActiveP == false -> S3 = if ActiveP == false ->
S2; S2;
ActiveP == true -> ActiveP == true ->
@ -302,17 +290,12 @@ init({MyName, InitMembersDict, MgrOpts0}) ->
handle_call({ping}, _From, S) -> handle_call({ping}, _From, S) ->
{reply, pong, S}; {reply, pong, S};
handle_call({set_chain_members, SetChainName, SetOldEpoch, CMode, handle_call({set_chain_members, MembersDict, Witness_list}, _From,
MembersDict, Witness_list}, _From,
#ch_mgr{name=MyName, #ch_mgr{name=MyName,
proj=#projection_v1{all_members=OldAll_list, proj=#projection_v1{all_members=OldAll_list,
epoch_number=OldEpoch, epoch_number=OldEpoch,
chain_name=ChainName,
upi=OldUPI}=OldProj}=S) -> upi=OldUPI}=OldProj}=S) ->
true = (OldEpoch == 0) % in this case we want unconditional set of ch name {Reply, S2} = do_set_chain_members_dict(MembersDict, S),
orelse
(SetOldEpoch == OldEpoch andalso SetChainName == ChainName),
S2 = do_set_chain_members_dict(MembersDict, S),
%% TODO: should there be any additional sanity checks? Right now, %% TODO: should there be any additional sanity checks? Right now,
%% if someone does something bad, then do_react_to_env() will %% if someone does something bad, then do_react_to_env() will
%% crash, which will crash us, and we'll restart in a sane & old %% crash, which will crash us, and we'll restart in a sane & old
@ -326,10 +309,10 @@ handle_call({set_chain_members, SetChainName, SetOldEpoch, CMode,
{NUPI, All_list -- NUPI} {NUPI, All_list -- NUPI}
end, end,
NewEpoch = OldEpoch + ?SET_CHAIN_MEMBERS_EPOCH_SKIP, NewEpoch = OldEpoch + ?SET_CHAIN_MEMBERS_EPOCH_SKIP,
CMode = calc_consistency_mode(Witness_list),
ok = set_consistency_mode(machi_flu_psup:make_proj_supname(MyName), CMode), ok = set_consistency_mode(machi_flu_psup:make_proj_supname(MyName), CMode),
NewProj = machi_projection:update_checksum( NewProj = machi_projection:update_checksum(
OldProj#projection_v1{author_server=MyName, OldProj#projection_v1{author_server=MyName,
chain_name=SetChainName,
creation_time=now(), creation_time=now(),
mode=CMode, mode=CMode,
epoch_number=NewEpoch, epoch_number=NewEpoch,
@ -341,11 +324,7 @@ handle_call({set_chain_members, SetChainName, SetOldEpoch, CMode,
members_dict=MembersDict}), members_dict=MembersDict}),
S3 = set_proj(S2#ch_mgr{proj_history=queue:new(), S3 = set_proj(S2#ch_mgr{proj_history=queue:new(),
consistency_mode=CMode}, NewProj), consistency_mode=CMode}, NewProj),
{Res, S4} = do_react_to_env(S3), {_QQ, S4} = do_react_to_env(S3),
Reply = case Res of
{_,_,_} -> ok
% Dialyzer says that all vals match with the 3-tuple pattern
end,
{reply, Reply, S4}; {reply, Reply, S4};
handle_call({set_active, Boolean}, _From, #ch_mgr{timer=TRef}=S) -> handle_call({set_active, Boolean}, _From, #ch_mgr{timer=TRef}=S) ->
case {Boolean, TRef} of case {Boolean, TRef} of
@ -377,8 +356,8 @@ handle_call({test_read_latest_public_projection, ReadRepairP}, _From, S) ->
{reply, Res, S2}; {reply, Res, S2};
handle_call({trigger_react_to_env}=Call, _From, S) -> handle_call({trigger_react_to_env}=Call, _From, S) ->
gobble_calls(Call), gobble_calls(Call),
{Res, S2} = do_react_to_env(S), {TODOtodo, S2} = do_react_to_env(S),
{reply, Res, S2}; {reply, TODOtodo, S2};
handle_call(_Call, _From, S) -> handle_call(_Call, _From, S) ->
io:format(user, "\nBad call to ~p: ~p\n", [S#ch_mgr.name, _Call]), io:format(user, "\nBad call to ~p: ~p\n", [S#ch_mgr.name, _Call]),
{reply, whaaaaaaaaaa, S}. {reply, whaaaaaaaaaa, S}.
@ -390,7 +369,6 @@ handle_cast(_Cast, S) ->
handle_info(tick_check_environment, #ch_mgr{ignore_timer=true}=S) -> handle_info(tick_check_environment, #ch_mgr{ignore_timer=true}=S) ->
{noreply, S}; {noreply, S};
handle_info(tick_check_environment, S) -> handle_info(tick_check_environment, S) ->
gobble_ticks(),
{{_Delta, Props, _Epoch}, S1} = do_react_to_env(S), {{_Delta, Props, _Epoch}, S1} = do_react_to_env(S),
S2 = sanitize_repair_state(S1), S2 = sanitize_repair_state(S1),
S3 = perhaps_start_repair(S2), S3 = perhaps_start_repair(S2),
@ -422,11 +400,6 @@ handle_info(Msg, #ch_mgr{name=MyName}=S) ->
terminate(_Reason, _S) -> terminate(_Reason, _S) ->
ok. ok.
format_status(_Opt, [_PDict, Status]) ->
Fields = record_info(fields, ch_mgr),
[_Name | Values] = tuple_to_list(Status),
lists:zip(Fields, Values).
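
With format_status/2 defined as above, OTP's sys tooling reports the #ch_mgr{} record as a labeled proplist instead of a bare tuple. A sketch of how that surfaces, assuming the manager is locally registered:

    inspect_ch_mgr(RegName) ->
        {status, _Pid, {module, _Mod}, SItems} = sys:get_status(RegName),
        SItems.   %% the ch_mgr state appears as [{Field, Value}, ...] within
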
code_change(_OldVsn, S, _Extra) -> code_change(_OldVsn, S, _Extra) ->
{ok, S}. {ok, S}.
@ -463,7 +436,7 @@ get_my_proj_boot_info(MgrOpts, DefaultDict, DefaultProj, ProjType) ->
{DefaultDict, DefaultProj}; {DefaultDict, DefaultProj};
Store -> Store ->
{ok, P} = machi_projection_store:read_latest_projection(Store, {ok, P} = machi_projection_store:read_latest_projection(Store,
ProjType, 7789), ProjType),
{P#projection_v1.members_dict, P} {P#projection_v1.members_dict, P}
end. end.
@ -556,7 +529,6 @@ cl_write_public_proj2(FLUs, Partitions, Epoch, Proj, IgnoreWrittenErrorP, S) ->
end end
end, {true, []}, FLUs), end, {true, []}, FLUs),
%% io:format(user, "\nWrite public ~w by ~w: ~w\n", [Epoch, S#ch_mgr.name, Rs]), %% io:format(user, "\nWrite public ~w by ~w: ~w\n", [Epoch, S#ch_mgr.name, Rs]),
%% io:format(user, "mgr ~w epoch ~w Rs ~p\n", [S#ch_mgr.name, Epoch, Rs]),
{{remote_write_results, Rs}, S}. {{remote_write_results, Rs}, S}.
do_cl_read_latest_public_projection(ReadRepairP, do_cl_read_latest_public_projection(ReadRepairP,
@ -578,41 +550,12 @@ do_cl_read_latest_public_projection(ReadRepairP,
read_latest_projection_call_only(ProjectionType, AllHosed, read_latest_projection_call_only(ProjectionType, AllHosed,
#ch_mgr{proj=CurrentProj}=S) -> #ch_mgr{proj=CurrentProj}=S) ->
#projection_v1{all_members=All_list} = CurrentProj, #projection_v1{all_members=All_list} = CurrentProj,
All_queried_list = lists:sort(All_list -- AllHosed), All_queried_list = All_list -- AllHosed,
read_latest_projection_call_only1(ProjectionType, AllHosed,
All_queried_list, S).
read_latest_projection_call_only1(ProjectionType, AllHosed, {Rs, S2} = read_latest_projection_call_only2(ProjectionType,
All_queried_list, S) ->
{Rs_tmp, S2} = read_latest_projection_call_only2(ProjectionType,
All_queried_list, S), All_queried_list, S),
New_all_maybe = FLUsRs = lists:zip(All_queried_list, Rs),
lists:usort( {All_queried_list, FLUsRs, S2}.
lists:flatten(
[A_l || #projection_v1{all_members=A_l} <- Rs_tmp])) -- AllHosed,
case New_all_maybe -- All_queried_list of
[] ->
FLUsRs = lists:zip(All_queried_list, Rs_tmp),
{All_queried_list, FLUsRs, S2};
[AnotherFLU|_] ->
%% Stop AnotherFLU proxy, in unexpected case where it's open
try
Proxy = proxy_pid(AnotherFLU, S2),
?FLU_PC:stop_proxies([Proxy])
catch _:_ -> ok
end,
MD = orddict:from_list(
lists:usort(
lists:flatten(
[orddict:to_list(D) || #projection_v1{members_dict=D} <- Rs_tmp]))),
Another_P_srvr = orddict:fetch(AnotherFLU, MD),
{ok, Proxy2} = ?FLU_PC:start_link(Another_P_srvr),
S3 = S2#ch_mgr{proxies_dict=orddict:store(AnotherFLU, Proxy2,
S2#ch_mgr.proxies_dict)},
read_latest_projection_call_only1(
ProjectionType, AllHosed,
lists:usort([AnotherFLU|All_queried_list]), S3)
end.
read_latest_projection_call_only2(ProjectionType, All_queried_list, S) -> read_latest_projection_call_only2(ProjectionType, All_queried_list, S) ->
{_UpNodes, Partitions, S2} = calc_up_nodes(S), {_UpNodes, Partitions, S2} = calc_up_nodes(S),
@ -652,8 +595,6 @@ rank_and_sort_projections_with_extra(All_queried_list, FLUsRs, ProjectionType,
Witness_list = CurrentProj#projection_v1.witnesses, Witness_list = CurrentProj#projection_v1.witnesses,
NoneProj = make_none_projection(0, MyName, [], Witness_list, NoneProj = make_none_projection(0, MyName, [], Witness_list,
orddict:new()), orddict:new()),
ChainName = CurrentProj#projection_v1.chain_name,
NoneProj2 = NoneProj#projection_v1{chain_name=ChainName},
Extra2 = [{all_members_replied, true}, Extra2 = [{all_members_replied, true},
{all_queried_list, All_queried_list}, {all_queried_list, All_queried_list},
{flus_rs, FLUsRs}, {flus_rs, FLUsRs},
@ -662,7 +603,7 @@ rank_and_sort_projections_with_extra(All_queried_list, FLUsRs, ProjectionType,
{bad_answer_flus, BadAnswerFLUs}, {bad_answer_flus, BadAnswerFLUs},
{bad_answers, BadAnswers}, {bad_answers, BadAnswers},
{not_unanimous_answers, []}], {not_unanimous_answers, []}],
{not_unanimous, NoneProj2, Extra2, S}; {not_unanimous, NoneProj, Extra2, S};
ProjectionType == public, UnwrittenRs /= [] -> ProjectionType == public, UnwrittenRs /= [] ->
{needs_repair, FLUsRs, [flarfus], S}; {needs_repair, FLUsRs, [flarfus], S};
true -> true ->
@ -776,14 +717,13 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg,
runenv=RunEnv1, runenv=RunEnv1,
repair_final_status=RepairFS}=S) -> repair_final_status=RepairFS}=S) ->
#projection_v1{epoch_number=OldEpochNum, #projection_v1{epoch_number=OldEpochNum,
chain_name=ChainName,
members_dict=MembersDict, members_dict=MembersDict,
witnesses=OldWitness_list, witnesses=OldWitness_list,
upi=OldUPI_list, upi=OldUPI_list,
repairing=OldRepairing_list repairing=OldRepairing_list
} = LastProj, } = LastProj,
LastUp = lists:usort(OldUPI_list ++ OldRepairing_list), LastUp = lists:usort(OldUPI_list ++ OldRepairing_list),
AllMembers = CurrentProj#projection_v1.all_members, AllMembers = (S#ch_mgr.proj)#projection_v1.all_members,
{Up0, Partitions, RunEnv2} = calc_up_nodes(MyName, {Up0, Partitions, RunEnv2} = calc_up_nodes(MyName,
AllMembers, RunEnv1), AllMembers, RunEnv1),
Up = Up0 -- AllHosed, Up = Up0 -- AllHosed,
@ -840,10 +780,7 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg,
D_foo=[{repair_done, {repair_final_status, ok, (S#ch_mgr.proj)#projection_v1.epoch_number}}], D_foo=[{repair_done, {repair_final_status, ok, (S#ch_mgr.proj)#projection_v1.epoch_number}}],
{NewUPI_list ++ Repairing_list2, [], RunEnv2}; {NewUPI_list ++ Repairing_list2, [], RunEnv2};
true -> true ->
D_foo=[d_foo2, {sim_p,Simulator_p}, D_foo=[d_foo2],
{simr_p,SimRepair_p}, {same_epoch,SameEpoch_p},
{rel_to,RelativeToServer},
{repch,RepChk_LastInUPI}, {repair_fs,RepairFS}],
{NewUPI_list, OldRepairing_list, RunEnv2} {NewUPI_list, OldRepairing_list, RunEnv2}
end; end;
{_ABC, _XYZ} -> {_ABC, _XYZ} ->
@ -878,11 +815,10 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg,
end, end,
?REACT({calc,?LINE,[{new_upi, NewUPI},{new_rep, NewRepairing}]}), ?REACT({calc,?LINE,[{new_upi, NewUPI},{new_rep, NewRepairing}]}),
P0 = machi_projection:new(OldEpochNum + 1, P = machi_projection:new(OldEpochNum + 1,
MyName, MembersDict, Down, NewUPI, NewRepairing, MyName, MembersDict, Down, NewUPI, NewRepairing,
D_foo ++ D_foo ++
Dbg ++ [{ps, Partitions},{nodes_up, Up}]), Dbg ++ [{ps, Partitions},{nodes_up, Up}]),
P1 = P0#projection_v1{chain_name=ChainName},
P2 = if CMode == cp_mode -> P2 = if CMode == cp_mode ->
UpWitnesses = [W || W <- Up, lists:member(W, OldWitness_list)], UpWitnesses = [W || W <- Up, lists:member(W, OldWitness_list)],
Majority = full_majority_size(AllMembers), Majority = full_majority_size(AllMembers),
@ -891,7 +827,7 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg,
SoFar = length(NewUPI ++ NewRepairing), SoFar = length(NewUPI ++ NewRepairing),
if SoFar >= Majority -> if SoFar >= Majority ->
?REACT({calc,?LINE,[]}), ?REACT({calc,?LINE,[]}),
P1; P;
true -> true ->
Need = Majority - SoFar, Need = Majority - SoFar,
UpWitnesses = [W || W <- Up, UpWitnesses = [W || W <- Up,
@ -900,7 +836,7 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg,
Ws = lists:sublist(UpWitnesses, Need), Ws = lists:sublist(UpWitnesses, Need),
?REACT({calc,?LINE,[{ws, Ws}]}), ?REACT({calc,?LINE,[{ws, Ws}]}),
machi_projection:update_checksum( machi_projection:update_checksum(
P1#projection_v1{upi=Ws++NewUPI}); P#projection_v1{upi=Ws++NewUPI});
true -> true ->
?REACT({calc,?LINE,[]}), ?REACT({calc,?LINE,[]}),
P_none0 = make_none_projection( P_none0 = make_none_projection(
@ -913,7 +849,6 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg,
"Not enough witnesses are available now" "Not enough witnesses are available now"
end, end,
P_none1 = P_none0#projection_v1{ P_none1 = P_none0#projection_v1{
chain_name=ChainName,
%% Stable creation time! %% Stable creation time!
creation_time={1,2,3}, creation_time={1,2,3},
dbg=[{none_projection,true}, dbg=[{none_projection,true},
@ -934,7 +869,7 @@ calc_projection2(LastProj, RelativeToServer, AllHosed, Dbg,
end; end;
CMode == ap_mode -> CMode == ap_mode ->
?REACT({calc,?LINE,[]}), ?REACT({calc,?LINE,[]}),
P1 P
end, end,
P3 = machi_projection:update_checksum( P3 = machi_projection:update_checksum(
P2#projection_v1{mode=CMode, witnesses=OldWitness_list}), P2#projection_v1{mode=CMode, witnesses=OldWitness_list}),
@ -1086,33 +1021,31 @@ rank_projection(#projection_v1{author_server=_Author,
do_set_chain_members_dict(MembersDict, #ch_mgr{proxies_dict=OldProxiesDict}=S)-> do_set_chain_members_dict(MembersDict, #ch_mgr{proxies_dict=OldProxiesDict}=S)->
_ = ?FLU_PC:stop_proxies(OldProxiesDict), _ = ?FLU_PC:stop_proxies(OldProxiesDict),
ProxiesDict = ?FLU_PC:start_proxies(MembersDict), ProxiesDict = ?FLU_PC:start_proxies(MembersDict),
S#ch_mgr{members_dict=MembersDict, {ok, S#ch_mgr{members_dict=MembersDict,
proxies_dict=ProxiesDict}. proxies_dict=ProxiesDict}}.
do_react_to_env(#ch_mgr{name=MyName, do_react_to_env(#ch_mgr{name=MyName,
proj=#projection_v1{epoch_number=Epoch, proj=#projection_v1{epoch_number=Epoch,
members_dict=[]=OldDict}=OldProj, members_dict=[]=OldDict}=OldProj,
opts=Opts}=S) -> opts=Opts}=S) ->
put(ttt, [?LINE]),
%% Read from our local *public* projection store. If some other %% Read from our local *public* projection store. If some other
%% chain member has written something there, and if we are a %% chain member has written something there, and if we are a
%% member of that chain, then we'll adopt that projection and then %% member of that chain, then we'll adopt that projection and then
%% start actively humming in that chain. %% start actively humming in that chain.
{NewMD, NewProj} = {NewMembersDict, NewProj} =
get_my_public_proj_boot_info(Opts, OldDict, OldProj), get_my_public_proj_boot_info(Opts, OldDict, OldProj),
case orddict:is_key(MyName, NewMD) of case orddict:is_key(MyName, NewMembersDict) of
false -> false ->
{{empty_members_dict1, [], Epoch}, S}; {{empty_members_dict, [], Epoch}, S};
true -> true ->
CMode = NewProj#projection_v1.mode, {_, S2} = do_set_chain_members_dict(NewMembersDict, S),
S2 = do_set_chain_members_dict(NewMD, S), CMode = calc_consistency_mode(NewProj#projection_v1.witnesses),
{Reply, S3} = react_to_env_C110(NewProj, {{empty_members_dict, [], Epoch},
S2#ch_mgr{members_dict=NewMD, set_proj(S2#ch_mgr{members_dict=NewMembersDict,
consistency_mode=CMode}), consistency_mode=CMode}, NewProj)}
{Reply, S3}
end; end;
do_react_to_env(S) -> do_react_to_env(S) ->
put(ttt, [?LINE]), put(ttt, [?LINE]),
%% The not_sanes manager counting dictionary is not strictly %% The not_sanes manager counting dictionary is not strictly
%% limited to flapping scenarios. (Though the mechanism first %% limited to flapping scenarios. (Though the mechanism first
%% started as a way to deal with rare flapping scenarios.) %% started as a way to deal with rare flapping scenarios.)
@ -1211,7 +1144,7 @@ react_to_env_A10(S) ->
?REACT(a10), ?REACT(a10),
react_to_env_A20(0, poll_private_proj_is_upi_unanimous(S)). react_to_env_A20(0, poll_private_proj_is_upi_unanimous(S)).
react_to_env_A20(Retries, #ch_mgr{name=MyName, proj=P_current}=S) -> react_to_env_A20(Retries, #ch_mgr{name=MyName}=S) ->
?REACT(a20), ?REACT(a20),
init_remember_down_list(), init_remember_down_list(),
{UnanimousTag, P_latest, ReadExtra, S2} = {UnanimousTag, P_latest, ReadExtra, S2} =
@ -1239,34 +1172,17 @@ react_to_env_A20(Retries, #ch_mgr{name=MyName, proj=P_current}=S) ->
false when P_latest#projection_v1.epoch_number /= LastComplaint, false when P_latest#projection_v1.epoch_number /= LastComplaint,
P_latest#projection_v1.all_members /= [] -> P_latest#projection_v1.all_members /= [] ->
put(rogue_server_epoch, P_latest#projection_v1.epoch_number), put(rogue_server_epoch, P_latest#projection_v1.epoch_number),
error_logger:info_msg("Chain manager ~w found latest public " error_logger:info_msg("Chain manager ~p found latest public "
"projection ~w with author ~w has a " "projection ~p has author ~p has a "
"members list ~w that does not include me. " "members list ~p that does not include me.\n",
"We assume this is a result of administrator "
"action and will thus wedge ourselves until "
"we are re-added to the chain or shutdown.\n",
[S#ch_mgr.name, [S#ch_mgr.name,
P_latest#projection_v1.epoch_number, P_latest#projection_v1.epoch_number,
P_latest#projection_v1.author_server, P_latest#projection_v1.author_server,
P_latest#projection_v1.all_members]), P_latest#projection_v1.all_members]);
EpochID = machi_projection:make_epoch_id(P_current),
ProjStore = get_projection_store_pid_or_regname(S),
{ok, NotifyPid} = machi_projection_store:get_wedge_notify_pid(ProjStore),
_QQ = machi_flu1:update_wedge_state(NotifyPid, true, EpochID),
#projection_v1{epoch_number=Epoch,
chain_name=ChainName,
all_members=All_list,
witnesses=Witness_list,
members_dict=MembersDict} = P_current,
P_none0 = make_none_projection(Epoch,
MyName, All_list, Witness_list, MembersDict),
P_none = P_none0#projection_v1{chain_name=ChainName},
{{now_using,[],Epoch}, set_proj(S2, P_none)};
_ -> _ ->
react_to_env_A21(Retries, UnanimousTag, P_latest, ReadExtra, S2) ok
end. end,
react_to_env_A21(Retries, UnanimousTag, P_latest, ReadExtra, S) ->
%% The UnanimousTag isn't quite sufficient for our needs. We need %% The UnanimousTag isn't quite sufficient for our needs. We need
%% to determine if *all* of the UPI+Repairing FLUs are members of %% to determine if *all* of the UPI+Repairing FLUs are members of
%% the unanimous server replies. All Repairing FLUs should be up %% the unanimous server replies. All Repairing FLUs should be up
@ -1311,7 +1227,7 @@ react_to_env_A21(Retries, UnanimousTag, P_latest, ReadExtra, S) ->
true -> true ->
exit({badbad, UnanimousTag}) exit({badbad, UnanimousTag})
end, end,
react_to_env_A29(Retries, P_latest, LatestUnanimousP, ReadExtra, S). react_to_env_A29(Retries, P_latest, LatestUnanimousP, ReadExtra, S2).
react_to_env_A29(Retries, P_latest, LatestUnanimousP, _ReadExtra, react_to_env_A29(Retries, P_latest, LatestUnanimousP, _ReadExtra,
#ch_mgr{consistency_mode=CMode, #ch_mgr{consistency_mode=CMode,
@ -1345,6 +1261,7 @@ react_to_env_A29(Retries, P_latest, LatestUnanimousP, _ReadExtra,
?REACT({a29, ?LINE, ?REACT({a29, ?LINE,
[{zerf_backstop, true}, [{zerf_backstop, true},
{zerf_in, machi_projection:make_summary(Zerf)}]}), {zerf_in, machi_projection:make_summary(Zerf)}]}),
%% io:format(user, "zerf_in: A29: ~p: ~w\n\t~p\n", [MyName, machi_projection:make_summary(Zerf), get(yyy_hack)]),
#projection_v1{dbg=ZerfDbg} = Zerf, #projection_v1{dbg=ZerfDbg} = Zerf,
Backstop = if Zerf#projection_v1.upi == [] -> Backstop = if Zerf#projection_v1.upi == [] ->
[]; [];
@ -1364,8 +1281,7 @@ react_to_env_A29(Retries, P_latest, LatestUnanimousP, _ReadExtra,
end. end.
react_to_env_A30(Retries, P_latest, LatestUnanimousP, P_current_calc, react_to_env_A30(Retries, P_latest, LatestUnanimousP, P_current_calc,
#ch_mgr{name=MyName, proj=P_current, #ch_mgr{name=MyName, consistency_mode=CMode} = S) ->
consistency_mode=CMode} = S) ->
V = case file:read_file("/tmp/moomoo."++atom_to_list(S#ch_mgr.name)) of {ok,_} -> true; _ -> false end, V = case file:read_file("/tmp/moomoo."++atom_to_list(S#ch_mgr.name)) of {ok,_} -> true; _ -> false end,
if V -> io:format(user, "A30: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end, if V -> io:format(user, "A30: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end,
?REACT(a30), ?REACT(a30),
@ -1385,17 +1301,15 @@ react_to_env_A30(Retries, P_latest, LatestUnanimousP, P_current_calc,
P = #projection_v1{down=Down} = P = #projection_v1{down=Down} =
make_none_projection(Epoch + 1, MyName, All_list, make_none_projection(Epoch + 1, MyName, All_list,
Witness_list, MembersDict), Witness_list, MembersDict),
ChainName = P_current#projection_v1.chain_name,
P1 = P#projection_v1{chain_name=ChainName},
P_newprop = if CMode == ap_mode -> P_newprop = if CMode == ap_mode ->
%% Not really none proj: just myself, AP style %% Not really none proj: just myself, AP style
machi_projection:update_checksum( machi_projection:update_checksum(
P1#projection_v1{upi=[MyName], P#projection_v1{upi=[MyName],
down=Down -- [MyName], down=Down -- [MyName],
dbg=[{hosed_list,AllHosed}]}); dbg=[{hosed_list,AllHosed}]});
CMode == cp_mode -> CMode == cp_mode ->
machi_projection:update_checksum( machi_projection:update_checksum(
P1#projection_v1{dbg=[{hosed_list,AllHosed}]}) P#projection_v1{dbg=[{hosed_list,AllHosed}]})
end, end,
react_to_env_A40(Retries, P_newprop, P_latest, LatestUnanimousP, react_to_env_A40(Retries, P_newprop, P_latest, LatestUnanimousP,
P_current_calc, true, S); P_current_calc, true, S);
@ -1468,22 +1382,13 @@ react_to_env_A40(Retries, P_newprop, P_latest, LatestUnanimousP,
%% we have a disagreement. %% we have a disagreement.
not ordsets:is_disjoint(P_latest_s, Down_s) not ordsets:is_disjoint(P_latest_s, Down_s)
end, end,
AmExcludedFromLatestAll_p =
P_latest#projection_v1.epoch_number /= 0
andalso
(not lists:member(MyName, P_latest#projection_v1.all_members)),
?REACT({a40, ?LINE, ?REACT({a40, ?LINE,
[{latest_author, P_latest#projection_v1.author_server}, [{latest_author, P_latest#projection_v1.author_server},
{am_excluded_from_latest_all_p, AmExcludedFromLatestAll_p},
{author_is_down_p, LatestAuthorDownP}, {author_is_down_p, LatestAuthorDownP},
{rank_latest, Rank_latest}, {rank_latest, Rank_latest},
{rank_newprop, Rank_newprop}]}), {rank_newprop, Rank_newprop}]}),
if if
AmExcludedFromLatestAll_p ->
?REACT({a40, ?LINE, [{latest,machi_projection:make_summary(P_latest)}]}),
react_to_env_A50(P_latest, [], S);
AmHosedP -> AmHosedP ->
ExpectedUPI = if CMode == cp_mode -> []; ExpectedUPI = if CMode == cp_mode -> [];
CMode == ap_mode -> [MyName] CMode == ap_mode -> [MyName]
@ -1649,10 +1554,12 @@ react_to_env_A40(Retries, P_newprop, P_latest, LatestUnanimousP,
end, end,
if GoTo50_p -> if GoTo50_p ->
?REACT({a40, ?LINE, []}), ?REACT({a40, ?LINE, []}),
%% io:format(user, "CONFIRM debug question line ~w\n", [?LINE]),
FinalProps = [{throttle_seconds, 0}], FinalProps = [{throttle_seconds, 0}],
react_to_env_A50(P_latest, FinalProps, S); react_to_env_A50(P_latest, FinalProps, S);
true -> true ->
?REACT({a40, ?LINE, []}), ?REACT({a40, ?LINE, []}),
io:format(user, "CONFIRM debug question line ~w\n", [?LINE]),
react_to_env_C300(P_newprop, P_latest, S) react_to_env_C300(P_newprop, P_latest, S)
end end
end. end.
@ -1662,6 +1569,7 @@ react_to_env_A50(P_latest, FinalProps, #ch_mgr{proj=P_current}=S) ->
?REACT({a50, ?LINE, [{current_epoch, P_current#projection_v1.epoch_number}, ?REACT({a50, ?LINE, [{current_epoch, P_current#projection_v1.epoch_number},
{latest_epoch, P_latest#projection_v1.epoch_number}, {latest_epoch, P_latest#projection_v1.epoch_number},
{final_props, FinalProps}]}), {final_props, FinalProps}]}),
%% if S#ch_mgr.name == c -> io:format(user, "A50: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end,
V = case file:read_file("/tmp/moomoo."++atom_to_list(S#ch_mgr.name)) of {ok,_} -> true; _ -> false end, V = case file:read_file("/tmp/moomoo."++atom_to_list(S#ch_mgr.name)) of {ok,_} -> true; _ -> false end,
if V -> io:format(user, "A50: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end, if V -> io:format(user, "A50: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end,
{{no_change, FinalProps, P_current#projection_v1.epoch_number}, S}. {{no_change, FinalProps, P_current#projection_v1.epoch_number}, S}.
@ -1915,7 +1823,7 @@ react_to_env_C100_inner(Author_latest, NotSanesDict0, _MyName,
S2 = S#ch_mgr{not_sanes=NotSanesDict, sane_transitions=0}, S2 = S#ch_mgr{not_sanes=NotSanesDict, sane_transitions=0},
case orddict:fetch(Author_latest, NotSanesDict) of case orddict:fetch(Author_latest, NotSanesDict) of
N when N > ?TOO_FREQUENT_BREAKER -> N when N > ?TOO_FREQUENT_BREAKER ->
?V("\n\nYOYO ~w breaking the cycle insane-freq=~w by-author=~w of:\n current: ~w\n new : ~w\n", [_MyName, N, Author_latest, machi_projection:make_summary(S#ch_mgr.proj), machi_projection:make_summary(P_latest)]), %% ?V("\n\nYOYO ~w breaking the cycle of:\n current: ~w\n new : ~w\n", [_MyName, machi_projection:make_summary(S#ch_mgr.proj), machi_projection:make_summary(P_latest)]),
?REACT({c100, ?LINE, [{not_sanes_author_count, N}]}), ?REACT({c100, ?LINE, [{not_sanes_author_count, N}]}),
react_to_env_C103(P_newprop, P_latest, P_current_calc, S2); react_to_env_C103(P_newprop, P_latest, P_current_calc, S2);
N -> N ->
@ -1936,14 +1844,12 @@ react_to_env_C103(#projection_v1{epoch_number=_Epoch_newprop} = _P_newprop,
members_dict=MembersDict} = P_current, members_dict=MembersDict} = P_current,
P_none0 = make_none_projection(Epoch_latest, P_none0 = make_none_projection(Epoch_latest,
MyName, All_list, Witness_list, MembersDict), MyName, All_list, Witness_list, MembersDict),
ChainName = P_current#projection_v1.chain_name, P_none1 = P_none0#projection_v1{dbg=[{none_projection,true}]},
P_none1 = P_none0#projection_v1{chain_name=ChainName,
dbg=[{none_projection,true}]},
P_none = machi_projection:update_checksum(P_none1), P_none = machi_projection:update_checksum(P_none1),
?REACT({c103, ?LINE, ?REACT({c103, ?LINE,
[{current_epoch, P_current#projection_v1.epoch_number}, [{current_epoch, P_current#projection_v1.epoch_number},
{none_projection_epoch, P_none#projection_v1.epoch_number}]}), {none_projection_epoch, P_none#projection_v1.epoch_number}]}),
io:format(user, "SET add_admin_down(~w) at ~w current_epoch ~w none_proj_epoch ~w =====================================\n", [MyName, time(), P_current#projection_v1.epoch_number, P_none#projection_v1.epoch_number]), io:format(user, "SET add_admin_down(~w) at ~w =====================================\n", [MyName, time()]),
machi_fitness:add_admin_down(S#ch_mgr.fitness_svr, MyName, []), machi_fitness:add_admin_down(S#ch_mgr.fitness_svr, MyName, []),
timer:sleep(5*1000), timer:sleep(5*1000),
io:format(user, "SET delete_admin_down(~w) at ~w =====================================\n", [MyName, time()]), io:format(user, "SET delete_admin_down(~w) at ~w =====================================\n", [MyName, time()]),
@ -1980,7 +1886,7 @@ react_to_env_C110(P_latest, #ch_mgr{name=MyName} = S) ->
%% In contrast to the public projection store writes, Humming Consensus %% In contrast to the public projection store writes, Humming Consensus
%% doesn't care about the status of writes to the public store: it's %% doesn't care about the status of writes to the public store: it's
%% always relying only on successful reads of the public store. %% always relying only on successful reads of the public store.
case {?FLU_PC:write_projection(MyStorePid, private, P_latest2,?TO*30+66),Goo} of case {?FLU_PC:write_projection(MyStorePid, private, P_latest2,?TO*30),Goo} of
{ok, Goo} -> {ok, Goo} ->
?REACT({c110, [{write, ok}]}), ?REACT({c110, [{write, ok}]}),
react_to_env_C111(P_latest, P_latest2, Extra1, MyStorePid, S); react_to_env_C111(P_latest, P_latest2, Extra1, MyStorePid, S);
@ -2066,6 +1972,7 @@ react_to_env_C120(P_latest, FinalProps, #ch_mgr{proj_history=H,
?REACT(c120), ?REACT(c120),
H2 = add_and_trunc_history(P_latest, H, ?MAX_HISTORY_LENGTH), H2 = add_and_trunc_history(P_latest, H, ?MAX_HISTORY_LENGTH),
%% diversion_c120_verbose_goop(P_latest, S),
?REACT({c120, [{latest, machi_projection:make_summary(P_latest)}]}), ?REACT({c120, [{latest, machi_projection:make_summary(P_latest)}]}),
S2 = set_proj(S#ch_mgr{proj_history=H2, S2 = set_proj(S#ch_mgr{proj_history=H2,
sane_transitions=Xtns + 1}, P_latest), sane_transitions=Xtns + 1}, P_latest),
@ -2073,21 +1980,20 @@ react_to_env_C120(P_latest, FinalProps, #ch_mgr{proj_history=H,
false -> false ->
S2; S2;
{{_ConfEpoch, _ConfCSum}, ConfTime} -> {{_ConfEpoch, _ConfCSum}, ConfTime} ->
P_latestEpoch = P_latest#projection_v1.epoch_number, io:format(user, "\nCONFIRM debug C120 ~w was annotated ~w\n", [S#ch_mgr.name, P_latest#projection_v1.epoch_number]),
io:format(user, "\nCONFIRM debug C120 ~w was annotated ~w\n", [S#ch_mgr.name, P_latestEpoch]),
S2#ch_mgr{proj_unanimous=ConfTime} S2#ch_mgr{proj_unanimous=ConfTime}
end, end,
V = case file:read_file("/tmp/moomoo."++atom_to_list(S#ch_mgr.name)) of {ok,_} -> true; _ -> false end, V = case file:read_file("/tmp/moomoo."++atom_to_list(S#ch_mgr.name)) of {ok,_} -> true; _ -> false end,
if V -> io:format("C120: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end, if V -> io:format("C120: ~w: ~p\n", [S#ch_mgr.name, get(react)]); true -> ok end,
{{now_using, FinalProps, P_latest#projection_v1.epoch_number}, S3}. {{now_using, FinalProps, P_latest#projection_v1.epoch_number}, S3}.
add_and_trunc_history(#projection_v1{epoch_number=0}, H, _MaxLength) -> add_and_trunc_history(P_latest, H, MaxLength) ->
H;
add_and_trunc_history(#projection_v1{} = P_latest, H, MaxLength) ->
Latest_U_R = {P_latest#projection_v1.upi, P_latest#projection_v1.repairing}, Latest_U_R = {P_latest#projection_v1.upi, P_latest#projection_v1.repairing},
add_and_trunc_history(Latest_U_R, H, MaxLength); H2 = if P_latest#projection_v1.epoch_number > 0 ->
add_and_trunc_history(Item, H, MaxLength) -> queue:in(Latest_U_R, H);
H2 = queue:in(Item, H), true ->
H
end,
case queue:len(H2) of case queue:len(H2) of
X when X > MaxLength -> X when X > MaxLength ->
{_V, Hxx} = queue:out(H2), {_V, Hxx} = queue:out(H2),
@ -2100,9 +2006,10 @@ react_to_env_C200(Retries, P_latest, S) ->
?REACT(c200), ?REACT(c200),
try try
AuthorProxyPid = proxy_pid(P_latest#projection_v1.author_server, S), AuthorProxyPid = proxy_pid(P_latest#projection_v1.author_server, S),
%% This is just advisory, we don't need a sync reply. ?FLU_PC:kick_projection_reaction(AuthorProxyPid, [])
?FLU_PC:kick_projection_reaction(AuthorProxyPid, [], 100)
catch _Type:_Err -> catch _Type:_Err ->
%% ?V("TODO: tell_author_yo is broken: ~p ~p\n",
%% [_Type, _Err]),
ok ok
end, end,
react_to_env_C210(Retries, S). react_to_env_C210(Retries, S).
@ -2293,7 +2200,6 @@ projection_transition_is_sane_except_si_epoch(
creation_time=CreationTime1, creation_time=CreationTime1,
mode=CMode1, mode=CMode1,
author_server=AuthorServer1, author_server=AuthorServer1,
chain_name=ChainName1,
all_members=All_list1, all_members=All_list1,
witnesses=Witness_list1, witnesses=Witness_list1,
down=Down_list1, down=Down_list1,
@ -2305,7 +2211,6 @@ projection_transition_is_sane_except_si_epoch(
creation_time=CreationTime2, creation_time=CreationTime2,
mode=CMode2, mode=CMode2,
author_server=AuthorServer2, author_server=AuthorServer2,
chain_name=ChainName2,
all_members=All_list2, all_members=All_list2,
witnesses=Witness_list2, witnesses=Witness_list2,
down=Down_list2, down=Down_list2,
@ -2326,8 +2231,7 @@ projection_transition_is_sane_except_si_epoch(
true = is_binary(CSum1) andalso is_binary(CSum2), true = is_binary(CSum1) andalso is_binary(CSum2),
{_,_,_} = CreationTime1, {_,_,_} = CreationTime1,
{_,_,_} = CreationTime2, {_,_,_} = CreationTime2,
true = is_atom(AuthorServer1) andalso is_atom(AuthorServer2), true = is_atom(AuthorServer1) andalso is_atom(AuthorServer2), % todo type may change?
true = is_atom(ChainName1) andalso is_atom(ChainName2),
true = is_list(All_list1) andalso is_list(All_list2), true = is_list(All_list1) andalso is_list(All_list2),
true = is_list(Witness_list1) andalso is_list(Witness_list2), true = is_list(Witness_list1) andalso is_list(Witness_list2),
true = is_list(Down_list1) andalso is_list(Down_list2), true = is_list(Down_list1) andalso is_list(Down_list2),
@ -2339,9 +2243,6 @@ projection_transition_is_sane_except_si_epoch(
%% projection_transition_is_sane_with_si_epoch(). %% projection_transition_is_sane_with_si_epoch().
true = Epoch2 >= Epoch1, true = Epoch2 >= Epoch1,
%% Don't change chain names in the middle of the stream.
true = (ChainName1 == ChainName2),
%% No duplicates %% No duplicates
true = lists:sort(Witness_list2) == lists:usort(Witness_list2), true = lists:sort(Witness_list2) == lists:usort(Witness_list2),
true = lists:sort(Down_list2) == lists:usort(Down_list2), true = lists:sort(Down_list2) == lists:usort(Down_list2),
@ -2349,7 +2250,7 @@ projection_transition_is_sane_except_si_epoch(
true = lists:sort(Repairing_list2) == lists:usort(Repairing_list2), true = lists:sort(Repairing_list2) == lists:usort(Repairing_list2),
%% Disjoint-ness %% Disjoint-ness
%% %% %% %% %% %% %% %% All_list1 = All_list2, % todo will probably change All_list1 = All_list2, % todo will probably change
%% true = lists:sort(All_list2) == lists:sort(Down_list2 ++ UPI_list2 ++ %% true = lists:sort(All_list2) == lists:sort(Down_list2 ++ UPI_list2 ++
%% Repairing_list2), %% Repairing_list2),
[] = [X || X <- Witness_list2, not lists:member(X, All_list2)], [] = [X || X <- Witness_list2, not lists:member(X, All_list2)],
@ -2454,7 +2355,8 @@ poll_private_proj_is_upi_unanimous_sleep(Count, #ch_mgr{runenv=RunEnv}=S) ->
S2 S2
end. end.
poll_private_proj_is_upi_unanimous3(#ch_mgr{name=MyName, proj=P_current} = S) -> poll_private_proj_is_upi_unanimous3(#ch_mgr{name=MyName, proj=P_current,
opts=MgrOpts} = S) ->
UPI = P_current#projection_v1.upi, UPI = P_current#projection_v1.upi,
EpochID = machi_projection:make_epoch_id(P_current), EpochID = machi_projection:make_epoch_id(P_current),
{Rs, S2} = read_latest_projection_call_only2(private, UPI, S), {Rs, S2} = read_latest_projection_call_only2(private, UPI, S),
@@ -2487,30 +2389,33 @@ poll_private_proj_is_upi_unanimous3(#ch_mgr{name=MyName, proj=P_current,
             Annotation = make_annotation(EpochID, Now),
             NewDbg2 = [Annotation|P_currentFull#projection_v1.dbg2],
             NewProj = P_currentFull#projection_v1{dbg2=NewDbg2},
-            ProjStore = get_projection_store_pid_or_regname(S),
+            ProjStore = case get_projection_store_regname(MgrOpts) of
+                            undefined ->
+                                machi_flu_psup:make_proj_supname(MyName);
+                            PStr ->
+                                PStr
+                        end,
             #projection_v1{epoch_number=_EpochRep,
                            epoch_csum= <<_CSumRep:4/binary,_/binary>>,
-                           author_server=AuthRep,
                            upi=_UPIRep,
                            repairing=_RepairingRep} = NewProj,
             ok = machi_projection_store:write(ProjStore, private, NewProj),
-            case proplists:get_value(private_write_verbose_confirm, S#ch_mgr.opts) of
+            case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of
                 true ->
-                    error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w rep ~w auth ~w by ~w\n", [_EpochRep, _CSumRep, _UPIRep, _RepairingRep, AuthRep, MyName]);
+                    io:format(user, "\n~s CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [machi_util:pretty_time(), _EpochRep, _CSumRep, _UPIRep, _RepairingRep, MyName]);
                 _ ->
                     ok
             end,
             %% Unwedge our FLU.
             {ok, NotifyPid} = machi_projection_store:get_wedge_notify_pid(ProjStore),
             _ = machi_flu1:update_wedge_state(NotifyPid, false, EpochID),
-            #ch_mgr{proj_history=H} = S2,
-            H2 = add_and_trunc_history({confirm, Epoch}, H,
-                                       ?MAX_HISTORY_LENGTH),
-            S2#ch_mgr{proj_unanimous=Now, proj_history=H2};
+            S2#ch_mgr{proj_unanimous=Now};
         _ ->
             S2
     end;
         _Else ->
+            %% io:format(user, "poll by ~w: want ~W got ~W\n",
+            %%           [MyName, EpochID, 6, _Else, 8]),
             S2
     end.
@@ -2546,14 +2451,6 @@ gobble_calls(StaticCall) ->
             ok
     end.
-gobble_ticks() ->
-    receive
-        tick_check_environment ->
-            gobble_ticks()
-    after 0 ->
-            ok
-    end.
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 perhaps_start_repair(#ch_mgr{name=MyName,
@@ -2569,13 +2466,12 @@ perhaps_start_repair(#ch_mgr{name=MyName,
     %% RepairOpts = [{repair_mode, check}, verbose],
     RepairFun = fun() -> do_repair(S, RepairOpts, CMode) end,
     LastUPI = lists:last(UPI),
-    StabilityTime = application:get_env(machi, stability_time, ?REPAIR_START_STABILITY_TIME),
     IgnoreStabilityTime_p = proplists:get_value(ignore_stability_time,
                                                 S#ch_mgr.opts, false),
     case timer:now_diff(os:timestamp(), Start) div 1000000 of
         N when MyName == LastUPI andalso
                (IgnoreStabilityTime_p orelse
-                N >= StabilityTime) ->
+                N >= ?REPAIR_START_STABILITY_TIME) ->
             {WorkerPid, _Ref} = spawn_monitor(RepairFun),
             S#ch_mgr{repair_worker=WorkerPid,
                      repair_start=os:timestamp(),
@@ -2616,8 +2512,8 @@ do_repair(#ch_mgr{name=MyName,
     T1 = os:timestamp(),
     RepairId = proplists:get_value(repair_id, Opts, id1),
     error_logger:info_msg(
-      "Repair ~w start: tail ~p of ~p -> ~p, ~p\n",
-      [RepairId, MyName, UPI0, Repairing, RepairMode]),
+      "Repair start: tail ~p of ~p -> ~p, ~p ID ~w\n",
+      [MyName, UPI0, Repairing, RepairMode, RepairId]),
     UPI = UPI0 -- Witness_list,
     Res = machi_chain_repair:repair(RepairMode, MyName, Repairing, UPI,
@@ -2630,9 +2526,10 @@ do_repair(#ch_mgr{name=MyName,
           end,
     Stats = [{K, ets:lookup_element(ETS, K, 2)} || K <- ETS_T_Keys],
     error_logger:info_msg(
-      "Repair ~w ~s: tail ~p of ~p finished ~p: "
-      "~p Stats: ~p\n",
-      [RepairId, Summary, MyName, UPI0, RepairMode, Res, Stats]),
+      "Repair ~s: tail ~p of ~p finished ~p repair ID ~w: "
+      "~p\nStats ~p\n",
+      [Summary, MyName, UPI0, RepairMode, RepairId,
+       Res, Stats]),
     ets:delete(ETS),
     exit({repair_final_status, Res});
         _ ->
@@ -2869,7 +2766,6 @@ full_majority_size(L) when is_list(L) ->
     full_majority_size(length(L)).
 make_zerf(#projection_v1{epoch_number=OldEpochNum,
-                         chain_name=ChainName,
                          all_members=AllMembers,
                          members_dict=MembersDict,
                          witnesses=OldWitness_list
@@ -2892,8 +2788,7 @@ make_zerf(#projection_v1{epoch_number=OldEpochNum,
                   MyName, AllMembers, OldWitness_list,
                   MembersDict),
             machi_projection:update_checksum(
-              P#projection_v1{chain_name=ChainName,
-                              mode=cp_mode,
+              P#projection_v1{mode=cp_mode,
                               dbg2=[zerf_none,{up,Up},{maj,MajoritySize}]});
        true ->
             make_zerf2(OldEpochNum, Up, MajoritySize, MyName,
@@ -2908,6 +2803,7 @@ make_zerf2(OldEpochNum, Up, MajoritySize, MyName, AllMembers, OldWitness_list,
             Proj2 = Proj#projection_v1{dbg2=[{make_zerf,Epoch},
                                              {yyy_hack, get(yyy_hack)},
                                              {up,Up},{maj,MajoritySize}]},
+            %% io:format(user, "ZERF ~w\n",[machi_projection:make_summary(Proj2)]),
             Proj2
     catch
         throw:{zerf,no_common} ->
@@ -2984,36 +2880,41 @@ zerf_find_last_annotated(FLU, MajoritySize, S) ->
             []   % lists:flatten() will destroy
     end.
-perhaps_verbose_c111(P_latest2, #ch_mgr{name=MyName, opts=Opts}=S) ->
-    PrivWriteVerb = proplists:get_value(private_write_verbose, Opts, false),
-    PrivWriteVerbCONFIRM = proplists:get_value(private_write_verbose_confirm, Opts, false),
-    if PrivWriteVerb orelse PrivWriteVerbCONFIRM ->
+perhaps_verbose_c111(P_latest2, S) ->
+    case proplists:get_value(private_write_verbose, S#ch_mgr.opts) of
+        true ->
             Dbg2X = lists:keydelete(react, 1,
                                     P_latest2#projection_v1.dbg2) ++
                 [{is_annotated,is_annotated(P_latest2)}],
             P_latest2x = P_latest2#projection_v1{dbg2=Dbg2X}, % limit verbose len.
             Last2 = get(last_verbose),
             Summ2 = machi_projection:make_summary(P_latest2x),
-            if PrivWriteVerb, Summ2 /= Last2 ->
-                    put(last_verbose, Summ2),
-                    error_logger:info_msg("~p uses plain: ~w \n",
-                                          [MyName, Summ2]);
-               true ->
-                    ok
-            end,
-            if PrivWriteVerbCONFIRM,
-               P_latest2#projection_v1.upi == [],
+            if P_latest2#projection_v1.upi == [],
                (S#ch_mgr.proj)#projection_v1.upi /= [] ->
                     <<CSumRep:4/binary,_/binary>> =
                         P_latest2#projection_v1.epoch_csum,
-                    error_logger:info_msg("CONFIRM epoch ~w ~w upi ~w rep ~w auth ~w by ~w\n", [(S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, P_latest2#projection_v1.author_server, S#ch_mgr.name]);
+                    io:format(user, "\n~s CONFIRM epoch ~w ~w upi ~w rep ~w by ~w\n", [machi_util:pretty_time(), (S#ch_mgr.proj)#projection_v1.epoch_number, CSumRep, P_latest2#projection_v1.upi, P_latest2#projection_v1.repairing, S#ch_mgr.name]);
                true ->
                     ok
-            end;
-       true ->
-            ok
-    end.
+            end,
+            case proplists:get_value(private_write_verbose,
+                                     S#ch_mgr.opts) of
+                true when Summ2 /= Last2 ->
+                    put(last_verbose, Summ2),
+                    ?V("\n~s ~p uses plain: ~w \n",
+                       [machi_util:pretty_time(), S#ch_mgr.name, Summ2]);
+                _ ->
+                    ok
+            end;
+        _ ->
+            ok
+    end.
+calc_consistency_mode(_Witness_list = []) ->
+    ap_mode;
+calc_consistency_mode(_Witness_list) ->
+    cp_mode.
 set_proj(S, Proj) ->
     S#ch_mgr{proj=Proj, proj_unanimous=false}.
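A quick behavioral note on the `calc_consistency_mode/1` helper above, as an editor's sketch rather than anything from this changeset (the witness name is invented): an empty witness list selects AP mode, any non-empty witness list selects CP mode.

```erlang
%% Editor's illustration only; `w1a' is an invented witness name.
ap_mode = calc_consistency_mode([]),
cp_mode = calc_consistency_mode([w1a]).
```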
@@ -3046,10 +2947,3 @@ get_unfit_list(FitnessServer) ->
             []
     end.
-get_projection_store_pid_or_regname(#ch_mgr{name=MyName, opts=MgrOpts}) ->
-    case get_projection_store_regname(MgrOpts) of
-        undefined ->
-            machi_flu_psup:make_proj_supname(MyName);
-        PStr ->
-            PStr
-    end.

--- a/src/machi_chain_repair.erl
+++ b/src/machi_chain_repair.erl
@@ -103,10 +103,9 @@ repair(ap_mode=ConsistencyMode, Src, Repairing, UPI, MembersDict, ETS, Opts) ->
     Add = fun(Name, Pid) -> put(proxies_dict, orddict:store(Name, Pid, get(proxies_dict))) end,
     OurFLUs = lists:usort([Src] ++ Repairing ++ UPI), % AP assumption!
     RepairMode = proplists:get_value(repair_mode, Opts, repair),
-    Verb = proplists:get_value(verbose, Opts, false),
-    RepairId = proplists:get_value(repair_id, Opts, id1),
+    Verb = proplists:get_value(verbose, Opts, true),
     Res = try
-        _ = [begin
+        [begin
                  {ok, Proxy} = machi_proxy_flu1_client:start_link(P),
                  Add(FLU, Proxy)
              end || {FLU,P} <- MembersDict, lists:member(FLU, OurFLUs)],
@@ -117,39 +116,31 @@ repair(ap_mode=ConsistencyMode, Src, Repairing, UPI, MembersDict, ETS, Opts) ->
                          get_file_lists(Proxy, FLU, Dict)
                  end, D, ProxiesDict),
         MissingFileSummary = make_missing_file_summary(D2, OurFLUs),
-        %% ?VERB("~w MissingFileSummary ~p\n",[RepairId,MissingFileSummary]),
-        lager:info("Repair ~w MissingFileSummary ~p\n",
-                   [RepairId, MissingFileSummary]),
+        ?VERB("MissingFileSummary ~p\n", [MissingFileSummary]),
         [ets:insert(ETS, {{directive_bytes, FLU}, 0}) || FLU <- OurFLUs],
         %% Repair files from perspective of Src, i.e. tail(UPI).
         SrcProxy = orddict:fetch(Src, ProxiesDict),
         {ok, EpochID} = machi_proxy_flu1_client:get_epoch_id(
                           SrcProxy, ?SHORT_TIMEOUT),
-        %% ?VERB("Make repair directives: "),
+        ?VERB("Make repair directives: "),
         Ds =
             [{File, make_repair_directives(
                       ConsistencyMode, RepairMode, File, Size, EpochID,
                       Verb,
                       Src, OurFLUs, ProxiesDict, ETS)} ||
                 {File, {Size, _MissingList}} <- MissingFileSummary],
-        %% ?VERB(" done\n"),
-        lager:info("Repair ~w repair directives finished\n", [RepairId]),
+        ?VERB(" done\n"),
         [begin
              [{_, Bytes}] = ets:lookup(ETS, {directive_bytes, FLU}),
-             %% ?VERB("Out-of-sync data for FLU ~p: ~s MBytes\n",
-             %%       [FLU, mbytes(Bytes)]),
-             lager:info("Repair ~w "
-                        "Out-of-sync data for FLU ~p: ~s MBytes\n",
-                        [RepairId, FLU, mbytes(Bytes)]),
-             ok
+             ?VERB("Out-of-sync data for FLU ~p: ~s MBytes\n",
+                   [FLU, mbytes(Bytes)])
         end || FLU <- OurFLUs],
-        %% ?VERB("Execute repair directives: "),
+        ?VERB("Execute repair directives: "),
        ok = execute_repair_directives(ConsistencyMode, Ds, Src, EpochID,
                                       Verb, OurFLUs, ProxiesDict, ETS),
-        %% ?VERB(" done\n"),
-        lager:info("Repair ~w repair directives finished\n", [RepairId]),
+        ?VERB(" done\n"),
        ok
    catch
        What:Why ->
@@ -207,7 +198,7 @@ make_repair_compare_fun(SrcFLU) ->
             T_a =< T_b
     end.
-make_repair_directives(ConsistencyMode, RepairMode, File, Size, _EpochID,
+make_repair_directives(ConsistencyMode, RepairMode, File, Size, EpochID,
                        Verb, Src, FLUs0, ProxiesDict, ETS) ->
     true = (Size < ?MAX_OFFSET),
     FLUs = lists:usort(FLUs0),
@@ -216,9 +207,11 @@ make_repair_directives(ConsistencyMode, RepairMode, File, Size, EpochID,
              Proxy = orddict:fetch(FLU, ProxiesDict),
              OffSzCs =
                  case machi_proxy_flu1_client:checksum_list(
-                        Proxy, File, ?LONG_TIMEOUT) of
+                        Proxy, EpochID, File, ?LONG_TIMEOUT) of
                      {ok, InfoBin} ->
-                         machi_csum_table:split_checksum_list_blob_decode(InfoBin);
+                         {Info, _} =
+                             machi_flu1:split_checksum_list_blob_decode(InfoBin),
+                         Info;
                      {error, no_such_file} ->
                          []
                  end,
@@ -236,6 +229,7 @@ make_repair_directives(ConsistencyMode, RepairMode, File, Size, EpochID,
 make_repair_directives2(C2, ConsistencyMode, RepairMode,
                         File, Verb, Src, FLUs, ProxiesDict, ETS) ->
+    ?VERB("."),
     make_repair_directives3(C2, ConsistencyMode, RepairMode,
                             File, Verb, Src, FLUs, ProxiesDict, ETS, []).
@@ -265,18 +259,7 @@ make_repair_directives3([{Offset, Size, CSum, _FLU}=A|Rest0],
             %%    byte range from all FLUs
             %% 3b. Log big warning about data loss.
             %% 4. Log any other checksum discrepencies as they are found.
-            QQ = [begin
-                      Pxy = orddict:fetch(FLU, ProxiesDict),
-                      {ok, EpochID} = machi_proxy_flu1_client:get_epoch_id(
-                                        Pxy, ?SHORT_TIMEOUT),
-                      NSInfo = undefined,
-                      XX = machi_proxy_flu1_client:read_chunk(
-                             Pxy, NSInfo, EpochID, File, Offset, Size, undefined,
-                             ?SHORT_TIMEOUT),
-                      {FLU, XX}
-                  end || {__Offset, __Size, __CSum, FLU} <- As],
-            exit({todo_repair_sanity_check, ?LINE, File, Offset, {as,As}, {qq,QQ}})
+            exit({todo_repair_sanity_check, ?LINE, File, Offset, As})
             end,
     %% List construction guarantees us that there's at least one ?MAX_OFFSET
     %% item remains. Sort order + our "taking" of all exact Offset+Size
@@ -297,16 +280,15 @@ make_repair_directives3([{Offset, Size, CSum, _FLU}=A|Rest0],
                          true  -> Src;
                          false -> hd(Gots)
                      end,
-            _ = [ets:update_counter(ETS, {directive_bytes, FLU_m}, Size) ||
-                    FLU_m <- Missing],
+            [ets:update_counter(ETS, {directive_bytes, FLU_m}, Size) ||
+                FLU_m <- Missing],
             if Missing == [] ->
                     noop;
                true ->
                     {copy, A, Missing}
-            end
-    %% end;
-    %% ConsistencyMode == cp_mode ->
-    %%         exit({todo_cp_mode, ?MODULE, ?LINE})
+            end;
+       ConsistencyMode == cp_mode ->
+            exit({todo_cp_mode, ?MODULE, ?LINE})
     end,
     Acc2 = if Do == noop -> Acc;
               true       -> [Do|Acc]
@@ -329,42 +311,38 @@ execute_repair_directives(ap_mode=_ConsistencyMode, Ds, _Src, EpochID, Verb,
                             {ProxiesDict, EpochID, Verb, ETS}, Ds),
     ok.
-execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, _Verb, ETS}=Acc) ->
+execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) ->
     EtsKeys = [{in_files, t_in_files}, {in_chunks, t_in_chunks},
                {in_bytes, t_in_bytes}, {out_files, t_out_files},
                {out_chunks, t_out_chunks}, {out_bytes, t_out_bytes}],
     [ets:insert(ETS, {L_K, 0}) || {L_K, _T_K} <- EtsKeys],
     F = fun({copy, {Offset, Size, TaggedCSum, MySrc}, MyDsts}, Acc2) ->
                 SrcP = orddict:fetch(MySrc, ProxiesDict),
-                %% case ets:lookup_element(ETS, in_chunks, 2) rem 100 of
-                %%     0 -> ?VERB(".2", []);
-                %%     _ -> ok
-                %% end,
+                case ets:lookup_element(ETS, in_chunks, 2) rem 100 of
+                    0 -> ?VERB(".", []);
+                    _ -> ok
+                end,
                 _T1 = os:timestamp(),
-                %% TODO: support case multiple written or trimmed chunks returned
-                NSInfo = undefined,
-                {ok, {[{_, Offset, Chunk, _ReadCSum}|OtherChunks], []=_TrimmedList}} =
-                    machi_proxy_flu1_client:read_chunk(
-                      SrcP, NSInfo, EpochID, File, Offset, Size, undefined,
-                      ?SHORT_TIMEOUT),
-                [] = OtherChunks,
+                {ok, Chunk} = machi_proxy_flu1_client:read_chunk(
+                                SrcP, EpochID, File, Offset, Size,
+                                ?SHORT_TIMEOUT),
                 _T2 = os:timestamp(),
                 <<_Tag:1/binary, CSum/binary>> = TaggedCSum,
                 case machi_util:checksum_chunk(Chunk) of
                     CSum_now when CSum_now == CSum ->
-                        _ = [begin
+                        [begin
                                  DstP = orddict:fetch(DstFLU, ProxiesDict),
                                  _T3 = os:timestamp(),
                                  ok = machi_proxy_flu1_client:write_chunk(
-                                        DstP, NSInfo, EpochID, File, Offset, Chunk, TaggedCSum,
+                                        DstP, EpochID, File, Offset, Chunk,
                                         ?SHORT_TIMEOUT),
                                  _T4 = os:timestamp()
                              end || DstFLU <- MyDsts],
-                        _ = ets:update_counter(ETS, in_chunks, 1),
-                        _ = ets:update_counter(ETS, in_bytes, Size),
+                        ets:update_counter(ETS, in_chunks, 1),
+                        ets:update_counter(ETS, in_bytes, Size),
                         N = length(MyDsts),
-                        _ = ets:update_counter(ETS, out_chunks, N),
-                        _ = ets:update_counter(ETS, out_bytes, N*Size),
+                        ets:update_counter(ETS, out_chunks, N),
+                        ets:update_counter(ETS, out_bytes, N*Size),
                         Acc2;
                     CSum_now ->
                         error_logger:error_msg(
@@ -383,7 +361,7 @@ execute_repair_directive({File, Cmds}, {ProxiesDict, EpochID, Verb, ETS}=Acc) ->
         end,
     ok = lists:foldl(F, ok, Cmds),
     %% Copy this file's stats to the total counts.
-    _ = [ets:update_counter(ETS, T_K, ets:lookup_element(ETS, L_K, 2)) ||
+    [ets:update_counter(ETS, T_K, ets:lookup_element(ETS, L_K, 2)) ||
         {L_K, T_K} <- EtsKeys],
     Acc.
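For orientation, this is the shape of a single repair directive that `execute_repair_directive/2` above consumes. The tuple layout comes from the fun head in the diff; every concrete value below is invented by the editor for illustration.

```erlang
%% {copy, {Offset, Size, TaggedCSum, SourceFLU}, DestinationFLUs}
Directive = {copy,
             {0,                % Offset
              4096,             % Size
              <<0:8, 0:160>>,   % tagged checksum: 1-byte tag + 20-byte SHA
              a},               % source FLU (repair reads from here)
             [b, c]}.           % FLUs that are missing this byte range
```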

--- a/src/machi_cinfo.erl
+++ /dev/null
@@ -1,104 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% @doc cluster_info callback module for machi specific information
%% gathering.
-module(machi_cinfo).
%% cluster_info callbacks
-export([register/0, cluster_info_init/0, cluster_info_generator_funs/0]).
%% for debug in interactive shell
-export([dump/0,
public_projection/1, private_projection/1,
chain_manager/1, fitness/1, flu1/1]).
-include("machi_projection.hrl").
-spec register() -> ok.
register() ->
ok = cluster_info:register_app(?MODULE).
-spec cluster_info_init() -> ok.
cluster_info_init() ->
ok.
-spec cluster_info_generator_funs() -> [{string(), fun((pid()) -> ok)}].
cluster_info_generator_funs() ->
FluNames = [Name || {Name, _, _, _} <- supervisor:which_children(machi_flu_sup)],
lists:flatten([generator_funs_package(Name) || Name <- FluNames]).
generator_funs_package(FluName) ->
[{"Public projection of FLU " ++ atom_to_list(FluName),
cinfo_wrapper(fun public_projection/1, FluName)},
{"Private projection of FLU " ++ atom_to_list(FluName),
cinfo_wrapper(fun private_projection/1, FluName)},
{"Chain manager status of FLU " ++ atom_to_list(FluName),
cinfo_wrapper(fun chain_manager/1, FluName)},
{"Fitness server status of FLU " ++ atom_to_list(FluName),
cinfo_wrapper(fun fitness/1, FluName)},
{"FLU1 status of FLU " ++ atom_to_list(FluName),
cinfo_wrapper(fun flu1/1, FluName)}].
dump() ->
{{Y,M,D},{HH,MM,SS}} = calendar:local_time(),
Filename = lists:flatten(io_lib:format(
"machi-ci-~4..0B~2..0B~2..0B-~2..0B~2..0B~2..0B.html",
[Y,M,D,HH,MM,SS])),
cluster_info:dump_local_node(Filename).
-spec public_projection(atom()) -> [{atom(), term()}].
public_projection(FluName) ->
projection(FluName, public).
-spec private_projection(atom()) -> [{atom(), term()}].
private_projection(FluName) ->
projection(FluName, private).
-spec chain_manager(atom()) -> term().
chain_manager(FluName) ->
Mgr = machi_flu_psup:make_mgr_supname(FluName),
sys:get_status(Mgr).
-spec fitness(atom()) -> term().
fitness(FluName) ->
Fitness = machi_flu_psup:make_fitness_regname(FluName),
sys:get_status(Fitness).
-spec flu1(atom()) -> [{atom(), term()}].
flu1(FluName) ->
State = machi_flu1_append_server:current_state(FluName),
machi_flu1_append_server:format_state(State).
%% Internal functions
projection(FluName, Kind) ->
ProjStore = machi_flu1:make_projection_server_regname(FluName),
{ok, Projection} = machi_projection_store:read_latest_projection(
whereis(ProjStore), Kind),
Fields = record_info(fields, projection_v1),
[_Name | Values] = tuple_to_list(Projection),
lists:zip(Fields, Values).
cinfo_wrapper(Fun, FluName) ->
fun(C) ->
cluster_info:format(C, "~p", [Fun(FluName)])
end.
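A minimal usage sketch for the deleted module above, assuming a running `cluster_info` application (the output filename pattern is the one built by `dump/0`):

```erlang
ok = machi_cinfo:register(),
machi_cinfo:dump().   %% writes machi-ci-YYYYMMDD-HHMMSS.html in the CWD
```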

--- a/src/machi_config.erl
+++ /dev/null
@@ -1,43 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% @doc Configuration consulting utilities. Some conventions:
%% - The function name should match with exact configuration
%% name in `app.config' or `advanced.config' of `machi' section.
%% - The default value of that configuration is expected to be in
%% cuttlefish schema file. Otherwise some macro in headers may
%% be chosen.
%% - Documentation of the configuration is supposed to be written
%% in cuttlefish schema file, rather than @doc section of the function.
%% - spec of the function should be written.
%% - Returning `undefined' is strongly discouraged. Return some default
%% value instead.
%% - `application:get_env/3' is recommended. See `max_file_size/0' for
%% example.
-module(machi_config).
-include("machi.hrl").
-export([max_file_size/0]).
-spec max_file_size() -> pos_integer().
max_file_size() ->
application:get_env(machi, max_file_size, ?DEFAULT_MAX_FILE_SIZE).
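The conventions in the deleted module's doc comment are easiest to see next to a second accessor. A minimal sketch, assuming a hypothetical `sync_interval` key and a hypothetical `?DEFAULT_SYNC_INTERVAL` macro (neither exists in this changeset); only `application:get_env/3` is real OTP API:

```erlang
-spec sync_interval() -> pos_integer().
sync_interval() ->
    application:get_env(machi, sync_interval, ?DEFAULT_SYNC_INTERVAL).
```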

(File diff suppressed because it is too large.)

--- a/src/machi_csum_table.erl
+++ b/src/machi_csum_table.erl
@@ -1,233 +1,108 @@
-%% -------------------------------------------------------------------
-%%
-%% Copyright (c) 2007-2016 Basho Technologies, Inc. All Rights Reserved.
-%%
-%% This file is provided to you under the Apache License,
-%% Version 2.0 (the "License"); you may not use this file
-%% except in compliance with the License. You may obtain
-%% a copy of the License at
-%%
-%%   http://www.apache.org/licenses/LICENSE-2.0
-%%
-%% Unless required by applicable law or agreed to in writing,
-%% software distributed under the License is distributed on an
-%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-%% KIND, either express or implied. See the License for the
-%% specific language governing permissions and limitations
-%% under the License.
-%%
-%% -------------------------------------------------------------------
 -module(machi_csum_table).
 -export([open/2,
-         find/3,
-         write/6, write/4, trim/5,
-         find_leftneighbor/2, find_rightneighbor/2,
-         all_trimmed/3, any_trimmed/3,
-         all_trimmed/2,
+         find/3, write/4, trim/3,
+         sync/1,
          calc_unwritten_bytes/1,
-         split_checksum_list_blob_decode/1,
-         all/1,
-         close/1, delete/1,
-         foldl_chunks/3]).
+         close/1, delete/1]).
+-export([encode_csum_file_entry/3, decode_csum_file_entry/1]).
 -include("machi.hrl").
 -ifdef(TEST).
--include_lib("eunit/include/eunit.hrl").
+-export([split_checksum_list_blob_decode/1]).
 -endif.
 -record(machi_csum_table,
-        {file :: string(),
-         table :: eleveldb:db_ref()}).
+        {file :: filename:filename(),
+         fd :: file:descriptor(),
+         table :: ets:tid()}).
 -type table() :: #machi_csum_table{}.
 -type byte_sequence() :: { Offset :: non_neg_integer(),
                            Size   :: pos_integer()|infinity }.
--type chunk() :: {Offset :: machi_dt:file_offset(),
-                  Size :: machi_dt:chunk_size(),
-                  machi_dt:chunk_csum() | trimmed | none}.
 -export_type([table/0]).
--spec open(string(), proplists:proplist()) ->
+-spec open(filename:filename(), proplists:proplists()) ->
           {ok, table()} | {error, file:posix()}.
 open(CSumFilename, _Opts) ->
-    LevelDBOptions = [{create_if_missing, true},
-                      %% Keep this table small so as not to interfere
-                      %% operating system's file cache, which is for
-                      %% Machi's main read efficiency
-                      {total_leveldb_mem_percent, 10}],
-    {ok, T} = eleveldb:open(CSumFilename, LevelDBOptions),
-    %% Dummy entry for reserved headers
-    ok = eleveldb:put(T,
-                      sext:encode({0, ?MINIMUM_OFFSET}),
-                      sext:encode(?CSUM_TAG_NONE_ATOM),
-                      [{sync, true}]),
+    T = ets:new(?MODULE, [private, ordered_set]),
     C0 = #machi_csum_table{
            file=CSumFilename,
           table=T},
-    {ok, C0}.
+    case file:read_file(CSumFilename) of
+        %% , [read, raw, binary]) of
+        {ok, Bin} ->
+            List = case split_checksum_list_blob_decode(Bin) of
+                       {List0, <<>>} ->
+                           List0;
+                       {List0, _Junk} ->
+                           %% Partially written, needs repair TODO
+                           %% [write(CSumFilename, List),
+                           List0
+                   end,
+            %% assuming all entries are strictly ordered by offset,
+            %% trim command should always come after checksum entry.
+            %% *if* by any chance that order cound not be kept, we
+            %% still can do ordering check and monotonic merge here.
+            ets:insert(T, List);
+        {error, enoent} ->
+            ok;
+        Error ->
+            throw(Error)
+    end,
+    {ok, Fd} = file:open(CSumFilename, [raw, binary, append]),
+    {ok, C0#machi_csum_table{fd=Fd}}.
--spec split_checksum_list_blob_decode(binary())-> [chunk()].
-split_checksum_list_blob_decode(Bin) ->
-    erlang:binary_to_term(Bin).
--define(has_overlap(LeftOffset, LeftSize, RightOffset, RightSize),
-        ((LeftOffset - (RightOffset+RightSize)) * (LeftOffset+LeftSize - RightOffset) < 0)).
--spec find(table(), machi_dt:file_offset(), machi_dt:chunk_size())
-          -> [chunk()].
-find(#machi_csum_table{table=T}, Offset, Size) ->
-    {ok, I} = eleveldb:iterator(T, [], keys_only),
-    EndKey = sext:encode({Offset+Size, 0}),
-    StartKey = sext:encode({Offset, Size}),
-    {ok, FirstKey} = case eleveldb:iterator_move(I, StartKey) of
-                         {error, invalid_iterator} ->
-                             try
-                                 %% Assume that the invalid_iterator is because
-                                 %% we tried to move to the end via StartKey.
-                                 %% Instead, move there directly.
-                                 {ok, _} = eleveldb:iterator_move(I, last),
-                                 {ok, _} = eleveldb:iterator_move(I, prev)
-                             catch
-                                 _:_ ->
-                                     {ok, _} = eleveldb:iterator_move(I, first)
-                             end;
-                         {ok, _} = R0 ->
-                             case eleveldb:iterator_move(I, prev) of
-                                 {error, invalid_iterator} ->
-                                     R0;
-                                 {ok, _} = R1 ->
-                                     R1
-                             end
-                     end,
-    _ = eleveldb:iterator_close(I),
-    FoldFun = fun({K, V}, Acc) ->
-                      {TargetOffset, TargetSize} = sext:decode(K),
-                      case ?has_overlap(TargetOffset, TargetSize, Offset, Size) of
-                          true ->
-                              [{TargetOffset, TargetSize, sext:decode(V)}|Acc];
-                          false ->
-                              Acc
-                      end;
-                 (_K, Acc) ->
-                      lager:error("~p wrong option", [_K]),
-                      Acc
-              end,
-    lists:reverse(eleveldb_fold(T, FirstKey, EndKey, FoldFun, [])).
-%% @doc Updates all chunk info, by deleting existing entries if exists
-%% and putting new chunk info
--spec write(table(),
-            machi_dt:file_offset(), machi_dt:chunk_size(),
-            machi_dt:chunk_csum()|'none'|'trimmed',
-            undefined|chunk(), undefined|chunk()) ->
-                   ok | {error, term()}.
-write(#machi_csum_table{table=T} = CsumT, Offset, Size, CSum,
-      LeftUpdate, RightUpdate) ->
-    PutOps =
-        [{put,
-          sext:encode({Offset, Size}),
-          sext:encode(CSum)}]
-        ++ case LeftUpdate of
-               {LO, LS, LCsum} when LO + LS =:= Offset ->
-                   [{put,
-                     sext:encode({LO, LS}),
-                     sext:encode(LCsum)}];
-               undefined ->
-                   []
-           end
-        ++ case RightUpdate of
-               {RO, RS, RCsum} when RO =:= Offset + Size ->
-                   [{put,
-                     sext:encode({RO, RS}),
-                     sext:encode(RCsum)}];
-               undefined ->
-                   []
-           end,
-    Chunks = find(CsumT, Offset, Size),
-    DeleteOps = lists:map(fun({O, L, _}) ->
-                                  {delete, sext:encode({O, L})}
-                          end, Chunks),
-    %% io:format(user, "PutOps: ~P\n", [PutOps, 20]),
-    %% io:format(user, "DelOps: ~P\n", [DeleteOps, 20]),
-    eleveldb:write(T, DeleteOps ++ PutOps, [{sync, true}]).
--spec find_leftneighbor(table(), non_neg_integer()) ->
-                               undefined | chunk().
-find_leftneighbor(CsumT, Offset) ->
-    case find(CsumT, Offset, 1) of
-        [] -> undefined;
-        [{Offset, _, _}] -> undefined;
-        [{LOffset, _, CsumOrTrimmed}] -> {LOffset, Offset - LOffset, CsumOrTrimmed}
-    end.
--spec find_rightneighbor(table(), non_neg_integer()) ->
-                                undefined | chunk().
-find_rightneighbor(CsumT, Offset) ->
-    case find(CsumT, Offset, 1) of
-        [] -> undefined;
-        [{Offset, _, _}] -> undefined;
-        [{ROffset, RSize, CsumOrTrimmed}] ->
-            {Offset, ROffset + RSize - Offset, CsumOrTrimmed}
-    end.
--spec write(table(), machi_dt:file_offset(), machi_dt:file_size(),
-            machi_dt:chunk_csum()|none|trimmed) ->
-                   ok | {error, trimmed|file:posix()}.
-write(CsumT, Offset, Size, CSum) ->
-    write(CsumT, Offset, Size, CSum, undefined, undefined).
-trim(CsumT, Offset, Size, LeftUpdate, RightUpdate) ->
-    write(CsumT, Offset, Size,
-          trimmed, %% Should this be much smaller like $t or just 't'
-          LeftUpdate, RightUpdate).
+-spec find(table(), machi_dt:chunk_pos(), machi_dt:chunk_size()) ->
+                  {ok, machi_dt:chunk_csum()} | {error, trimmed|notfound}.
+find(#machi_csum_table{table=T}, Offset, Size) ->
+    %% TODO: Check whether all bytes here are written or not
+    case ets:lookup(T, Offset) of
+        [{Offset, Size, trimmed}] -> {error, trimmed};
+        [{Offset, Size, Checksum}] -> {ok, Checksum};
+        [{Offset, _, _}] -> {error, unknown_chunk};
+        [] -> {error, unknown_chunk}
+    end.
+-spec write(table(), machi_dt:chunk_pos(), machi_dt:chunk_size(),
+            machi_dt:chunk_csum()) ->
+                   ok | {error, used|file:posix()}.
+write(#machi_csum_table{fd=Fd, table=T}, Offset, Size, CSum) ->
+    Binary = encode_csum_file_entry_bin(Offset, Size, CSum),
+    case file:write(Fd, Binary) of
+        ok ->
+            case ets:insert_new(T, {Offset, Size, CSum}) of
+                true ->
+                    ok;
+                false ->
+                    {error, written}
+            end;
+        Error ->
+            Error
+    end.
+-spec trim(table(), machi_dt:chunk_pos(), machi_dt:chunk_size()) ->
+                  ok | {error, file:posix()}.
+trim(#machi_csum_table{fd=Fd, table=T}, Offset, Size) ->
+    Binary = encode_csum_file_entry_bin(Offset, Size, trimmed),
+    case file:write(Fd, Binary) of
+        ok ->
+            true = ets:insert(T, {Offset, Size, trimmed}),
+            ok;
+        Error ->
+            Error
+    end.
+-spec sync(table()) -> ok | {error, file:posix()}.
+sync(#machi_csum_table{fd=Fd}) ->
+    file:sync(Fd).
-%% @doc returns whether all bytes in a specific window is continously
-%% trimmed or not
--spec all_trimmed(table(), non_neg_integer(), non_neg_integer()) -> boolean().
-all_trimmed(#machi_csum_table{table=T}, Left, Right) ->
-    FoldFun = fun({_, _}, false) ->
-                      false;
-                 ({K, V}, Pos) when is_integer(Pos) andalso Pos =< Right ->
-                      case {sext:decode(K), sext:decode(V)} of
-                          {{Pos, Size}, trimmed} ->
-                              Pos + Size;
-                          {{Offset, Size}, _}
-                            when Offset + Size =< Left ->
-                              Left;
-                          _Eh ->
-                              false
-                      end
-              end,
-    case eleveldb:fold(T, FoldFun, Left, [{verify_checksums, true}]) of
-        false -> false;
-        Right -> true;
-        LastTrimmed when LastTrimmed < Right -> false;
-        _ -> %% LastTrimmed > Pos0, which is a irregular case but ok
-            true
-    end.
-%% @doc returns whether all bytes 0-Pos0 is continously trimmed or
-%% not, including header.
--spec all_trimmed(table(), non_neg_integer()) -> boolean().
-all_trimmed(CsumT, Pos0) ->
-    all_trimmed(CsumT, 0, Pos0).
--spec any_trimmed(table(),
-                  pos_integer(),
-                  machi_dt:chunk_size()) -> boolean().
-any_trimmed(CsumT, Offset, Size) ->
-    Chunks = find(CsumT, Offset, Size),
-    lists:any(fun({_, _, State}) -> State =:= trimmed end, Chunks).
 -spec calc_unwritten_bytes(table()) -> [byte_sequence()].
-calc_unwritten_bytes(#machi_csum_table{table=_} = CsumT) ->
-    case lists:sort(all(CsumT)) of
+calc_unwritten_bytes(#machi_csum_table{table=T}) ->
+    case lists:sort(ets:tab2list(T)) of
         [] ->
             [{?MINIMUM_OFFSET, infinity}];
         Sorted ->
@@ -235,34 +110,94 @@ calc_unwritten_bytes(#machi_csum_table{table=_} = CsumT) ->
             build_unwritten_bytes_list(Sorted, LastOffset, [])
     end.
-all(CsumT) ->
-    FoldFun = fun(E, Acc) -> [E|Acc] end,
-    lists:reverse(foldl_chunks(FoldFun, [], CsumT)).
 -spec close(table()) -> ok.
-close(#machi_csum_table{table=T}) ->
-    ok = eleveldb:close(T).
+close(#machi_csum_table{table=T, fd=Fd}) ->
+    true = ets:delete(T),
+    ok = file:close(Fd).
 -spec delete(table()) -> ok.
-delete(#machi_csum_table{table=T, file=F}) ->
-    catch eleveldb:close(T),
-    %% TODO change this to directory walk
-    case os:cmd("rm -rf " ++ F) of
-        "" -> ok;
+delete(#machi_csum_table{file=F} = C) ->
+    catch close(C),
+    case file:delete(F) of
+        ok -> ok;
+        {error, enoent} -> ok;
         E -> E
     end.
--spec foldl_chunks(fun((chunk(), Acc0 :: term()) -> Acc :: term()),
-                   Acc0 :: term(), table()) -> Acc :: term().
-foldl_chunks(Fun, Acc0, #machi_csum_table{table=T}) ->
-    FoldFun = fun({K, V}, Acc) ->
-                      {Offset, Len} = sext:decode(K),
-                      Fun({Offset, Len, sext:decode(V)}, Acc);
-                 (_K, Acc) ->
-                      _ = lager:error("~p: wrong option?", [_K]),
-                      Acc
-              end,
-    eleveldb:fold(T, FoldFun, Acc0, [{verify_checksums, true}]).
+%% @doc Encode `Offset + Size + TaggedCSum' into an `iolist()' type for
+%% internal storage by the FLU.
+-spec encode_csum_file_entry(
+        machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:chunk_s()) ->
+                                    iolist().
+encode_csum_file_entry(Offset, Size, TaggedCSum) ->
+    Len = 8 + 4 + byte_size(TaggedCSum),
+    [<<$w, Len:8/unsigned-big, Offset:64/unsigned-big, Size:32/unsigned-big>>,
+     TaggedCSum].
+%% @doc Encode `Offset + Size + TaggedCSum' into an `binary()' type for
+%% internal storage by the FLU.
+-spec encode_csum_file_entry_bin(
+        machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:chunk_s()) ->
+                                        binary().
+encode_csum_file_entry_bin(Offset, Size, trimmed) ->
+    <<$t, Offset:64/unsigned-big, Size:32/unsigned-big>>;
+encode_csum_file_entry_bin(Offset, Size, TaggedCSum) ->
+    Len = 8 + 4 + byte_size(TaggedCSum),
+    <<$w, Len:8/unsigned-big, Offset:64/unsigned-big, Size:32/unsigned-big,
+      TaggedCSum/binary>>.
+%% @doc Decode a single `binary()' blob into an
+%%      `{Offset,Size,TaggedCSum}' tuple.
+%%
+%% The internal encoding (which is currently exposed to the outside world
+%% via this function and related ones) is:
+%%
+%% <ul>
+%% <li> 1 byte: record length
+%% </li>
+%% <li> 8 bytes (unsigned big-endian): byte offset
+%% </li>
+%% <li> 4 bytes (unsigned big-endian): chunk size
+%% </li>
+%% <li> all remaining bytes: tagged checksum (1st byte = type tag)
+%% </li>
+%% </ul>
+%%
+%% See `machi.hrl' for the tagged checksum types, e.g.,
+%% `?CSUM_TAG_NONE'.
+-spec decode_csum_file_entry(binary()) ->
+                                    error |
+                                    {machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:chunk_s()}.
+decode_csum_file_entry(<<_:8/unsigned-big, Offset:64/unsigned-big, Size:32/unsigned-big, TaggedCSum/binary>>) ->
+    {Offset, Size, TaggedCSum};
+decode_csum_file_entry(_Else) ->
+    error.
+%% @doc Split a `binary()' blob of `checksum_list' data into a list of
+%%      `{Offset,Size,TaggedCSum}' tuples.
+-spec split_checksum_list_blob_decode(binary()) ->
+                                             {list({machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:chunk_s()}),
+                                              TrailingJunk::binary()}.
+split_checksum_list_blob_decode(Bin) ->
+    split_checksum_list_blob_decode(Bin, []).
+split_checksum_list_blob_decode(<<$w, Len:8/unsigned-big, Part:Len/binary, Rest/binary>>, Acc)->
+    One = <<Len:8/unsigned-big, Part/binary>>,
+    case decode_csum_file_entry(One) of
+        error ->
+            split_checksum_list_blob_decode(Rest, Acc);
+        DecOne ->
+            split_checksum_list_blob_decode(Rest, [DecOne|Acc])
+    end;
+split_checksum_list_blob_decode(<<$t, Offset:64/unsigned-big, Size:32/unsigned-big, Rest/binary>>, Acc) ->
+    %% trimmed offset
+    split_checksum_list_blob_decode(Rest, [{Offset, Size, trimmed}|Acc]);
+split_checksum_list_blob_decode(Rest, Acc) ->
+    {lists:reverse(Acc), Rest}.
 -spec build_unwritten_bytes_list( CsumData   :: [{ Offset   :: non_neg_integer(),
                                                    Size     :: pos_integer(),
@@ -283,47 +218,3 @@ build_unwritten_bytes_list([{CurrentOffset, CurrentSize, _Csum}|Rest], LastOffse
     build_unwritten_bytes_list(Rest, (CurrentOffset+CurrentSize), [{LastOffset, Hole}|Acc]);
 build_unwritten_bytes_list([{CO, CS, _Ck}|Rest], _LastOffset, Acc) ->
     build_unwritten_bytes_list(Rest, CO + CS, Acc).
-%% @doc If you want to find an overlap among two areas [x, y] and [a,
-%% b] where x &lt; y and a &lt; b; if (a-y)*(b-x) &lt; 0 then there's a
-%% overlap, else, > 0 then there're no overlap. border condition = 0
-%% is not overlap in this offset-size case.
-%% inclusion_match_spec(Offset, Size) ->
-%%     {'>', 0,
-%%      {'*',
-%%       {'-', Offset + Size, '$1'},
-%%       {'-', Offset, {'+', '$1', '$2'}}}}.
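(For concreteness, an editor's worked example of the overlap predicate above, with invented values: for [x, y] = [10, 20] and [a, b] = [15, 25], (a - y) * (b - x) = (15 - 20) * (25 - 10) = -75 < 0, so the ranges overlap; for the merely-touching ranges [a, b] = [20, 30], (20 - 20) * (30 - 10) = 0, which this convention counts as no overlap.)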
--spec eleveldb_fold(eleveldb:db_ref(), binary(), binary(),
-                    fun(({binary(), binary()}, AccType::term()) -> AccType::term()),
-                    AccType0::term()) ->
-                           AccType::term().
-eleveldb_fold(Ref, Start, End, FoldFun, InitAcc) ->
-    {ok, Iterator} = eleveldb:iterator(Ref, []),
-    try
-        eleveldb_do_fold(eleveldb:iterator_move(Iterator, Start),
-                         Iterator, End, FoldFun, InitAcc)
-    catch throw:IteratorClosed ->
-            {error, IteratorClosed}
-    after
-        eleveldb:iterator_close(Iterator)
-    end.
--spec eleveldb_do_fold({ok, binary(), binary()}|{error, iterator_closed|invalid_iterator}|{ok,binary()},
-                       eleveldb:itr_ref(), binary(),
-                       fun(({binary(), binary()}, AccType::term()) -> AccType::term()),
-                       AccType::term()) ->
-                              AccType::term().
-eleveldb_do_fold({ok, Key, Value}, _, End, FoldFun, Acc)
-  when End < Key ->
-    FoldFun({Key, Value}, Acc);
-eleveldb_do_fold({ok, Key, Value}, Iterator, End, FoldFun, Acc) ->
-    eleveldb_do_fold(eleveldb:iterator_move(Iterator, next),
-                     Iterator, End, FoldFun,
-                     FoldFun({Key, Value}, Acc));
-eleveldb_do_fold({error, iterator_closed}, _, _, _, Acc) ->
-    %% It's really an error which is not expected
-    throw({iterator_closed, Acc});
-eleveldb_do_fold({error, invalid_iterator}, _, _, _, Acc) ->
-    %% Probably reached to end
-    Acc.
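A round-trip sketch of the old checksum-file entry format documented in the right-hand version of `machi_csum_table` above; the offset, size, and checksum bytes are invented by the editor:

```erlang
TCSum = <<0:8, 0:160>>,                 % 1-byte tag + 20 zero bytes
Blob  = iolist_to_binary(
          machi_csum_table:encode_csum_file_entry(4096, 512, TCSum)),
<<$w, Entry/binary>> = Blob,
{4096, 512, TCSum} = machi_csum_table:decode_csum_file_entry(Entry),
%% The list decoder stops at, and returns, any trailing junk:
{[{4096, 512, TCSum}], <<"junk">>} =
    machi_csum_table:split_checksum_list_blob_decode(<<Blob/binary, "junk">>).
```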

--- a/src/machi_dt.erl
+++ b/src/machi_dt.erl
@@ -20,24 +20,15 @@
 -module(machi_dt).
--include("machi.hrl").
 -include("machi_projection.hrl").
--type append_opts() :: #append_opts{}.
--type chunk() :: chunk_bin() | iolist(). % client can choose either rep.
--type chunk_bin() :: binary().           % server returns binary() only.
--type chunk_csum() :: <<>> | chunk_csum_bin() | {csum_tag(), binary()}.
--type chunk_csum_bin() :: binary().      % 1 byte tag, N-1 bytes checksum
--type chunk_cstrm() :: 'trimmed' | chunk_csum().
--type chunk_summary() :: {file_offset(), chunk_size(), chunk_bin(), chunk_cstrm()}.
+-type chunk() :: chunk_bin() | {chunk_csum(), chunk_bin()}.
+-type chunk_bin() :: binary() | iolist().    % client can use either
+-type chunk_csum() :: binary().              % 1 byte tag, N-1 bytes checksum
+-type chunk_summary() :: {file_offset(), chunk_size(), binary()}.
+-type chunk_s() :: binary().                 % server always uses binary()
 -type chunk_pos() :: {file_offset(), chunk_size(), file_name_s()}.
 -type chunk_size() :: non_neg_integer().
-%% Tags that stand for how that checksum was generated. See
-%% machi_util:make_tagged_csum/{1,2} for further documentation and
-%% implementation.
--type csum_tag() :: none | client_sha | server_sha | server_regen_sha.
 -type error_general() :: 'bad_arg' | 'wedged' | 'bad_checksum'.
 -type epoch_csum() :: binary().
 -type epoch_num() :: -1 | non_neg_integer().
@@ -50,26 +41,17 @@
 -type file_prefix() :: binary() | list().
 -type inet_host() :: inet:ip_address() | inet:hostname().
 -type inet_port() :: inet:port_number().
--type locator() :: number().
--type namespace() :: binary().
--type namespace_version() :: non_neg_integer().
--type ns_info() :: #ns_info{}.
 -type projection() :: #projection_v1{}.
 -type projection_type() :: 'public' | 'private'.
--type read_opts() :: #read_opts{}.
--type read_opts_x() :: 'undefined' | 'noopt' | 'none' | #read_opts{}.
 -export_type([
-              append_opts/0,
               chunk/0,
               chunk_bin/0,
               chunk_csum/0,
-              chunk_csum_bin/0,
-              chunk_cstrm/0,
               chunk_summary/0,
+              chunk_s/0,
               chunk_pos/0,
               chunk_size/0,
-              csum_tag/0,
               error_general/0,
               epoch_csum/0,
               epoch_num/0,
@@ -82,13 +64,7 @@
               file_prefix/0,
               inet_host/0,
               inet_port/0,
-              locator/0,
-              namespace/0,
-              namespace_version/0,
-              ns_info/0,
               projection/0,
-              projection_type/0,
-              read_opts/0,
-              read_opts_x/0
+              projection_type/0
              ]).

--- a/src/machi_file_proxy.erl
+++ b/src/machi_file_proxy.erl
@@ -47,18 +47,15 @@
 %% public API
 -export([
-         start_link/3,
+         start_link/2,
          stop/1,
          sync/1,
          sync/2,
          read/3,
-         read/4,
          write/3,
          write/4,
-         trim/4,
          append/2,
-         append/4,
-         checksum_list/1
+         append/4
         ]).
 %% gen_server callbacks
@@ -71,7 +68,7 @@
          code_change/3
         ]).
--define(TICK, 5*1000).
+-define(TICK, 30*1000). %% XXX FIXME Should be something like 5 seconds
 -define(TICK_THRESHOLD, 5). %% After this + 1 more quiescent ticks, shutdown
 -define(TIMEOUT, 10*1000).
 -define(TOO_MANY_ERRORS_RATIO, 50).
@@ -79,26 +76,26 @@
 -type op_stats()      :: { Total  :: non_neg_integer(),
                            Errors :: non_neg_integer() }.
+-type byte_sequence() :: { Offset :: non_neg_integer(),
+                           Size   :: pos_integer()|infinity }.
 -record(state, {
-        fluname               :: atom(),
         data_dir              :: string() | undefined,
         filename              :: string() | undefined,
         data_path             :: string() | undefined,
         wedged = false        :: boolean(),
         csum_file             :: string()|undefined,
         csum_path             :: string()|undefined,
-        data_filehandle       :: file:io_device(),
-        csum_table            :: machi_csum_table:table(),
         eof_position = 0      :: non_neg_integer(),
-        max_file_size = ?DEFAULT_MAX_FILE_SIZE :: pos_integer(),
-        rollover = false      :: boolean(),
+        unwritten_bytes = []  :: [byte_sequence()],
+        data_filehandle       :: file:filehandle(),
+        csum_table            :: machi_csum_table:table(),
         tref                  :: reference(), %% timer ref
         ticks = 0             :: non_neg_integer(), %% ticks elapsed with no new operations
         ops = 0               :: non_neg_integer(), %% sum of all ops
         reads = {0, 0}        :: op_stats(),
         writes = {0, 0}       :: op_stats(),
-        appends = {0, 0}      :: op_stats(),
-        trims = {0, 0}        :: op_stats()
+        appends = {0, 0}      :: op_stats()
        }).
 %% Public API
@@ -106,9 +103,9 @@
 % @doc Start a new instance of the file proxy service. Takes the filename
 % and data directory as arguments. This function is typically called by the
 % `machi_file_proxy_sup:start_proxy/2' function.
--spec start_link(FluName :: atom(), Filename :: string(), DataDir :: string()) -> any().
-start_link(FluName, Filename, DataDir) ->
-    gen_server:start_link(?MODULE, {FluName, Filename, DataDir}, []).
+-spec start_link(Filename :: string(), DataDir :: string()) -> any().
+start_link(Filename, DataDir) ->
+    gen_server:start_link(?MODULE, {Filename, DataDir}, []).
 % @doc Request to stop an instance of the file proxy service.
 -spec stop(Pid :: pid()) -> ok.
@@ -132,31 +129,16 @@ sync(_Pid, Type) ->
     lager:warning("Bad arg to sync: Type ~p", [Type]),
     {error, bad_arg}.
-% @doc Read file at offset for length. This returns a sequence of all
-% written and trimmed (optional) bytes that overlaps with requested
-% offset and length. Borders are not aligned.
+% @doc Read file at offset for length
 -spec read(Pid :: pid(),
            Offset :: non_neg_integer(),
-           Length :: non_neg_integer()) ->
-    {ok, [{Filename::string(), Offset :: non_neg_integer(),
-           Data :: binary(), Checksum :: binary()}]} |
-    {error, Reason :: term()}.
-read(Pid, Offset, Length) ->
-    read(Pid, Offset, Length, #read_opts{}).
--spec read(Pid :: pid(),
-           Offset :: non_neg_integer(),
-           Length :: non_neg_integer(),
-           machi_dt:read_opts_x()) ->
-    {ok, [{Filename::string(), Offset :: non_neg_integer(),
-           Data :: binary(), Checksum :: binary()}]} |
-    {error, Reason :: term()}.
-read(Pid, Offset, Length, #read_opts{}=Opts)
-  when is_pid(Pid) andalso is_integer(Offset) andalso Offset >= 0
+           Length :: non_neg_integer()) -> {ok, Data :: binary(), Checksum :: binary()} |
+                                           {error, Reason :: term()}.
+read(Pid, Offset, Length) when is_pid(Pid) andalso is_integer(Offset) andalso Offset >= 0
   andalso is_integer(Length) andalso Length > 0 ->
-    gen_server:call(Pid, {read, Offset, Length, Opts}, ?TIMEOUT);
-read(_Pid, Offset, Length, Opts) ->
-    lager:warning("Bad args to read: Offset ~p, Length ~p, Options ~p", [Offset, Length, Opts]),
+    gen_server:call(Pid, {read, Offset, Length}, ?TIMEOUT);
+read(_Pid, Offset, Length) ->
+    lager:warning("Bad args to read: Offset ~p, Length ~p", [Offset, Length]),
     {error, bad_arg}.
 % @doc Write data at offset
@@ -183,12 +165,6 @@ write(_Pid, Offset, ClientMeta, _Data) ->
     lager:warning("Bad arg to write: Offset ~p, ClientMeta: ~p", [Offset, ClientMeta]),
     {error, bad_arg}.
-trim(Pid, Offset, Size, TriggerGC) when is_pid(Pid),
-                                        is_integer(Offset) andalso Offset >= 0,
-                                        is_integer(Size) andalso Size > 0,
-                                        is_boolean(TriggerGC) ->
-    gen_server:call(Pid, {trim ,Offset, Size, TriggerGC}, ?TIMEOUT).
 % @doc Append data
 -spec append(Pid :: pid(), Data :: binary()) -> {ok, File :: string(), Offset :: non_neg_integer()}
                                                 |{error, term()}.
@@ -212,14 +188,10 @@ append(_Pid, ClientMeta, Extra, _Data) ->
     lager:warning("Bad arg to append: ClientMeta ~p, Extra ~p", [ClientMeta, Extra]),
     {error, bad_arg}.
--spec checksum_list(pid()) -> {ok, list()}.
-checksum_list(Pid) ->
-    gen_server:call(Pid, {checksum_list}, ?TIMEOUT).
 %% gen_server callbacks
 % @private
-init({FluName, Filename, DataDir}) ->
+init({Filename, DataDir}) ->
     CsumFile = machi_util:make_checksum_filename(DataDir, Filename),
     {_, DPath} = machi_util:make_data_filename(DataDir, Filename),
     ok = filelib:ensure_dir(CsumFile),
@@ -228,11 +200,8 @@ init({FluName, Filename, DataDir}) ->
     UnwrittenBytes = machi_csum_table:calc_unwritten_bytes(CsumTable),
     {Eof, infinity} = lists:last(UnwrittenBytes),
     {ok, FHd} = file:open(DPath, [read, write, binary, raw]),
-    %% Reserve for EC and stuff, to prevent eof when read
-    ok = file:pwrite(FHd, 0, binary:copy(<<"so what?">>, ?MINIMUM_OFFSET div 8)),
     Tref = schedule_tick(),
     St = #state{
-      fluname = FluName,
       filename = Filename,
       data_dir = DataDir,
       data_path = DPath,
@@ -240,10 +209,10 @@ init({FluName, Filename, DataDir}) ->
       data_filehandle = FHd,
       csum_table = CsumTable,
       tref = Tref,
-      eof_position = erlang:max(Eof, ?MINIMUM_OFFSET),
-      max_file_size = machi_config:max_file_size()},
-    lager:debug("Starting file proxy ~p for filename ~p, state = ~p, Eof = ~p",
-                [self(), Filename, St, Eof]),
+      unwritten_bytes = UnwrittenBytes,
+      eof_position = Eof},
+    lager:debug("Starting file proxy ~p for filename ~p, state = ~p",
+                [self(), Filename, St]),
     {ok, St}.
 % @private
@@ -255,65 +224,67 @@ handle_call({sync, data}, _From, State = #state{ data_filehandle = FHd }) ->
     R = file:sync(FHd),
     {reply, R, State};
-handle_call({sync, csum}, _From, State) ->
-    %% machi_csum_table always writes in {sync, true} option, so here
-    %% explicit sync isn't actually needed.
-    {reply, ok, State};
+handle_call({sync, csum}, _From, State = #state{ csum_table = T }) ->
+    R = machi_csum_table:sync(T),
+    {reply, R, State};
 handle_call({sync, all}, _From, State = #state{filename = F,
                                                data_filehandle = FHd,
-                                               csum_table = _T
+                                               csum_table = T
                                               }) ->
-    Resp = case file:sync(FHd) of
-               ok ->
-                   ok;
-               Error ->
-                   lager:error("Got ~p syncing all files for file ~p",
-                               [Error, F]),
-                   Error
+    R = machi_csum_table:sync(T),
+    R1 = file:sync(FHd),
+    Resp = case {R, R1} of
+               {ok, ok} -> ok;
+               {ok, O1} ->
+                   lager:error("Got ~p during a data file sync on file ~p", [O1, F]),
+                   O1;
+               {O2, ok} ->
+                   lager:error("Got ~p during a csum file sync on file ~p", [O2, F]),
+                   O2;
+               {O3, O4} ->
+                   lager:error("Got ~p ~p syncing all files for file ~p", [O3, O4, F]),
+                   {O3, O4}
           end,
     {reply, Resp, State};
 %%% READS
-handle_call({read, _Offset, _Length, _}, _From,
+handle_call({read, _Offset, _Length}, _From,
             State = #state{wedged = true,
                            reads = {T, Err}
                           }) ->
     {reply, {error, wedged}, State#state{writes = {T + 1, Err + 1}}};
-handle_call({read, Offset, Length, _Opts}, _From,
+handle_call({read, Offset, Length}, _From,
             State = #state{eof_position = Eof,
                            reads = {T, Err}
-                          }) when Offset > Eof ->
-    %% make sure [Offset, Offset+Length) has an overlap with file range
+                          }) when Offset + Length > Eof ->
     lager:error("Read request at offset ~p for ~p bytes is past the last write offset of ~p",
                 [Offset, Length, Eof]),
     {reply, {error, not_written}, State#state{reads = {T + 1, Err + 1}}};
-handle_call({read, Offset, Length, Opts}, _From,
+handle_call({read, Offset, Length}, _From,
             State = #state{filename = F,
                            data_filehandle = FH,
                            csum_table = CsumTable,
+                           unwritten_bytes = U,
                            reads = {T, Err}
                           }) ->
-    %% TODO: use these options - NoChunk prevents reading from disks
-    %% NoChecksum doesn't check checksums
-    #read_opts{no_checksum=NoChecksum, no_chunk=NoChunk,
-               needs_trimmed=NeedsTrimmed} = Opts,
-    {Resp, NewErr} =
-        case do_read(FH, F, CsumTable, Offset, Length, NoChunk, NoChecksum) of
-            {ok, {[], []}} ->
-                {{error, not_written}, Err + 1};
-            {ok, {Chunks0, Trimmed0}} ->
-                Chunks = slice_both_side(Chunks0, Offset, Offset+Length),
-                Trimmed = case NeedsTrimmed of
-                              true -> Trimmed0;
-                              false -> []
-                          end,
-                {{ok, {Chunks, Trimmed}}, Err};
-            Error ->
-                lager:error("Can't read ~p, ~p at File ~p", [Offset, Length, F]),
-                {Error, Err + 1}
-        end,
+    Checksum = case machi_csum_table:find(CsumTable, Offset, Length) of
+                   {ok, Checksum0} ->
+                       Checksum0;
+                   _ ->
+                       undefined
+               end,
+    {Resp, NewErr} = case handle_read(FH, F, Checksum, Offset, Length, U) of
+                         {ok, Bytes, Csum} ->
+                             {{ok, Bytes, Csum}, Err};
+                         eof ->
+                             {{error, not_written}, Err + 1};
+                         Error ->
+                             {Error, Err + 1}
+                     end,
     {reply, Resp, State#state{reads = {T+1, NewErr}}};
@@ -327,75 +298,35 @@ handle_call({write, _Offset, _ClientMeta, _Data}, _From,
     {reply, {error, wedged}, State#state{writes = {T + 1, Err + 1}}};
 handle_call({write, Offset, ClientMeta, Data}, _From,
-            State = #state{filename = F,
+            State = #state{unwritten_bytes = U,
+                           filename = F,
                            writes = {T, Err},
                            data_filehandle = FHd,
-                           csum_table = CsumTable}) ->
+                           csum_table = CsumTable
+                          }) ->
     ClientCsumTag = proplists:get_value(client_csum_tag, ClientMeta, ?CSUM_TAG_NONE),
     ClientCsum = proplists:get_value(client_csum, ClientMeta, <<>>),
-    {Resp, NewErr} =
+    {Resp, NewErr, NewU} =
         case check_or_make_tagged_csum(ClientCsumTag, ClientCsum, Data) of
             {error, {bad_csum, Bad}} ->
                 lager:error("Bad checksum on write; client sent ~p, we computed ~p",
                             [ClientCsum, Bad]),
-                {{error, bad_checksum}, Err + 1};
+                {{error, bad_checksum}, Err + 1, U};
             TaggedCsum ->
-                case handle_write(FHd, CsumTable, F, TaggedCsum, Offset, Data) of
-                    ok ->
-                        {ok, Err};
+                case handle_write(FHd, CsumTable, F, TaggedCsum, Offset, Data, U) of
+                    {ok, NewU1} ->
+                        {ok, Err, NewU1};
                     Error ->
-                        {Error, Err + 1}
+                        {Error, Err + 1, U}
                 end
         end,
-    {NewEof, infinity} = lists:last(machi_csum_table:calc_unwritten_bytes(CsumTable)),
-    lager:debug("Wrote ~p bytes at ~p of file ~p, NewEOF = ~p~n",
-                [iolist_size(Data), Offset, F, NewEof]),
+    {NewEof, infinity} = lists:last(NewU),
     {reply, Resp, State#state{writes = {T+1, NewErr},
-                              eof_position = NewEof}};
-%%% TRIMS
-handle_call({trim, _Offset, _ClientMeta, _Data}, _From,
-            State = #state{wedged = true,
-                           writes = {T, Err}
-                          }) ->
-    {reply, {error, wedged}, State#state{writes = {T + 1, Err + 1}}};
-handle_call({trim, Offset, Size, _TriggerGC}, _From,
-            State = #state{data_filehandle=FHd,
-                           ops = Ops,
-                           trims = {T, Err},
-                           csum_table = CsumTable}) ->
-    case machi_csum_table:all_trimmed(CsumTable, Offset, Offset+Size) of
-        true ->
-            NewState = State#state{ops=Ops+1, trims={T, Err+1}},
-            %% All bytes of that range was already trimmed returns ok
-            %% here, not {error, trimmed}, which means the whole file
-            %% was trimmed
-            maybe_gc(ok, NewState);
-        false ->
-            LUpdate = maybe_regenerate_checksum(
-                        FHd,
-                        machi_csum_table:find_leftneighbor(CsumTable, Offset)),
-            RUpdate = maybe_regenerate_checksum(
-                        FHd,
-                        machi_csum_table:find_rightneighbor(CsumTable, Offset+Size)),
-            case machi_csum_table:trim(CsumTable, Offset, Size, LUpdate, RUpdate) of
-                ok ->
-                    {NewEof, infinity} = lists:last(machi_csum_table:calc_unwritten_bytes(CsumTable)),
-                    NewState = State#state{ops=Ops+1,
-                                           trims={T+1, Err},
-                                           eof_position=NewEof},
-                    maybe_gc(ok, NewState);
-                Error ->
-                    {reply, Error, State#state{ops=Ops+1, trims={T, Err+1}}}
-            end
-    end;
+                              eof_position = NewEof,
+                              unwritten_bytes = NewU
+                             }};
 %% APPENDS
@ -407,6 +338,7 @@ handle_call({append, _ClientMeta, _Extra, _Data}, _From,
handle_call({append, ClientMeta, Extra, Data}, _From, handle_call({append, ClientMeta, Extra, Data}, _From,
State = #state{eof_position = EofP, State = #state{eof_position = EofP,
unwritten_bytes = U,
filename = F, filename = F,
appends = {T, Err}, appends = {T, Err},
data_filehandle = FHd, data_filehandle = FHd,
@ -416,29 +348,25 @@ handle_call({append, ClientMeta, Extra, Data}, _From,
ClientCsumTag = proplists:get_value(client_csum_tag, ClientMeta, ?CSUM_TAG_NONE), ClientCsumTag = proplists:get_value(client_csum_tag, ClientMeta, ?CSUM_TAG_NONE),
ClientCsum = proplists:get_value(client_csum, ClientMeta, <<>>), ClientCsum = proplists:get_value(client_csum, ClientMeta, <<>>),
{Resp, NewErr} = {Resp, NewErr, NewU} =
case check_or_make_tagged_csum(ClientCsumTag, ClientCsum, Data) of case check_or_make_tagged_csum(ClientCsumTag, ClientCsum, Data) of
{error, {bad_csum, Bad}} -> {error, {bad_csum, Bad}} ->
lager:error("Bad checksum; client sent ~p, we computed ~p", lager:error("Bad checksum; client sent ~p, we computed ~p",
[ClientCsum, Bad]), [ClientCsum, Bad]),
{{error, bad_checksum}, Err + 1}; {{error, bad_checksum}, Err + 1, U};
TaggedCsum -> TaggedCsum ->
case handle_write(FHd, CsumTable, F, TaggedCsum, EofP, Data) of case handle_write(FHd, CsumTable, F, TaggedCsum, EofP, Data, U) of
ok -> {ok, NewU1} ->
{{ok, F, EofP}, Err}; {{ok, F, EofP}, Err, NewU1};
Error -> Error ->
{Error, Err + 1} {Error, Err + 1, U}
end end
end, end,
NewEof = EofP + byte_size(Data) + Extra, {NewEof, infinity} = lists:last(NewU),
lager:debug("appended ~p bytes at ~p file ~p. NewEofP = ~p",
[iolist_size(Data), EofP, F, NewEof]),
{reply, Resp, State#state{appends = {T+1, NewErr}, {reply, Resp, State#state{appends = {T+1, NewErr},
eof_position = NewEof}}; eof_position = NewEof + Extra,
unwritten_bytes = NewU
handle_call({checksum_list}, _FRom, State = #state{csum_table=T}) -> }};
All = machi_csum_table:all(T),
{reply, {ok, All}, State};
handle_call(Req, _From, State) -> handle_call(Req, _From, State) ->
lager:warning("Unknown call: ~p", [Req]), lager:warning("Unknown call: ~p", [Req]),
@ -450,23 +378,10 @@ handle_cast(Cast, State) ->
{noreply, State}. {noreply, State}.
% @private % @private
handle_info(tick, State = #state{fluname = FluName, handle_info(tick, State = #state{eof_position = Eof}) when Eof >= ?MAX_FILE_SIZE ->
filename = F, lager:notice("Eof position ~p >= max file size ~p. Shutting down.",
eof_position = Eof, [Eof, ?MAX_FILE_SIZE]),
max_file_size = MaxFileSize}) when Eof >= MaxFileSize -> {stop, file_rollover, State};
%% Older code halted here with {stop, file_rollover, State}.
%% However, there may be other requests in our mailbox already
%% and/or not yet delivered but in a race with the
%% machi_flu_metadata_mgr. So we close our eleveldb instance (to
%% avoid double-open attempt by a new file proxy proc), tell
%% machi_flu_metadata_mgr that we request a rollover, then stop.
%% terminate() will take care of forwarding messages that are
%% caught in the race.
lager:notice("Eof ~s position ~p >= max file size ~p. Shutting down.",
[F, Eof, MaxFileSize]),
State2 = close_files(State),
machi_flu_metadata_mgr:stop_proxy_pid_rollover(FluName, {file, F}),
{stop, normal, State2#state{rollover = true}};
%% XXX Is this a good idea? Need to think this through a bit. %% XXX Is this a good idea? Need to think this through a bit.
handle_info(tick, State = #state{wedged = true}) -> handle_info(tick, State = #state{wedged = true}) ->
@ -480,7 +395,7 @@ handle_info(tick, State = #state{
writes = {WT, WE}, writes = {WT, WE},
appends = {AT, AE} appends = {AT, AE}
}) when Ops > 100 andalso }) when Ops > 100 andalso
trunc(((RE+WE+AE) / (RT+WT+AT)) * 100) > ?TOO_MANY_ERRORS_RATIO -> trunc(((RE+WE+AE) / RT+WT+AT) * 100) > ?TOO_MANY_ERRORS_RATIO ->
Errors = RE + WE + AE, Errors = RE + WE + AE,
lager:notice("Got ~p errors. Shutting down.", [Errors]), lager:notice("Got ~p errors. Shutting down.", [Errors]),
{stop, too_many_errors, State}; {stop, too_many_errors, State};
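A worked example, assuming a hypothetical ?TOO_MANY_ERRORS_RATIO of 50: with RT+WT+AT = 120 total operations and RE+WE+AE = 70 errors, trunc((70 / 120) * 100) = 58 > 50, so the proxy stops with too_many_errors. Note the parenthesization: the older code on the right computes (RE+WE+AE) / RT + WT + AT, i.e. the division happens before the remaining additions, which inflates the ratio.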
@ -539,9 +454,9 @@ handle_info(Req, State) ->
{noreply, State}. {noreply, State}.
% @private % @private
terminate(Reason, State = #state{fluname = FluName, terminate(Reason, #state{filename = F,
filename = F, data_filehandle = FHd,
rollover = Rollover_p, csum_table = T,
reads = {RT, RE}, reads = {RT, RE},
writes = {WT, WE}, writes = {WT, WE},
appends = {AT, AE} appends = {AT, AE}
@ -551,12 +466,10 @@ terminate(Reason, State = #state{fluname = FluName,
lager:info(" Reads: ~p/~p", [RT, RE]), lager:info(" Reads: ~p/~p", [RT, RE]),
lager:info(" Writes: ~p/~p", [WT, WE]), lager:info(" Writes: ~p/~p", [WT, WE]),
lager:info("Appends: ~p/~p", [AT, AE]), lager:info("Appends: ~p/~p", [AT, AE]),
close_files(State), ok = file:sync(FHd),
if Rollover_p -> ok = file:close(FHd),
forward_late_messages(FluName, F, 500); ok = machi_csum_table:sync(T),
true -> ok = machi_csum_table:close(T),
ok
end,
ok. ok.
% @private % @private
@ -569,7 +482,7 @@ code_change(_OldVsn, State, _Extra) ->
schedule_tick() -> schedule_tick() ->
erlang:send_after(?TICK, self(), tick). erlang:send_after(?TICK, self(), tick).
-spec check_or_make_tagged_csum(Type :: non_neg_integer(), -spec check_or_make_tagged_csum(Type :: binary(),
Checksum :: binary(), Checksum :: binary(),
Data :: binary() ) -> binary() | Data :: binary() ) -> binary() |
{error, {bad_csum, Bad :: binary()}}. {error, {bad_csum, Bad :: binary()}}.
@ -586,31 +499,21 @@ check_or_make_tagged_csum(Tag, InCsum, Data) when Tag == ?CSUM_TAG_CLIENT_SHA;
false -> false ->
{error, {bad_csum, Csum}} {error, {bad_csum, Csum}}
end; end;
check_or_make_tagged_csum(?CSUM_TAG_SERVER_REGEN_SHA,
InCsum, Data) ->
Csum = machi_util:checksum_chunk(Data),
case Csum =:= InCsum of
true ->
machi_util:make_tagged_csum(server_regen_sha, Csum);
false ->
{error, {bad_csum, Csum}}
end;
check_or_make_tagged_csum(OtherTag, _ClientCsum, _Data) -> check_or_make_tagged_csum(OtherTag, _ClientCsum, _Data) ->
lager:warning("Unknown checksum tag ~p", [OtherTag]), lager:warning("Unknown checksum tag ~p", [OtherTag]),
{error, bad_checksum}. {error, bad_checksum}.
-spec do_read(FHd :: file:io_device(), -spec handle_read(FHd :: file:filehandle(),
Filename :: string(), Filename :: string(),
CsumTable :: machi_csum_table:table(), TaggedCsum :: undefined|binary(),
Offset :: non_neg_integer(), Offset :: non_neg_integer(),
Size :: non_neg_integer(), Size :: non_neg_integer(),
NoChunk :: boolean(), Unwritten :: [byte_sequence()]
NoChecksum :: boolean() ) -> {ok, Bytes :: binary(), Csum :: binary()} |
) -> {ok, {Chunks :: [{string(), Offset::non_neg_integer(), binary(), Csum :: binary()}], eof |
Trimmed :: [{string(), Offset::non_neg_integer(), Size::non_neg_integer()}]}} |
{error, bad_checksum} | {error, bad_checksum} |
{error, partial_read} | {error, partial_read} |
{error, file:posix()} | {error, not_written} |
{error, Other :: term() }. {error, Other :: term() }.
% @private Attempt a read operation on the given offset and length. % @private Attempt a read operation on the given offset and length.
% <li> % <li>
@ -625,36 +528,22 @@ check_or_make_tagged_csum(OtherTag, _ClientCsum, _Data) ->
% tuple is returned.</ul> % tuple is returned.</ul>
% </li> % </li>
% %
do_read(FHd, Filename, CsumTable, Offset, Size, _, _) -> % On success, `{ok, Bytes, Checksum}' is returned.
%% Note that find/3 only returns overlapping chunks; both borders handle_read(FHd, Filename, undefined, Offset, Size, U) ->
%% may be unaligned with the original Offset and Size. handle_read(FHd, Filename, machi_util:make_tagged_csum(none), Offset, Size, U);
ChunkCsums = machi_csum_table:find(CsumTable, Offset, Size),
read_all_ranges(FHd, Filename, ChunkCsums, [], []).
-spec read_all_ranges(file:io_device(), string(), handle_read(FHd, Filename, TaggedCsum, Offset, Size, U) ->
[{non_neg_integer(),non_neg_integer(),trimmed|binary()}], case is_byte_range_unwritten(Offset, Size, U) of
Chunks :: [{string(), Offset::non_neg_integer(), binary(), Csum::binary()}], true ->
Trimmed :: [{string(), Offset::non_neg_integer(), Size::non_neg_integer()}]) -> {error, not_written};
{ok, { false ->
Chunks :: [{string(), Offset::non_neg_integer(), binary(), Csum::binary()}], do_read(FHd, Filename, TaggedCsum, Offset, Size)
Trimmed :: [{string(), Offset::non_neg_integer(), Size::non_neg_integer()}]}} | end.
{error, term()|partial_read}.
read_all_ranges(_, _, [], ReadChunks, TrimmedChunks) ->
%% TODO: currently returns empty list of trimmed chunks
{ok, {lists:reverse(ReadChunks), lists:reverse(TrimmedChunks)}};
read_all_ranges(FHd, Filename, [{Offset, Size, trimmed}|T], ReadChunks, TrimmedChunks) -> do_read(FHd, Filename, TaggedCsum, Offset, Size) ->
read_all_ranges(FHd, Filename, T, ReadChunks, [{Filename, Offset, Size}|TrimmedChunks]);
read_all_ranges(FHd, Filename, [{Offset, Size, TaggedCsum}|T], ReadChunks, TrimmedChunks) ->
case file:pread(FHd, Offset, Size) of case file:pread(FHd, Offset, Size) of
eof -> eof ->
read_all_ranges(FHd, Filename, T, ReadChunks, TrimmedChunks); eof;
{ok, Bytes} when byte_size(Bytes) == Size, TaggedCsum =:= none ->
read_all_ranges(FHd, Filename, T,
[{Filename, Offset, Bytes,
machi_util:make_tagged_csum(none, <<>>)}|ReadChunks],
TrimmedChunks);
{ok, Bytes} when byte_size(Bytes) == Size -> {ok, Bytes} when byte_size(Bytes) == Size ->
{Tag, Ck} = machi_util:unmake_tagged_csum(TaggedCsum), {Tag, Ck} = machi_util:unmake_tagged_csum(TaggedCsum),
case check_or_make_tagged_csum(Tag, Ck, Bytes) of case check_or_make_tagged_csum(Tag, Ck, Bytes) of
@ -663,15 +552,11 @@ read_all_ranges(FHd, Filename, [{Offset, Size, TaggedCsum}|T], ReadChunks, Trimm
[Bad, Ck]), [Bad, Ck]),
{error, bad_checksum}; {error, bad_checksum};
TaggedCsum -> TaggedCsum ->
read_all_ranges(FHd, Filename, T, {ok, Bytes, TaggedCsum};
[{Filename, Offset, Bytes, TaggedCsum}|ReadChunks],
TrimmedChunks);
OtherCsum when Tag =:= ?CSUM_TAG_NONE ->
%% XXX FIXME: Should we return something other than %% XXX FIXME: Should we return something other than
%% {ok, ....} in this case? %% {ok, ....} in this case?
read_all_ranges(FHd, Filename, T, OtherCsum when Tag =:= ?CSUM_TAG_NONE ->
[{Filename, Offset, Bytes, OtherCsum}|ReadChunks], {ok, Bytes, OtherCsum}
TrimmedChunks)
end; end;
{ok, Partial} -> {ok, Partial} ->
lager:error("In file ~p, offset ~p, wanted to read ~p bytes, but got ~p", lager:error("In file ~p, offset ~p, wanted to read ~p bytes, but got ~p",
@ -683,13 +568,14 @@ read_all_ranges(FHd, Filename, [{Offset, Size, TaggedCsum}|T], ReadChunks, Trimm
{error, Other} {error, Other}
end. end.
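A sketch of how a caller unpacks the result of do_read/7 on the new (left-hand) side; the file name is hypothetical and the tuple shapes come from the -spec above:

    report_read(FHd, CsumTable) ->
        case do_read(FHd, "prefix^0001", CsumTable, 4096, 1024, false, false) of
            {ok, {Chunks, Trimmed}} ->
                [io:format("chunk ~s @~p: ~p bytes~n", [F, Off, byte_size(B)])
                 || {F, Off, B, _Csum} <- Chunks],
                [io:format("trimmed ~s @~p +~p~n", [F, Off, Sz])
                 || {F, Off, Sz} <- Trimmed],
                ok;
            {error, bad_checksum} ->
                %% Stored bytes no longer match their stored checksum.
                {error, bad_checksum};
            {error, _} = Err ->
                Err
        end.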
-spec handle_write( FHd :: file:io_device(), -spec handle_write( FHd :: file:filehandle(),
CsumTable :: machi_csum_table:table(), CsumTable :: machi_csum_table:table(),
Filename :: string(), Filename :: string(),
TaggedCsum :: binary(), TaggedCsum :: binary(),
Offset :: non_neg_integer(), Offset :: non_neg_integer(),
Data :: binary() Data :: binary(),
) -> ok | Unwritten :: [byte_sequence()]
) -> {ok, NewU :: [byte_sequence()]} |
{error, written} | {error, written} |
{error, Reason :: term()}. {error, Reason :: term()}.
% @private Implements the write and append operation. The first task is to % @private Implements the write and append operation. The first task is to
@ -699,210 +585,153 @@ read_all_ranges(FHd, Filename, [{Offset, Size, TaggedCsum}|T], ReadChunks, Trimm
% checksum and return a "fake" ok response as if the write had been performed % checksum and return a "fake" ok response as if the write had been performed
% when it hasn't really. % when it hasn't really.
% %
% If a write proceeds, the offset, size and checksum are written to a % If a write proceeds, the offset, size and checksum are written to a metadata
% metadata file, and the internal list of unwritten bytes is modified % file, and the internal list of unwritten bytes is modified to reflect the
% to reflect the just-performed write. This is then returned to the % just-performed write. This is then returned to the caller as
% caller as `ok' % `{ok, NewUnwritten}' where NewUnwritten is the revised unwritten byte list.
handle_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Data) -> handle_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Data, U) ->
Size = iolist_size(Data), Size = iolist_size(Data),
case is_byte_range_unwritten(Offset, Size, U) of
false ->
case machi_csum_table:find(CsumTable, Offset, Size) of case machi_csum_table:find(CsumTable, Offset, Size) of
[] -> %% Nothing should be there {error, trimmed} = Error ->
Error;
{error, unknown_chunk} ->
%% The specified range has some bytes written, but
%% it's not in the checksum table. Trust U and
%% return {error, written} since the range is in use.
{error, written};
{ok, TaggedCsum} ->
case do_read(FHd, Filename, TaggedCsum, Offset, Size) of
eof ->
lager:warning("This should never happen: got eof while reading at offset ~p in file ~p that's supposedly written",
[Offset, Filename]),
{error, server_insanity};
{ok, _, _} ->
{ok, U};
_ ->
{error, written}
end;
{ok, OtherCsum} ->
%% Got a checksum, but it doesn't match the data block's
lager:error("During a potential write at offset ~p in file ~p, a check for unwritten bytes gave us checksum ~p but the data we were trying to trying to write has checksum ~p",
[Offset, Filename, OtherCsum, TaggedCsum]),
{error, written}
end;
true ->
try try
do_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Size, Data) do_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Size, Data, U)
catch catch
%% XXX FIXME: be more specific on badmatch that might %%% XXX FIXME: be more specific on badmatch that might
%% occur around line 593 when we write the checksum %%% occur around line 593 when we write the checksum
%% file entry for the data blob we just put on the disk %%% file entry for the data blob we just put on the disk
error:Reason -> error:Reason ->
{error, Reason} {error, Reason}
end;
[{Offset, Size, TaggedCsum}] ->
case do_read(FHd, Filename, CsumTable, Offset, Size, false, false) of
{error, _} = E ->
lager:warning("This should never happen: got ~p while reading"
" at offset ~p in file ~p that's supposedly written",
[E, Offset, Filename]),
{error, server_insanity};
{ok, {[{_, Offset, Data, TaggedCsum}], _}} ->
%% TODO: what if different checksum got from do_read()?
ok;
{ok, _Other} ->
%% TODO: leave some debug/warning message here?
{error, written}
end;
[{Offset, Size, OtherCsum}] ->
%% Got a checksum, but it doesn't match the data block's
lager:error("During a potential write at offset ~p in file ~p,"
" a check for unwritten bytes gave us checksum ~p"
" but the data we were trying to write has checksum ~p",
[Offset, Filename, OtherCsum, TaggedCsum]),
{error, written};
_Chunks ->
%% TODO: Do we try to read all contiguous chunks to see
%% whether their total checksum matches the client-provided checksum?
case machi_csum_table:any_trimmed(CsumTable, Offset, Size) of
true ->
%% At least one byte is trimmed. Besides, do we
%% have to return the exact written bytes? No. Clients
%% must issue read_chunk() with the needs_trimmed
%% option set to true
{error, trimmed};
false ->
%% No byte is trimmed, but at least one byte is written
{error, written}
end end
end. end.
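In terms of the public proxy API (machi_file_proxy:write/4, used by do_server_write_chunk further below), the clauses above work out to the following contract sketch; Pid, the offset, and the payloads are hypothetical:

    Meta = [],                                        %% no client checksum
    ok               = machi_file_proxy:write(Pid, 1024, Meta, <<"data">>),
    %% Rewriting identical bytes at a written offset reads back and acks...
    ok               = machi_file_proxy:write(Pid, 1024, Meta, <<"data">>),
    %% ...but different bytes at a written offset are refused.
    {error, written} = machi_file_proxy:write(Pid, 1024, Meta, <<"blob">>).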
% @private Implements the disk writes for both the write and append % @private Implements the disk writes for both the write and append
% operation. % operation.
-spec do_write( FHd :: file:io_device(), -spec do_write( FHd :: file:descriptor(),
CsumTable :: machi_csum_table:table(), CsumTable :: machi_csum_table:table(),
Filename :: string(), Filename :: string(),
TaggedCsum :: binary(), TaggedCsum :: binary(),
Offset :: non_neg_integer(), Offset :: non_neg_integer(),
Size :: non_neg_integer(), Size :: non_neg_integer(),
Data :: binary() Data :: binary(),
) -> ok | {error, Reason :: term()}. Unwritten :: [byte_sequence()]
do_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Size, Data) -> ) -> {ok, NewUnwritten :: [byte_sequence()]} |
{error, Reason :: term()}.
do_write(FHd, CsumTable, Filename, TaggedCsum, Offset, Size, Data, U) ->
case file:pwrite(FHd, Offset, Data) of case file:pwrite(FHd, Offset, Data) of
ok -> ok ->
lager:debug("Successful write in file ~p at offset ~p, length ~p", lager:debug("Successful write in file ~p at offset ~p, length ~p",
[Filename, Offset, Size]), [Filename, Offset, Size]),
ok = machi_csum_table:write(CsumTable, Offset, Size, TaggedCsum),
%% Overlapping chunk; calculate checksum NewU = update_unwritten(Offset, Size, U),
%% read {LOffset, Offset - LOffset} and make csum lager:debug("Successful write to checksum file for ~p; unwritten bytes are now: ~p",
%% as server_sha [Filename, NewU]),
LUpdate = maybe_regenerate_checksum( {ok, NewU};
FHd,
machi_csum_table:find_leftneighbor(CsumTable, Offset)),
RUpdate = maybe_regenerate_checksum(
FHd,
machi_csum_table:find_rightneighbor(CsumTable, Offset+Size)),
ok = machi_csum_table:write(CsumTable, Offset, Size,
TaggedCsum, LUpdate, RUpdate),
lager:debug("Successful write to checksum file for ~p",
[Filename]),
ok;
Other -> Other ->
lager:error("Got ~p during write to file ~p at offset ~p, length ~p", lager:error("Got ~p during write to file ~p at offset ~p, length ~p",
[Other, Filename, Offset, Size]), [Other, Filename, Offset, Size]),
{error, Other} {error, Other}
end. end.
%% @doc Trim both the left and right borders of chunks so they fit into the -spec is_byte_range_unwritten( Offset :: non_neg_integer(),
%% half-open range [LeftPos, RightPos). TODO: write unit tests for this function. Size :: pos_integer(),
Unwritten :: [byte_sequence()] ) -> boolean().
%% Dialyzer 'can never match': slice_both_side([], _, _) -> % @private Given an offset and a size, return `true' if a byte range has
%% []; % <b>not</b> been written. Otherwise, return `false'.
slice_both_side([], _, _) -> is_byte_range_unwritten(Offset, Size, Unwritten) ->
[]; case Unwritten of
slice_both_side([{F, Offset, Chunk, _Csum}|L], LeftPos, RightPos) [] ->
when Offset < LeftPos andalso LeftPos < RightPos -> lager:critical("Unwritten byte list has 0 entries! This should never happen."),
TrashLen = (LeftPos - Offset), false;
<<_:TrashLen/binary, NewChunk/binary>> = Chunk, [{Eof, infinity}] ->
NewChecksum = machi_util:make_tagged_csum(?CSUM_TAG_SERVER_REGEN_SHA_ATOM, NewChunk), Offset >= Eof;
NewH = {F, LeftPos, NewChunk, NewChecksum},
slice_both_side([NewH|L], LeftPos, RightPos);
slice_both_side(Chunks, LeftPos, RightPos) when LeftPos =< RightPos ->
%% TODO: optimize
[{F, Offset, Chunk, _Csum}|L] = lists:reverse(Chunks),
Size = iolist_size(Chunk),
if RightPos < Offset + Size ->
NewSize = RightPos - Offset,
<<NewChunk:NewSize/binary, _/binary>> = Chunk,
NewChecksum = machi_util:make_tagged_csum(?CSUM_TAG_SERVER_REGEN_SHA_ATOM, NewChunk),
lists:reverse([{F, Offset, NewChunk, NewChecksum}|L]);
true ->
Chunks
end.
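The TODO above asks for unit tests; here is a minimal EUnit-style sketch of the intended boundary behavior (the input checksum is ignored by the function, so the regenerated one is matched loosely):

    slice_both_side_test() ->
        In = [{"f", 0, <<"0123456789">>, ignored}],
        %% Keep only bytes [3, 7): 3 shaved off the left, 3 off the right.
        [{"f", 3, <<"3456">>, _NewCsum}] = slice_both_side(In, 3, 7).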
maybe_regenerate_checksum(_, undefined) ->
undefined;
maybe_regenerate_checksum(_, {_, _, trimmed} = Change) ->
Change;
maybe_regenerate_checksum(FHd, {Offset, Size, _Csum}) ->
case file:pread(FHd, Offset, Size) of
eof ->
error({eof, Offset, Size});
{ok, Bytes} when byte_size(Bytes) =:= Size ->
TaggedCsum = machi_util:make_tagged_csum(server_regen_sha,
machi_util:checksum_chunk(Bytes)),
{Offset, Size, TaggedCsum};
Error ->
throw(Error)
end.
%% GC: make sure unwritten bytes = [{Eof, infinity}] and Eof is > max
%% file size; walk through the checksum table and make sure all chunks
%% are trimmed; then unlink the file.
-spec maybe_gc(term(), #state{}) ->
{reply, term(), #state{}} | {stop, normal, term(), #state{}}.
maybe_gc(Reply, S = #state{eof_position = Eof,
max_file_size = MaxFileSize}) when Eof < MaxFileSize ->
lager:debug("The file is still small; not trying GC (Eof, MaxFileSize) = (~p, ~p)~n",
[Eof, MaxFileSize]),
{reply, Reply, S};
maybe_gc(Reply, S = #state{fluname=FluName,
data_filehandle = FHd,
data_dir = DataDir,
filename = Filename,
eof_position = Eof,
csum_table=CsumTable}) ->
case machi_csum_table:all_trimmed(CsumTable, ?MINIMUM_OFFSET, Eof) of
true ->
lager:debug("GC? Let's do it: ~p.~n", [Filename]),
%% Before unlinking a file, we must inform
%% machi_flu_filename_mgr that this file is
%% deleted and mark it as "trimmed" to avoid
%% filename reuse and resurrection. Garbage may
%% remain if a process crashes, but it should be
%% recovered at filename_mgr startup.
%% Also, the notification must happen *before* the
%% file proxy deletes the files.
ok = machi_flu_metadata_mgr:trim_file(FluName, {file, Filename}),
ok = file:close(FHd),
{_, DPath} = machi_util:make_data_filename(DataDir, Filename),
ok = file:delete(DPath),
machi_csum_table:delete(CsumTable),
{stop, normal, Reply,
S#state{data_filehandle=undefined,
csum_table=undefined}};
false ->
{reply, Reply, S}
end.
close_files(State = #state{data_filehandle = FHd,
csum_table = T}) ->
case FHd of
undefined ->
noop; %% file deleted
_ -> _ ->
ok = file:sync(FHd), case lookup_unwritten(Offset, Size, Unwritten) of
ok = file:close(FHd) {ok, _} -> true;
end, not_found -> false
case T of end
undefined ->
noop; %% file deleted
_ ->
ok = machi_csum_table:close(T)
end,
State#state{data_filehandle = undefined, csum_table = undefined}.
forward_late_messages(FluName, F, Timeout) ->
receive
M ->
case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, F}) of
{ok, Pid} ->
Pid ! M;
{error, trimmed} ->
lager:error("TODO: FLU ~p file ~p reports trimmed status "
"when forwarding ~P\n",
[FluName, F, M, 20])
end,
forward_late_messages(FluName, F, Timeout)
after Timeout ->
ok
end. end.
-spec lookup_unwritten( Offset :: non_neg_integer(),
Size :: pos_integer(),
Unwritten :: [byte_sequence()]
) -> {ok, byte_sequence()} | not_found.
% @private Given an offset and a size, scan the list of unwritten bytes and
% look for a "hole" where a write might be allowed if any exist. If a
% suitable byte sequence is found, the function returns a tuple of {ok,
% {Position, Space}} is returned. `not_found' is returned if no suitable
% space is located.
lookup_unwritten(_Offset, _Size, []) ->
not_found;
lookup_unwritten(Offset, _Size, [H={Pos, infinity}|_Rest]) when Offset >= Pos ->
{ok, H};
lookup_unwritten(Offset, Size, [H={Pos, Space}|_Rest])
when Offset >= Pos andalso Offset < Pos+Space
andalso Size =< (Space - (Offset - Pos)) ->
{ok, H};
lookup_unwritten(Offset, Size, [_H|Rest]) ->
%% These are not the droids you're looking for.
lookup_unwritten(Offset, Size, Rest).
%%% If the pos is greater than offset + size then we're done; end early.
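Worked examples of the hole search, as an EUnit-style sketch:

    lookup_unwritten_test() ->
        U = [{0, 1024}, {4096, infinity}],
        {ok, {0, 1024}}        = lookup_unwritten(100, 200, U),
        not_found              = lookup_unwritten(2000, 10, U),
        {ok, {4096, infinity}} = lookup_unwritten(5000, 1, U).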
-spec update_unwritten( Offset :: non_neg_integer(),
Size :: pos_integer(),
Unwritten :: [byte_sequence()] ) -> NewUnwritten :: [byte_sequence()].
% @private Given an offset, a size and the unwritten byte list, return an updated
% and sorted unwritten byte list accounting for any completed write operation.
update_unwritten(Offset, Size, Unwritten) ->
case lookup_unwritten(Offset, Size, Unwritten) of
not_found ->
lager:error("Couldn't find byte sequence tuple for a write which earlier found a valid spot to write!!! This should never happen!"),
Unwritten;
{ok, {Offset, Size}} ->
%% we neatly filled in our hole...
lists:keydelete(Offset, 1, Unwritten);
{ok, S={Pos, _}} ->
lists:sort(lists:keydelete(Pos, 1, Unwritten) ++
update_byte_range(Offset, Size, S))
end.
-spec update_byte_range( Offset :: non_neg_integer(),
Size :: pos_integer(),
Sequence :: byte_sequence() ) -> Updates :: [byte_sequence()].
% @private Given an offset and size and a byte sequence tuple where a
% write took place, return a list of updates to the list of unwritten bytes
% accounting for the space occupied by the just completed write.
update_byte_range(Offset, Size, {Eof, infinity}) when Offset == Eof ->
[{Offset + Size, infinity}];
update_byte_range(Offset, Size, {Eof, infinity}) when Offset > Eof ->
[{Eof, (Offset - Eof)}, {Offset+Size, infinity}];
update_byte_range(Offset, Size, {Pos, Space}) when Offset == Pos andalso Size < Space ->
[{Offset + Size, Space - Size}];
update_byte_range(Offset, Size, {Pos, Space}) when Offset > Pos ->
[{Pos, Offset - Pos}, {Offset+Size, ( (Pos+Space) - (Offset + Size) )}].
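A worked trace of how the unwritten byte list evolves under these clauses, as an EUnit-style sketch:

    update_unwritten_test() ->
        %% Append at EOF: the infinite tail simply moves right.
        [{1024, infinity}] = update_unwritten(0, 1024, [{0, infinity}]),
        %% Write past EOF: a finite hole is left behind.
        [{1024, 1024}, {3072, infinity}] =
            update_unwritten(2048, 1024, [{1024, infinity}]),
        %% Fill that finite hole exactly: it disappears.
        [{3072, infinity}] =
            update_unwritten(1024, 1024, [{1024, 1024}, {3072, infinity}]).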

View file

@ -44,8 +44,7 @@ start_link(FluName) ->
supervisor:start_link({local, make_proxy_name(FluName)}, ?MODULE, []). supervisor:start_link({local, make_proxy_name(FluName)}, ?MODULE, []).
start_proxy(FluName, DataDir, Filename) -> start_proxy(FluName, DataDir, Filename) ->
supervisor:start_child(make_proxy_name(FluName), supervisor:start_child(make_proxy_name(FluName), [Filename, DataDir]).
[FluName, Filename, DataDir]).
init([]) -> init([]) ->
SupFlags = {simple_one_for_one, 1000, 10}, SupFlags = {simple_one_for_one, 1000, 10},

View file

@ -39,12 +39,11 @@
get_unfit_list/1, update_local_down_list/3, get_unfit_list/1, update_local_down_list/3,
add_admin_down/3, delete_admin_down/2, add_admin_down/3, delete_admin_down/2,
send_fitness_update_spam/3, send_fitness_update_spam/3,
send_spam_to_everyone/1, send_spam_to_everyone/1]).
trigger_early_adjustment/2]).
%% gen_server callbacks %% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
terminate/2, code_change/3, format_status/2]). terminate/2, code_change/3]).
-record(state, { -record(state, {
my_flu_name :: atom() | binary(), my_flu_name :: atom() | binary(),
@ -82,19 +81,12 @@ send_fitness_update_spam(Pid, FromName, Dict) ->
send_spam_to_everyone(Pid) -> send_spam_to_everyone(Pid) ->
gen_server:call(Pid, {send_spam_to_everyone}, infinity). gen_server:call(Pid, {send_spam_to_everyone}, infinity).
%% @doc For testing purposes, we don't want a test to wait for
%% wall-clock time to elapse before the fitness server makes a
%% down->up status decision.
trigger_early_adjustment(Pid, FLU) ->
Pid ! {adjust_down_list, FLU}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
init([{MyFluName}|Args]) -> init([{MyFluName}|Args]) ->
RegName = machi_flu_psup:make_fitness_regname(MyFluName), RegName = machi_flu_psup:make_fitness_regname(MyFluName),
register(RegName, self()), register(RegName, self()),
{ok, _} = timer:send_interval(5000, debug_dump), timer:send_interval(5000, debug_dump),
UseSimulatorP = proplists:get_value(use_partition_simulator, Args, false), UseSimulatorP = proplists:get_value(use_partition_simulator, Args, false),
{ok, #state{my_flu_name=MyFluName, reg_name=RegName, {ok, #state{my_flu_name=MyFluName, reg_name=RegName,
partition_simulator_p=UseSimulatorP, partition_simulator_p=UseSimulatorP,
@ -108,29 +100,24 @@ handle_call({update_local_down_list, Down, MembersDict}, _From,
#state{my_flu_name=MyFluName, pending_map=OldMap, #state{my_flu_name=MyFluName, pending_map=OldMap,
local_down=OldDown, members_dict=OldMembersDict, local_down=OldDown, members_dict=OldMembersDict,
admin_down=AdminDown}=S) -> admin_down=AdminDown}=S) ->
verbose("FITNESS: ~w has down suspect ~w\n", [MyFluName, Down]),
NewMap = store_in_map(OldMap, MyFluName, erlang:now(), Down, NewMap = store_in_map(OldMap, MyFluName, erlang:now(), Down,
AdminDown, [props_yo]), AdminDown, [props_yo]),
S2 = if Down == OldDown, MembersDict == OldMembersDict -> S2 = if Down == OldDown, MembersDict == OldMembersDict ->
%% Do nothing only if both are equal. If members_dict is %% Do nothing only if both are equal. If members_dict is
%% changing, that's sufficient reason to spam. %% changing, that's sufficient reason to spam.
S; ok;
true -> true ->
do_map_change(NewMap, [MyFluName], MembersDict, S) do_map_change(NewMap, [MyFluName], MembersDict, S)
end, end,
{reply, ok, S2#state{local_down=Down}}; {reply, ok, S2#state{local_down=Down}};
handle_call({add_admin_down, DownFLU, DownProps}, _From, handle_call({add_admin_down, DownFLU, DownProps}, _From,
#state{my_flu_name=MyFluName, #state{local_down=OldDown, admin_down=AdminDown}=S) ->
local_down=OldDown, admin_down=AdminDown}=S) ->
verbose("FITNESS: ~w add admin down ~w\n", [MyFluName, DownFLU]),
NewAdminDown = [{DownFLU,DownProps}|lists:keydelete(DownFLU, 1, AdminDown)], NewAdminDown = [{DownFLU,DownProps}|lists:keydelete(DownFLU, 1, AdminDown)],
S3 = finish_admin_down(erlang:now(), OldDown, NewAdminDown, S3 = finish_admin_down(erlang:now(), OldDown, NewAdminDown,
[props_yo], S), [props_yo], S),
{reply, ok, S3}; {reply, ok, S3};
handle_call({delete_admin_down, DownFLU}, _From, handle_call({delete_admin_down, DownFLU}, _From,
#state{my_flu_name=MyFluName, #state{local_down=OldDown, admin_down=AdminDown}=S) ->
local_down=OldDown, admin_down=AdminDown}=S) ->
verbose("FITNESS: ~w delete admin down ~w\n", [MyFluName, DownFLU]),
NewAdminDown = lists:keydelete(DownFLU, 1, AdminDown), NewAdminDown = lists:keydelete(DownFLU, 1, AdminDown),
S3 = finish_admin_down(erlang:now(), OldDown, NewAdminDown, S3 = finish_admin_down(erlang:now(), OldDown, NewAdminDown,
[props_yo], S), [props_yo], S),
@ -148,8 +135,7 @@ handle_call(_Request, _From, S) ->
handle_cast(_Msg, S) -> handle_cast(_Msg, S) ->
{noreply, S}. {noreply, S}.
handle_info({adjust_down_list, FLU}, #state{my_flu_name=MyFluName, handle_info({adjust_down_list, FLU}, #state{active_unfit=ActiveUnfit}=S) ->
active_unfit=ActiveUnfit}=S) ->
NewUnfit = make_unfit_list(S), NewUnfit = make_unfit_list(S),
Added_to_new = NewUnfit -- ActiveUnfit, Added_to_new = NewUnfit -- ActiveUnfit,
Dropped_from_new = ActiveUnfit -- NewUnfit, Dropped_from_new = ActiveUnfit -- NewUnfit,
@ -185,16 +171,14 @@ handle_info({adjust_down_list, FLU}, #state{my_flu_name=MyFluName,
%% hiding where we need this extra round of messages to *remove* a %% hiding where we need this extra round of messages to *remove* a
%% FLU from the active_unfit list? %% FLU from the active_unfit list?
_ = schedule_adjust_messages(lists:usort(Added_to_new ++ Dropped_from_new)), schedule_adjust_messages(lists:usort(Added_to_new ++ Dropped_from_new)),
case {lists:member(FLU,Added_to_new), lists:member(FLU,Dropped_from_new)} of case {lists:member(FLU,Added_to_new), lists:member(FLU,Dropped_from_new)} of
{true, true} -> {true, true} ->
error({bad, ?MODULE, ?LINE, FLU, ActiveUnfit, NewUnfit}); error({bad, ?MODULE, ?LINE, FLU, ActiveUnfit, NewUnfit});
{true, false} -> {true, false} ->
NewActive = wrap_active(MyFluName,lists:usort(ActiveUnfit++[FLU])), {noreply, S#state{active_unfit=lists:usort(ActiveUnfit ++ [FLU])}};
{noreply, S#state{active_unfit=NewActive}};
{false, true} -> {false, true} ->
NewActive = wrap_active(MyFluName,ActiveUnfit--[FLU]), {noreply, S#state{active_unfit=ActiveUnfit -- [FLU]}};
{noreply, S#state{active_unfit=NewActive}};
{false, false} -> {false, false} ->
{noreply, S} {noreply, S}
end; end;
@ -209,11 +193,6 @@ handle_info(_Info, S) ->
terminate(_Reason, _S) -> terminate(_Reason, _S) ->
ok. ok.
format_status(_Opt, [_PDict, Status]) ->
Fields = record_info(fields, state),
[_Name | Values] = tuple_to_list(Status),
lists:zip(Fields, Values).
code_change(_OldVsn, S, _Extra) -> code_change(_OldVsn, S, _Extra) ->
{ok, S}. {ok, S}.
@ -311,8 +290,8 @@ proxy_pid(Name, #state{proxies_dict=ProxiesDict}) ->
calc_unfit(All_list, HosedAnnotations) -> calc_unfit(All_list, HosedAnnotations) ->
G = digraph:new(), G = digraph:new(),
_ = [digraph:add_vertex(G, V) || V <- All_list], [digraph:add_vertex(G, V) || V <- All_list],
_ = [digraph:add_edge(G, V1, V2) || {V1, problem_with, V2} <- HosedAnnotations], [digraph:add_edge(G, V1, V2) || {V1, problem_with, V2} <- HosedAnnotations],
calc_unfit2(lists:sort(digraph:vertices(G)), G). calc_unfit2(lists:sort(digraph:vertices(G)), G).
calc_unfit2([], G) -> calc_unfit2([], G) ->
@ -366,7 +345,7 @@ do_map_change(NewMap, DontSendList, MembersDict,
#state{my_flu_name=_MyFluName, pending_map=OldMap}=S) -> #state{my_flu_name=_MyFluName, pending_map=OldMap}=S) ->
send_spam(NewMap, DontSendList, MembersDict, S), send_spam(NewMap, DontSendList, MembersDict, S),
ChangedServers = find_changed_servers(OldMap, NewMap, _MyFluName), ChangedServers = find_changed_servers(OldMap, NewMap, _MyFluName),
_ = schedule_adjust_messages(ChangedServers), schedule_adjust_messages(ChangedServers),
%% _OldMapV = map_value(OldMap), %% _OldMapV = map_value(OldMap),
%% _MapV = map_value(NewMap), %% _MapV = map_value(NewMap),
%% io:format(user, "TODO: ~w async tick trigger/scheduling... ~w for:\n" %% io:format(user, "TODO: ~w async tick trigger/scheduling... ~w for:\n"
@ -432,18 +411,6 @@ map_value(Map) ->
map_merge(Map1, Map2) -> map_merge(Map1, Map2) ->
?MAP:merge(Map1, Map2). ?MAP:merge(Map1, Map2).
wrap_active(MyFluName, L) ->
verbose("FITNESS: ~w has new down list ~w\n", [MyFluName, L]),
L.
verbose(Fmt, Args) ->
case application:get_env(machi, fitness_verbose) of
{ok, true} ->
error_logger:info_msg(Fmt, Args);
_ ->
ok
end.
-ifdef(TEST). -ifdef(TEST).
dt_understanding_test() -> dt_understanding_test() ->

View file

@ -21,9 +21,7 @@
%% @doc The Machi FLU file server + file location sequencer. %% @doc The Machi FLU file server + file location sequencer.
%% %%
%% This module implements only the Machi FLU file server and its %% This module implements only the Machi FLU file server and its
%% implicit sequencer together with listener, append server, %% implicit sequencer.
%% file management and file proxy processes.
%% Please see the EDoc "Overview" for details about the FLU as a %% Please see the EDoc "Overview" for details about the FLU as a
%% primitive file server process vs. the larger Machi design of a FLU %% primitive file server process vs. the larger Machi design of a FLU
%% as a sequencer + file server + chain manager group of processes. %% as a sequencer + file server + chain manager group of processes.
@ -56,29 +54,34 @@
-ifdef(TEST). -ifdef(TEST).
-include_lib("eunit/include/eunit.hrl"). -include_lib("eunit/include/eunit.hrl").
-export([timing_demo_test_COMMENTED_/0, sort_2lines/2]). % Just to suppress warning
-endif. % TEST -endif. % TEST
-define(SERVER_CMD_READ_TIMEOUT, 600*1000).
-export([start_link/1, stop/1, -export([start_link/1, stop/1,
update_wedge_state/3, wedge_myself/2]). update_wedge_state/3, wedge_myself/2]).
-export([make_projection_server_regname/1, -export([make_listener_regname/1, make_projection_server_regname/1]).
ets_table_name/1]).
%% TODO: remove or replace in OTP way after gen_*'ified
-export([main2/4]).
-define(INIT_TIMEOUT, 60*1000). -record(state, {
flu_name :: atom(),
proj_store :: pid(),
witness = false :: boolean(),
append_pid :: pid(),
tcp_port :: non_neg_integer(),
data_dir :: string(),
wedged = true :: boolean(),
etstab :: ets:tid(),
epoch_id :: 'undefined' | machi_dt:epoch_id(),
pb_mode = undefined :: 'undefined' | 'high' | 'low',
high_clnt :: 'undefined' | pid(),
props = [] :: list() % proplist
}).
start_link([{FluName, TcpPort, DataDir}|Rest]) start_link([{FluName, TcpPort, DataDir}|Rest])
when is_atom(FluName), is_integer(TcpPort), is_list(DataDir) -> when is_atom(FluName), is_integer(TcpPort), is_list(DataDir) ->
proc_lib:start_link(?MODULE, main2, [FluName, TcpPort, DataDir, Rest], {ok, spawn_link(fun() -> main2(FluName, TcpPort, DataDir, Rest) end)}.
?INIT_TIMEOUT).
stop(RegName) when is_atom(RegName) -> stop(Pid) ->
case whereis(RegName) of
undefined -> ok;
Pid -> stop(Pid)
end;
stop(Pid) when is_pid(Pid) ->
case erlang:is_process_alive(Pid) of case erlang:is_process_alive(Pid) of
true -> true ->
Pid ! killme, Pid ! killme,
@ -87,14 +90,19 @@ stop(Pid) when is_pid(Pid) ->
error error
end. end.
update_wedge_state(PidSpec, Boolean, EpochId) -> update_wedge_state(PidSpec, Boolean, EpochId)
machi_flu1_append_server:int_update_wedge_state(PidSpec, Boolean, EpochId). when (Boolean == true orelse Boolean == false), is_tuple(EpochId) ->
PidSpec ! {wedge_state_change, Boolean, EpochId}.
wedge_myself(PidSpec, EpochId) -> wedge_myself(PidSpec, EpochId)
machi_flu1_append_server:int_wedge_myself(PidSpec, EpochId). when is_tuple(EpochId) ->
PidSpec ! {wedge_myself, EpochId}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%
ets_table_name(FluName) when is_atom(FluName) ->
list_to_atom(atom_to_list(FluName) ++ "_epoch").
main2(FluName, TcpPort, DataDir, Props) -> main2(FluName, TcpPort, DataDir, Props) ->
{SendAppendPidToProj_p, ProjectionPid} = {SendAppendPidToProj_p, ProjectionPid} =
case proplists:get_value(projection_store_registered_name, Props) of case proplists:get_value(projection_store_registered_name, Props) of
@ -121,17 +129,27 @@ main2(FluName, TcpPort, DataDir, Props) ->
{true, undefined} {true, undefined}
end, end,
Witness_p = proplists:get_value(witness_mode, Props, false), Witness_p = proplists:get_value(witness_mode, Props, false),
S0 = #state{flu_name=FluName,
{ok, AppendPid} = start_append_server(FluName, Witness_p, Wedged_p, EpochId), proj_store=ProjectionPid,
tcp_port=TcpPort,
data_dir=DataDir,
wedged=Wedged_p,
witness=Witness_p,
etstab=ets_table_name(FluName),
epoch_id=EpochId,
props=Props},
AppendPid = start_append_server(S0, self()),
receive
append_server_ack -> ok
end,
if SendAppendPidToProj_p -> if SendAppendPidToProj_p ->
machi_projection_store:set_wedge_notify_pid(ProjectionPid, AppendPid); machi_projection_store:set_wedge_notify_pid(ProjectionPid,
AppendPid);
true -> true ->
ok ok
end, end,
{ok, ListenerPid} = start_listen_server(FluName, TcpPort, Witness_p, DataDir, S1 = S0#state{append_pid=AppendPid},
ets_table_name(FluName), ProjectionPid, ListenPid = start_listen_server(S1),
Props),
%% io:format(user, "Listener started: ~w~n", [{FluName, ListenerPid}]),
Config_e = machi_util:make_config_filename(DataDir, "unused"), Config_e = machi_util:make_config_filename(DataDir, "unused"),
ok = filelib:ensure_dir(Config_e), ok = filelib:ensure_dir(Config_e),
@ -141,24 +159,540 @@ main2(FluName, TcpPort, DataDir, Props) ->
ok = filelib:ensure_dir(Projection_e), ok = filelib:ensure_dir(Projection_e),
put(flu_flu_name, FluName), put(flu_flu_name, FluName),
put(flu_append_pid, AppendPid), put(flu_append_pid, S1#state.append_pid),
put(flu_projection_pid, ProjectionPid), put(flu_projection_pid, ProjectionPid),
put(flu_listen_pid, ListenerPid), put(flu_listen_pid, ListenPid),
proc_lib:init_ack({ok, self()}),
receive killme -> ok end, receive killme -> ok end,
(catch exit(AppendPid, kill)), (catch exit(S1#state.append_pid, kill)),
(catch exit(ProjectionPid, kill)), (catch exit(ProjectionPid, kill)),
(catch exit(ListenerPid, kill)), (catch exit(ListenPid, kill)),
ok. ok.
start_append_server(FluName, Witness_p, Wedged_p, EpochId) -> start_listen_server(S) ->
machi_flu1_subsup:start_append_server(FluName, Witness_p, Wedged_p, EpochId). proc_lib:spawn_link(fun() -> run_listen_server(S) end).
start_listen_server(FluName, TcpPort, Witness_p, DataDir, EtsTab, ProjectionPid, start_append_server(S, AckPid) ->
Props) -> FluPid = self(),
machi_flu1_subsup:start_listener(FluName, TcpPort, Witness_p, DataDir, proc_lib:spawn_link(fun() -> run_append_server(FluPid, AckPid, S) end).
EtsTab, ProjectionPid, Props).
run_listen_server(#state{flu_name=FluName, tcp_port=TcpPort}=S) ->
register(make_listener_regname(FluName), self()),
SockOpts = ?PB_PACKET_OPTS ++
[{reuseaddr, true}, {mode, binary}, {active, false},
{backlog,8192}],
case gen_tcp:listen(TcpPort, SockOpts) of
{ok, LSock} ->
listen_server_loop(LSock, S);
Else ->
error_logger:warning_msg("~s:run_listen_server: "
"listen to TCP port ~w: ~w\n",
[?MODULE, TcpPort, Else]),
exit({?MODULE, run_listen_server, tcp_port, TcpPort, Else})
end.
run_append_server(FluPid, AckPid, #state{flu_name=Name,
wedged=Wedged_p,epoch_id=EpochId}=S) ->
%% Reminder: Name is the "main" name of the FLU, i.e., no suffix
register(Name, self()),
TID = ets:new(ets_table_name(Name),
[set, protected, named_table, {read_concurrency, true}]),
ets:insert(TID, {epoch, {Wedged_p, EpochId}}),
AckPid ! append_server_ack,
append_server_loop(FluPid, S#state{etstab=TID}).
listen_server_loop(LSock, S) ->
{ok, Sock} = gen_tcp:accept(LSock),
spawn_link(fun() -> net_server_loop(Sock, S) end),
listen_server_loop(LSock, S).
append_server_loop(FluPid, #state{wedged=Wedged_p,
witness=Witness_p,
epoch_id=OldEpochId, flu_name=FluName}=S) ->
receive
{seq_append, From, _Prefix, _Chunk, _CSum, _Extra, _EpochID}
when Witness_p ->
%% The FLU's net_server_loop() process ought to filter all
%% witness states, but we'll keep this clause for extra
%% paranoia.
From ! witness,
append_server_loop(FluPid, S);
{seq_append, From, _Prefix, _Chunk, _CSum, _Extra, _EpochID}
when Wedged_p ->
From ! wedged,
append_server_loop(FluPid, S);
{seq_append, From, Prefix, Chunk, CSum, Extra, EpochID} ->
%% Old is the one from our state, plain old 'EpochID' comes
%% from the client.
case OldEpochId == EpochID of
true ->
spawn(fun() ->
append_server_dispatch(From, Prefix, Chunk, CSum, Extra, FluName, EpochID)
end);
false ->
From ! {error, bad_epoch}
end,
append_server_loop(FluPid, S);
{wedge_myself, WedgeEpochId} ->
if not Wedged_p andalso WedgeEpochId == OldEpochId ->
true = ets:insert(S#state.etstab,
{epoch, {true, OldEpochId}}),
%% Tell my chain manager that it might want to react to
%% this new world.
Chmgr = machi_chain_manager1:make_chmgr_regname(FluName),
spawn(fun() ->
catch machi_chain_manager1:trigger_react_to_env(Chmgr)
end),
append_server_loop(FluPid, S#state{wedged=true});
true ->
append_server_loop(FluPid, S)
end;
{wedge_state_change, Boolean, {NewEpoch, _}=NewEpochId} ->
OldEpoch = case OldEpochId of {OldE, _} -> OldE;
undefined -> -1
end,
if NewEpoch >= OldEpoch ->
true = ets:insert(S#state.etstab,
{epoch, {Boolean, NewEpochId}}),
append_server_loop(FluPid, S#state{wedged=Boolean,
epoch_id=NewEpochId});
true ->
append_server_loop(FluPid, S)
end;
{wedge_status, FromPid} ->
#state{wedged=Wedged_p, epoch_id=EpochId} = S,
FromPid ! {wedge_status_reply, Wedged_p, EpochId},
append_server_loop(FluPid, S);
Else ->
io:format(user, "append_server_loop: WHA? ~p\n", [Else]),
append_server_loop(FluPid, S)
end.
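Seen from a sender, the message contract of this loop is as follows (a sketch that mirrors do_server_append_chunk2/7 below, including its 10-second partition timeout):

    FluName ! {seq_append, self(), Prefix, Chunk, TaggedCSum, Extra, EpochID},
    receive
        {assignment, Offset, File} -> {ok, {Offset, iolist_size(Chunk), File}};
        witness                    -> {error, bad_arg};
        wedged                     -> {error, wedged};
        {error, bad_epoch}         -> {error, bad_epoch}
    after 10*1000 ->
        {error, partition}
    end.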
net_server_loop(Sock, S) ->
case gen_tcp:recv(Sock, 0, ?SERVER_CMD_READ_TIMEOUT) of
{ok, Bin} ->
{RespBin, S2} =
case machi_pb:decode_mpb_ll_request(Bin) of
LL_req when LL_req#mpb_ll_request.do_not_alter == 2 ->
{R, NewS} = do_pb_ll_request(LL_req, S),
{maybe_encode_response(R), mode(low, NewS)};
_ ->
HL_req = machi_pb:decode_mpb_request(Bin),
1 = HL_req#mpb_request.do_not_alter,
{R, NewS} = do_pb_hl_request(HL_req, make_high_clnt(S)),
{machi_pb:encode_mpb_response(R), mode(high, NewS)}
end,
if RespBin == async_no_response ->
net_server_loop(Sock, S2);
true ->
case gen_tcp:send(Sock, RespBin) of
ok ->
net_server_loop(Sock, S2);
{error, _} ->
(catch gen_tcp:close(Sock)),
exit(normal)
end
end;
{error, SockError} ->
Msg = io_lib:format("Socket error ~w", [SockError]),
R = #mpb_ll_response{req_id= <<>>,
generic=#mpb_errorresp{code=1, msg=Msg}},
_Resp = machi_pb:encode_mpb_ll_response(R),
%% TODO: Weird that sometimes neither catch nor try/catch
%% can prevent OTP's SASL from logging an error here.
%% Error in process <0.545.0> with exit value: {badarg,[{erlang,port_command,.......
%% TODO: is this what causes the intermittent PULSE deadlock errors?
%% _ = (catch gen_tcp:send(Sock, _Resp)), timer:sleep(1000),
(catch gen_tcp:close(Sock)),
exit(normal)
end.
maybe_encode_response(async_no_response=X) ->
X;
maybe_encode_response(R) ->
machi_pb:encode_mpb_ll_response(R).
mode(Mode, #state{pb_mode=undefined}=S) ->
S#state{pb_mode=Mode};
mode(_, S) ->
S.
make_high_clnt(#state{high_clnt=undefined}=S) ->
{ok, Proj} = machi_projection_store:read_latest_projection(
S#state.proj_store, private),
Ps = [P_srvr || {_, P_srvr} <- orddict:to_list(
Proj#projection_v1.members_dict)],
{ok, Clnt} = machi_cr_client:start_link(Ps),
S#state{high_clnt=Clnt};
make_high_clnt(S) ->
S.
do_pb_ll_request(#mpb_ll_request{req_id=ReqID}, #state{pb_mode=high}=S) ->
Result = {high_error, 41, "Low protocol request while in high mode"},
{machi_pb_translate:to_pb_response(ReqID, unused, Result), S};
do_pb_ll_request(PB_request, S) ->
Req = machi_pb_translate:from_pb_request(PB_request),
{ReqID, Cmd, Result, S2} =
case Req of
{RqID, {LowCmd, _}=CMD}
when LowCmd == low_proj;
LowCmd == low_wedge_status; LowCmd == low_list_files ->
%% Skip wedge check for projection commands!
%% Skip wedge check for these unprivileged commands
{Rs, NewS} = do_pb_ll_request3(CMD, S),
{RqID, CMD, Rs, NewS};
{RqID, CMD} ->
EpochID = element(2, CMD), % by common convention
{Rs, NewS} = do_pb_ll_request2(EpochID, CMD, S),
{RqID, CMD, Rs, NewS}
end,
{machi_pb_translate:to_pb_response(ReqID, Cmd, Result), S2}.
do_pb_ll_request2(EpochID, CMD, S) ->
{Wedged_p, CurrentEpochID} = ets:lookup_element(S#state.etstab, epoch, 2),
if Wedged_p == true ->
{{error, wedged}, S#state{epoch_id=CurrentEpochID}};
is_tuple(EpochID)
andalso
EpochID /= CurrentEpochID ->
{Epoch, _} = EpochID,
{CurrentEpoch, _} = CurrentEpochID,
if Epoch < CurrentEpoch ->
ok;
true ->
%% We're at same epoch # but different checksum, or
%% we're at a newer/bigger epoch #.
wedge_myself(S#state.flu_name, CurrentEpochID),
ok
end,
{{error, bad_epoch}, S#state{epoch_id=CurrentEpochID}};
true ->
do_pb_ll_request3(CMD, S#state{epoch_id=CurrentEpochID})
end.
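Condensed, the epoch check above behaves like this sketch, where epoch IDs are {EpochNumber, Checksum} tuples, the wedged check at the top of the function takes precedence, and wedge_then_bad_epoch stands for the wedge_myself/2 side effect followed by the {error, bad_epoch} reply:

    classify_epoch(Same, Same)                    -> proceed;
    classify_epoch(E, _Cur) when not is_tuple(E)  -> proceed;   %% unversioned request
    classify_epoch({E, _}, {Cur, _}) when E < Cur -> bad_epoch;
    classify_epoch(_, _)                          -> wedge_then_bad_epoch.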
%% Witness status does not matter below.
do_pb_ll_request3({low_echo, _BogusEpochID, Msg}, S) ->
{Msg, S};
do_pb_ll_request3({low_auth, _BogusEpochID, _User, _Pass}, S) ->
{-6, S};
do_pb_ll_request3({low_wedge_status, _EpochID}, S) ->
{do_server_wedge_status(S), S};
do_pb_ll_request3({low_proj, PCMD}, S) ->
{do_server_proj_request(PCMD, S), S};
%% Witness status *matters* below
do_pb_ll_request3({low_append_chunk, _EpochID, PKey, Prefix, Chunk, CSum_tag,
CSum, ChunkExtra},
#state{witness=false}=S) ->
{do_server_append_chunk(PKey, Prefix, Chunk, CSum_tag, CSum,
ChunkExtra, S), S};
do_pb_ll_request3({low_write_chunk, _EpochID, File, Offset, Chunk, CSum_tag,
CSum},
#state{witness=false}=S) ->
{do_server_write_chunk(File, Offset, Chunk, CSum_tag, CSum, S), S};
do_pb_ll_request3({low_read_chunk, _EpochID, File, Offset, Size, Opts},
#state{witness=false}=S) ->
{do_server_read_chunk(File, Offset, Size, Opts, S), S};
do_pb_ll_request3({low_checksum_list, _EpochID, File},
#state{witness=false}=S) ->
{do_server_checksum_listing(File, S), S};
do_pb_ll_request3({low_list_files, _EpochID},
#state{witness=false}=S) ->
{do_server_list_files(S), S};
do_pb_ll_request3({low_delete_migration, _EpochID, File},
#state{witness=false}=S) ->
{do_server_delete_migration(File, S), S};
do_pb_ll_request3({low_trunc_hack, _EpochID, File},
#state{witness=false}=S) ->
{do_server_trunc_hack(File, S), S};
do_pb_ll_request3(_, #state{witness=true}=S) ->
{{error, bad_arg}, S}. % TODO: new status code??
do_pb_hl_request(#mpb_request{req_id=ReqID}, #state{pb_mode=low}=S) ->
Result = {low_error, 41, "High protocol request while in low mode"},
{machi_pb_translate:to_pb_response(ReqID, unused, Result), S};
do_pb_hl_request(PB_request, S) ->
{ReqID, Cmd} = machi_pb_translate:from_pb_request(PB_request),
{Result, S2} = do_pb_hl_request2(Cmd, S),
{machi_pb_translate:to_pb_response(ReqID, Cmd, Result), S2}.
do_pb_hl_request2({high_echo, Msg}, S) ->
{Msg, S};
do_pb_hl_request2({high_auth, _User, _Pass}, S) ->
{-77, S};
do_pb_hl_request2({high_append_chunk, _todoPK, Prefix, ChunkBin, TaggedCSum,
ChunkExtra}, #state{high_clnt=Clnt}=S) ->
Chunk = {TaggedCSum, ChunkBin},
Res = machi_cr_client:append_chunk_extra(Clnt, Prefix, Chunk,
ChunkExtra),
{Res, S};
do_pb_hl_request2({high_write_chunk, File, Offset, ChunkBin, TaggedCSum},
#state{high_clnt=Clnt}=S) ->
Chunk = {TaggedCSum, ChunkBin},
Res = machi_cr_client:write_chunk(Clnt, File, Offset, Chunk),
{Res, S};
do_pb_hl_request2({high_read_chunk, File, Offset, Size},
#state{high_clnt=Clnt}=S) ->
Res = machi_cr_client:read_chunk(Clnt, File, Offset, Size),
{Res, S};
do_pb_hl_request2({high_trim_chunk, File, Offset, Size},
#state{high_clnt=Clnt}=S) ->
Res = machi_cr_client:trim_chunk(Clnt, File, Offset, Size),
{Res, S};
do_pb_hl_request2({high_checksum_list, File}, #state{high_clnt=Clnt}=S) ->
Res = machi_cr_client:checksum_list(Clnt, File),
{Res, S};
do_pb_hl_request2({high_list_files}, #state{high_clnt=Clnt}=S) ->
Res = machi_cr_client:list_files(Clnt),
{Res, S}.
do_server_proj_request({get_latest_epochid, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:get_latest_epochid(ProjStore, ProjType);
do_server_proj_request({read_latest_projection, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:read_latest_projection(ProjStore, ProjType);
do_server_proj_request({read_projection, ProjType, Epoch},
#state{proj_store=ProjStore}) ->
machi_projection_store:read(ProjStore, ProjType, Epoch);
do_server_proj_request({write_projection, ProjType, Proj},
#state{flu_name=FluName, proj_store=ProjStore}) ->
if Proj#projection_v1.epoch_number == ?SPAM_PROJ_EPOCH ->
%% io:format(user, "DBG ~s ~w ~P\n", [?MODULE, ?LINE, Proj, 5]),
Chmgr = machi_flu_psup:make_fitness_regname(FluName),
[Map] = Proj#projection_v1.dbg,
catch machi_fitness:send_fitness_update_spam(
Chmgr, Proj#projection_v1.author_server, Map);
true ->
catch machi_projection_store:write(ProjStore, ProjType, Proj)
end;
do_server_proj_request({get_all_projections, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:get_all_projections(ProjStore, ProjType);
do_server_proj_request({list_all_projections, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:list_all_projections(ProjStore, ProjType);
do_server_proj_request({kick_projection_reaction},
#state{flu_name=FluName}) ->
%% Tell my chain manager that it might want to react to
%% this new world.
Chmgr = machi_chain_manager1:make_chmgr_regname(FluName),
spawn(fun() ->
catch machi_chain_manager1:trigger_react_to_env(Chmgr)
end),
async_no_response.
do_server_append_chunk(PKey, Prefix, Chunk, CSum_tag, CSum,
ChunkExtra, S) ->
case sanitize_prefix(Prefix) of
ok ->
do_server_append_chunk2(PKey, Prefix, Chunk, CSum_tag, CSum,
ChunkExtra, S);
_ ->
{error, bad_arg}
end.
do_server_append_chunk2(_PKey, Prefix, Chunk, CSum_tag, Client_CSum,
ChunkExtra, #state{flu_name=FluName,
epoch_id=EpochID}=_S) ->
%% TODO: Do anything with PKey?
try
TaggedCSum = check_or_make_tagged_checksum(CSum_tag, Client_CSum,Chunk),
R = {seq_append, self(), Prefix, Chunk, TaggedCSum, ChunkExtra, EpochID},
FluName ! R,
receive
{assignment, Offset, File} ->
Size = iolist_size(Chunk),
{ok, {Offset, Size, File}};
witness ->
{error, bad_arg};
wedged ->
{error, wedged}
after 10*1000 ->
{error, partition}
end
catch
throw:{bad_csum, _CS} ->
{error, bad_checksum};
error:badarg ->
error_logger:error_msg("Message send to ~p gave badarg, make certain server is running with correct registered name\n", [?MODULE]),
{error, bad_arg}
end.
do_server_write_chunk(File, Offset, Chunk, CSum_tag, CSum, #state{flu_name=FluName}) ->
case sanitize_file_string(File) of
ok ->
{ok, Pid} = machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}),
Meta = [{client_csum_tag, CSum_tag}, {client_csum, CSum}],
machi_file_proxy:write(Pid, Offset, Meta, Chunk);
_ ->
{error, bad_arg}
end.
do_server_read_chunk(File, Offset, Size, _Opts, #state{flu_name=FluName})->
%% TODO: Look inside Opts someday.
case sanitize_file_string(File) of
ok ->
{ok, Pid} = machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}),
case machi_file_proxy:read(Pid, Offset, Size) of
%% XXX FIXME
%% For now we are omitting the checksum data because it blows up
%% protobufs.
{ok, Data, _Csum} -> {ok, Data};
Other -> Other
end;
_ ->
{error, bad_arg}
end.
do_server_checksum_listing(File, #state{flu_name=FluName, data_dir=DataDir}=_S) ->
case sanitize_file_string(File) of
ok ->
ok = sync_checksum_file(FluName, File),
CSumPath = machi_util:make_checksum_filename(DataDir, File),
%% TODO: If this file is legitimately bigger than our
%% {packet_size,N} limit, then we'll have a difficult time, eh?
case file:read_file(CSumPath) of
{ok, Bin} ->
if byte_size(Bin) > (?PB_MAX_MSG_SIZE - 1024) ->
%% TODO: Fix this limitation by streaming the
%% binary in multiple smaller PB messages.
%% Also, don't read the file all at once. ^_^
error_logger:error_msg("~s:~w oversize ~s\n",
[?MODULE, ?LINE, CSumPath]),
{error, bad_arg};
true ->
{ok, Bin}
end;
{error, enoent} ->
{error, no_such_file};
{error, _} ->
{error, bad_arg}
end;
_ ->
{error, bad_arg}
end.
do_server_list_files(#state{data_dir=DataDir}=_S) ->
{_, WildPath} = machi_util:make_data_filename(DataDir, ""),
Files = filelib:wildcard("*", WildPath),
{ok, [begin
{ok, FI} = file:read_file_info(WildPath ++ "/" ++ File),
Size = FI#file_info.size,
{Size, File}
end || File <- Files]}.
do_server_wedge_status(S) ->
{Wedged_p, CurrentEpochID0} = ets:lookup_element(S#state.etstab, epoch, 2),
CurrentEpochID = if CurrentEpochID0 == undefined ->
?DUMMY_PV1_EPOCH;
true ->
CurrentEpochID0
end,
{Wedged_p, CurrentEpochID}.
do_server_delete_migration(File, #state{data_dir=DataDir}=_S) ->
case sanitize_file_string(File) of
ok ->
{_, Path} = machi_util:make_data_filename(DataDir, File),
case file:delete(Path) of
ok ->
ok;
{error, enoent} ->
{error, no_such_file};
_ ->
{error, bad_arg}
end;
_ ->
{error, bad_arg}
end.
do_server_trunc_hack(File, #state{data_dir=DataDir}=_S) ->
case sanitize_file_string(File) of
ok ->
{_, Path} = machi_util:make_data_filename(DataDir, File),
case file:open(Path, [read, write, binary, raw]) of
{ok, FH} ->
try
{ok, ?MINIMUM_OFFSET} = file:position(FH,
?MINIMUM_OFFSET),
ok = file:truncate(FH),
ok
after
file:close(FH)
end;
{error, enoent} ->
{error, no_such_file};
_ ->
{error, bad_arg}
end;
_ ->
{error, bad_arg}
end.
append_server_dispatch(From, Prefix, Chunk, CSum, Extra, FluName, EpochId) ->
Result = case handle_append(Prefix, Chunk, CSum, Extra, FluName, EpochId) of
{ok, File, Offset} ->
{assignment, Offset, File};
Other ->
Other
end,
From ! Result,
exit(normal).
handle_append(_Prefix, <<>>, _Csum, _Extra, _FluName, _EpochId) ->
{error, bad_arg};
handle_append(Prefix, Chunk, Csum, Extra, FluName, EpochId) ->
Res = machi_flu_filename_mgr:find_or_make_filename_from_prefix(FluName, EpochId, {prefix, Prefix}),
case Res of
{file, F} ->
{ok, Pid} = machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, F}),
{Tag, CS} = machi_util:unmake_tagged_csum(Csum),
Meta = [{client_csum_tag, Tag}, {client_csum, CS}],
machi_file_proxy:append(Pid, Meta, Extra, Chunk);
Error ->
Error
end.
sanitize_file_string(Str) ->
case has_no_prohibited_chars(Str) andalso machi_util:is_valid_filename(Str) of
true -> ok;
false -> error
end.
has_no_prohibited_chars(Str) ->
case re:run(Str, "/") of
nomatch ->
true;
_ ->
false   %% a "/" is prohibited
end.
sanitize_prefix(Prefix) ->
%% We are using '^' as our component delimiter
case re:run(Prefix, "/|\\^") of
nomatch ->
ok;
_ ->
error
end.
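A quick EUnit-style sketch of the prefix sanitizer ("/" and "^" are the prohibited characters, per the comment above):

    sanitize_prefix_test() ->
        ok    = sanitize_prefix(<<"goodprefix">>),
        error = sanitize_prefix(<<"bad/prefix">>),
        error = sanitize_prefix(<<"bad^prefix">>).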
sync_checksum_file(FluName, File) ->
%% We just look up the pid here - we don't start a proxy server. If
%% there isn't a pid for this file, then we just return ok. The
%% csum file was synced when the proxy was shut down.
%%
%% If there *is* a pid, we call the sync function to ensure the
%% csum file is sync'd before we return (or return an error if we
%% get one).
case machi_flu_metadata_mgr:lookup_proxy_pid(FluName, {file, File}) of
undefined ->
ok;
Pid ->
machi_file_proxy:sync(Pid, csum)
end.
make_listener_regname(BaseName) ->
list_to_atom(atom_to_list(BaseName) ++ "_listener").
%% This is the name of the projection store that is spawned by the %% This is the name of the projection store that is spawned by the
%% *flu*, for use primarily in testing scenarios. In normal use, we %% *flu*, for use primarily in testing scenarios. In normal use, we
@ -170,8 +704,26 @@ start_listen_server(FluName, TcpPort, Witness_p, DataDir, EtsTab, ProjectionPid,
make_projection_server_regname(BaseName) -> make_projection_server_regname(BaseName) ->
list_to_atom(atom_to_list(BaseName) ++ "_pstore"). list_to_atom(atom_to_list(BaseName) ++ "_pstore").
ets_table_name(FluName) when is_atom(FluName) -> check_or_make_tagged_checksum(?CSUM_TAG_NONE, _Client_CSum, Chunk) ->
list_to_atom(atom_to_list(FluName) ++ "_epoch"). %% TODO: If the client was foolish enough to use
%% this type of non-checksum, then the client gets
%% what it deserves wrt data integrity, alas. In
%% the client-side Chain Replication method, each
%% server will calculate this independently, which
%% isn't exactly what ought to happen for best data
%% integrity checking. In server-side CR, the csum
%% should be calculated by the head and passed down
%% the chain together with the value.
CS = machi_util:checksum_chunk(Chunk),
machi_util:make_tagged_csum(server_sha, CS);
check_or_make_tagged_checksum(?CSUM_TAG_CLIENT_SHA, Client_CSum, Chunk) ->
CS = machi_util:checksum_chunk(Chunk),
if CS == Client_CSum ->
machi_util:make_tagged_csum(server_sha,
Client_CSum);
true ->
throw({bad_csum, CS})
end.
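A sketch of the two outcomes, EUnit-style (this assumes machi_util:checksum_chunk/1 yields the 20-byte SHA-1 that the comparison expects):

    check_or_make_tagged_checksum_test() ->
        Chunk = <<"hello, world">>,
        Good  = machi_util:checksum_chunk(Chunk),
        _Tag  = check_or_make_tagged_checksum(?CSUM_TAG_CLIENT_SHA, Good, Chunk),
        ?assertThrow({bad_csum, _},
                     check_or_make_tagged_checksum(?CSUM_TAG_CLIENT_SHA,
                                                   <<0:160>>, Chunk)).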
-ifdef(TEST). -ifdef(TEST).
@ -213,7 +765,7 @@ timing_demo_test2() ->
lists:foldl(fun(X, _) -> lists:foldl(fun(X, _) ->
B = machi_checksums:encode_csum_file_entry_hex(X, 100, CSum), B = machi_checksums:encode_csum_file_entry_hex(X, 100, CSum),
%% file:write(ZZZ, [B, 10]), %% file:write(ZZZ, [B, 10]),
decode_csum_file_entry_hex(list_to_binary(B)) machi_checksums:decode_csum_file_entry_hex(list_to_binary(B))
end, x, Xs) end, x, Xs)
end), end),
io:format(user, "~.3f sec\n", [HexUSec / 1000000]), io:format(user, "~.3f sec\n", [HexUSec / 1000000]),

View file

@ -1,193 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% @doc Machi FLU1 append serialization server process
-module(machi_flu1_append_server).
-behavior(gen_server).
-include("machi.hrl").
-include("machi_projection.hrl").
-ifdef(TEST).
-include_lib("eunit/include/eunit.hrl").
-endif. % TEST
-export([start_link/4]).
-export([init/1]).
-export([handle_call/3, handle_cast/2, handle_info/2,
terminate/2, code_change/3]).
-export([int_update_wedge_state/3, int_wedge_myself/2]).
-export([current_state/1, format_state/1]).
-record(state, {
flu_name :: atom(),
witness = false :: boolean(),
wedged = true :: boolean(),
etstab :: ets:tid(),
epoch_id :: 'undefined' | machi_dt:epoch_id()
}).
-define(INIT_TIMEOUT, 60*1000).
-define(CALL_TIMEOUT, 60*1000).
-spec start_link(pv1_server(), boolean(), boolean(),
undefined | machi_dt:epoch_id()) -> {ok, pid()}.
start_link(Fluname, Witness_p, Wedged_p, EpochId) ->
%% Reminder: Name is the "main" name of the FLU, i.e., no suffix
gen_server:start_link({local, Fluname},
?MODULE, [Fluname, Witness_p, Wedged_p, EpochId],
[{timeout, ?INIT_TIMEOUT}]).
-spec current_state(atom() | pid()) -> term().
current_state(PidSpec) ->
gen_server:call(PidSpec, current_state, ?CALL_TIMEOUT).
format_state(State) ->
Fields = record_info(fields, state),
[_Name | Values] = tuple_to_list(State),
lists:zip(Fields, Values).
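%% Usage sketch (assumes an append server registered as 'a'): the zipped
%% proplist follows the #state{} field order above.
%% ```
%% S = machi_flu1_append_server:current_state(a),
%% [{flu_name,a}, {witness,_}, {wedged,_}, {etstab,_}, {epoch_id,_}] =
%%     machi_flu1_append_server:format_state(S).
%% '''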
int_update_wedge_state(PidSpec, Boolean, EpochId)
when is_boolean(Boolean), is_tuple(EpochId) ->
gen_server:cast(PidSpec, {wedge_state_change, Boolean, EpochId}).
int_wedge_myself(PidSpec, EpochId)
when is_tuple(EpochId) ->
gen_server:cast(PidSpec, {wedge_myself, EpochId}).
init([Fluname, Witness_p, Wedged_p, EpochId]) ->
TID = ets:new(machi_flu1:ets_table_name(Fluname),
[set, protected, named_table, {read_concurrency, true}]),
ets:insert(TID, {epoch, {Wedged_p, EpochId}}),
{ok, #state{flu_name=Fluname, witness=Witness_p, wedged=Wedged_p,
etstab=TID, epoch_id=EpochId}}.
handle_call({seq_append, _From2, _NSInfo, _EpochID, _Prefix, _Chunk, _TCSum, _Opts},
_From, #state{witness=true}=S) ->
%% The FLU's machi_flu1_net_server process ought to filter all
%% witness states, but we'll keep this clause for extra
%% paranoia.
{reply, witness, S};
handle_call({seq_append, _From2, _NSInfo, _EpochID, _Prefix, _Chunk, _TCSum, _Opts},
_From, #state{wedged=true}=S) ->
{reply, wedged, S};
handle_call({seq_append, _From2, NSInfo, EpochID,
Prefix, Chunk, TCSum, Opts},
From, #state{flu_name=FluName, epoch_id=OldEpochId}=S) ->
%% Old is the one from our state, plain old 'EpochID' comes
%% from the client.
_ = case OldEpochId of
EpochID ->
spawn(fun() ->
append_server_dispatch(From, NSInfo,
Prefix, Chunk, TCSum, Opts,
FluName, EpochID)
end),
{noreply, S};
_ ->
{reply, {error, bad_epoch}, S}
end;
%% TODO: Who sends this message?
handle_call(wedge_status, _From,
#state{wedged=Wedged_p, epoch_id=EpochId} = S) ->
{reply, {wedge_status_reply, Wedged_p, EpochId}, S};
handle_call(current_state, _From, S) ->
{reply, S, S};
handle_call(Else, From, S) ->
io:format(user, "~s:handle_call: WHA? from=~w ~w\n", [?MODULE, From, Else]),
{noreply, S}.
handle_cast({wedge_myself, WedgeEpochId},
#state{flu_name=FluName, wedged=Wedged_p, epoch_id=OldEpochId}=S) ->
if not Wedged_p andalso WedgeEpochId == OldEpochId ->
true = ets:insert(S#state.etstab,
{epoch, {true, OldEpochId}}),
%% Tell my chain manager that it might want to react to
%% this new world.
Chmgr = machi_chain_manager1:make_chmgr_regname(FluName),
spawn(fun() ->
catch machi_chain_manager1:trigger_react_to_env(Chmgr)
end),
{noreply, S#state{wedged=true}};
true ->
{noreply, S}
end;
handle_cast({wedge_state_change, Boolean, {NewEpoch, _}=NewEpochId},
#state{epoch_id=OldEpochId}=S) ->
OldEpoch = case OldEpochId of {OldE, _} -> OldE;
undefined -> -1
end,
if NewEpoch >= OldEpoch ->
true = ets:insert(S#state.etstab,
{epoch, {Boolean, NewEpochId}}),
{noreply, S#state{wedged=Boolean, epoch_id=NewEpochId}};
true ->
{noreply, S}
end;
handle_cast(Else, S) ->
io:format(user, "~s:handle_cast: WHA? ~p\n", [?MODULE, Else]),
{noreply, S}.
handle_info(Else, S) ->
io:format(user, "~s:handle_info: WHA? ~p\n", [?MODULE, Else]),
{noreply, S}.
terminate(normal, _S) ->
ok;
terminate(Reason, _S) ->
lager:warning("~s:terminate: ~w", [?MODULE, Reason]),
ok.
code_change(_OldVsn, S, _Extra) ->
{ok, S}.
append_server_dispatch(From, NSInfo,
Prefix, Chunk, TCSum, Opts, FluName, EpochId) ->
Result = case handle_append(NSInfo,
Prefix, Chunk, TCSum, Opts, FluName, EpochId) of
{ok, File, Offset} ->
{assignment, Offset, File};
Other ->
Other
end,
_ = gen_server:reply(From, Result),
ok.
handle_append(NSInfo,
Prefix, Chunk, TCSum, Opts, FluName, EpochId) ->
Res = machi_flu_filename_mgr:find_or_make_filename_from_prefix(
FluName, EpochId, {prefix, Prefix}, NSInfo),
case Res of
{file, F} ->
case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, F}) of
{ok, Pid} ->
{Tag, CS} = machi_util:unmake_tagged_csum(TCSum),
Meta = [{client_csum_tag, Tag}, {client_csum, CS}],
Extra = Opts#append_opts.chunk_extra,
machi_file_proxy:append(Pid, Meta, Extra, Chunk);
{error, trimmed} = E ->
E
end;
Error ->
Error
end.
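%% Round-trip sketch: the net server submits an append as a gen_server call,
%% and the reply is delivered from the spawned worker via gen_server:reply/2
%% (hypothetical values):
%% ```
%% R = {seq_append, self(), NSInfo, EpochID,
%%      <<"prefix">>, <<"chunk">>, TaggedCSum, #append_opts{}},
%% {assignment, Offset, File} = gen_server:call(FluName, R, 10*1000).
%% '''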
View file
@ -38,71 +38,6 @@
%% TODO This EDoc was written first, and the EDoc and also `-type' and %% TODO This EDoc was written first, and the EDoc and also `-type' and
%% `-spec' definitions for {@link machi_proxy_flu1_client} and {@link %% `-spec' definitions for {@link machi_proxy_flu1_client} and {@link
%% machi_cr_client} must be improved. %% machi_cr_client} must be improved.
%%
%% == Client API implementation notes ==
%%
%% At the moment, there are several modules that implement various
%% subsets of the Machi API. The table below attempts to show how and
%% why they differ.
%%
%% ```
%% |--------------------------+-------+-----+------+------+-------+----------------|
%% | | PB | | # | | Conn | Epoch & NS |
%% | Module name | Level | CR? | FLUS | Impl | Life? | version aware? |
%% |--------------------------+-------+-----+------+------+-------+----------------|
%% | machi_pb_high_api_client | high | yes | many | proc | long | no |
%% | machi_cr_client | low | yes | many | proc | long | no |
%% | machi_proxy_flu1_client | low | no | 1 | proc | long | yes |
%% | machi_flu1_client | low | no | 1 | lib | short | yes |
%% |--------------------------+-------+-----+------+------+-------+----------------|
%% '''
%%
%% In terms of use and API layering, the table rows are in highest`->'lowest
%% order: each level calls the layer immediately below it.
%%
%% <dl>
%% <dt> <b> PB Level</b> </dt>
%% <dd> The Protocol Buffers API is divided logically into two levels,
%% "low" and "high". The low-level protocol is used for intra-chain
%% communication. The high-level protocol is used for clients outside
%% of a Machi chain or Machi cluster of chains.
%% </dd>
%% <dt> <b> CR?</b> </dt>
%% <dd> Does this API support (directly or indirectly) Chain
%% Replication? If `no', then the API has no awareness of multiple
%% replicas of any file or file chunk; unaware clients can only
%% perform operations at a single Machi FLU's file service or
%% projection store service.
%% </dd>
%% <dt> <b> # FLUs</b> </dt>
%% <dd> How many FLUs does this API layer communicate with
%% simultaneously? Note that there is a one-to-one correspondence
%% between this value and the "CR?" column's value.
%% </dd>
%% <dt> <b> Impl</b> </dt>
%% <dd> Implementation: library-only or an Erlang process,
%% e.g., `gen_server'.
%% </dd>
%% <dt> <b> Conn Life?</b> </dt>
%% <dd> Expected TCP session connection life: short or long. At the
%% lowest level, the {@link machi_flu1_client} API implementation takes
%% no effort to reconnect to a remote FLU when its single TCP session
%% is broken. For long-lived connection life APIs, the API's gen_server
%% process will automatically attempt to reconnect to remote FLUs when a
%% TCP session is broken.
%% </dd>
%% <dt> <b> Epoch &amp; NS version aware?</b> </dt>
%% <dd> Are clients of this API responsible for knowing a chain's EpochID
%% and namespace version numbers? If `no', then the server side of the
%% API will automatically attempt to discover/re-discover the EpochID and
%% namespace version numbers whenever they change.
%% </dd>
%% </dl>
%%
%% The only protocol that we expect to be used by entities outside of
%% a single Machi chain or a multi-chain cluster is the "high"
%% Protocol Buffers API. The {@link machi_pb_high_api_client} module
%% is an Erlang reference implementation of this PB API.
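%% A layering sketch using the 5-arity client below (hypothetical host and
%% port; the chunk_pos() result is assumed to be an {Offset, Size, File}
%% triple, matching the append server's assignment tuple):
%% ```
%% {ok, {Offset, Size, File}} =
%%     machi_flu1_client:append_chunk(Host, TcpPort, EpochID,
%%                                    <<"prefix">>, <<"chunk">>),
%% {ok, Chunk} =
%%     machi_flu1_client:read_chunk(Host, TcpPort, EpochID,
%%                                  File, Offset, Size).
%% '''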
-module(machi_flu1_client). -module(machi_flu1_client).
@ -115,15 +50,14 @@
-include_lib("pulse_otp/include/pulse_otp.hrl"). -include_lib("pulse_otp/include/pulse_otp.hrl").
-endif. -endif.
-define(SHORT_TIMEOUT, 2500). -define(HARD_TIMEOUT, 2500).
-define(LONG_TIMEOUT, (60*1000)).
-export([ -export([
%% File API %% File API
append_chunk/6, append_chunk/7, append_chunk/4, append_chunk/5,
append_chunk/8, append_chunk/9, append_chunk_extra/5, append_chunk_extra/6,
read_chunk/7, read_chunk/8, read_chunk/5, read_chunk/6,
checksum_list/2, checksum_list/3, checksum_list/3, checksum_list/4,
list_files/2, list_files/3, list_files/2, list_files/3,
wedge_status/1, wedge_status/2, wedge_status/1, wedge_status/2,
@ -145,113 +79,103 @@
]). ]).
%% For "internal" replication only. %% For "internal" replication only.
-export([ -export([
write_chunk/7, write_chunk/8, write_chunk/5, write_chunk/6,
trim_chunk/6,
delete_migration/3, delete_migration/4, delete_migration/3, delete_migration/4,
trunc_hack/3, trunc_hack/4 trunc_hack/3, trunc_hack/4
]). ]).
-type port_wrap() :: {w,atom(),term()}. -type port_wrap() :: {w,atom(),term()}.
-spec append_chunk(port_wrap(), %% @doc Append a chunk (binary- or iolist-style) of data to a file
'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), %% with `Prefix'.
machi_dt:file_prefix(), machi_dt:chunk(),
machi_dt:chunk_csum()) -> -spec append_chunk(port_wrap(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk()) ->
{ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}.
append_chunk(Sock, NSInfo, EpochID, Prefix, Chunk, CSum) -> append_chunk(Sock, EpochID, Prefix, Chunk) ->
append_chunk(Sock, NSInfo, EpochID, Prefix, Chunk, CSum, append_chunk2(Sock, EpochID, Prefix, Chunk, 0).
#append_opts{}, ?LONG_TIMEOUT).
%% @doc Append a chunk (binary- or iolist-style) of data to a file %% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix' and also request an additional `Extra' bytes. %% with `Prefix'.
%%
%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then
%% the file offsets that follow `Chunk''s position for the following 4K will
%% be reserved by the file sequencer for later write(s) by the
%% `write_chunk()' API.
-spec append_chunk(machi_dt:inet_host(), machi_dt:inet_port(), -spec append_chunk(machi_dt:inet_host(), machi_dt:inet_port(),
'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk()) ->
machi_dt:file_prefix(), machi_dt:chunk(),
machi_dt:chunk_csum()) ->
{ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}. {ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}.
append_chunk(Host, TcpPort, NSInfo, EpochID, Prefix, Chunk, CSum) -> append_chunk(Host, TcpPort, EpochID, Prefix, Chunk) ->
append_chunk(Host, TcpPort, NSInfo, EpochID, Prefix, Chunk, CSum,
#append_opts{}, ?LONG_TIMEOUT).
-spec append_chunk(port_wrap(),
'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(),
machi_dt:file_prefix(), machi_dt:chunk(),
machi_dt:chunk_csum(), machi_dt:append_opts(), timeout()) ->
{ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}.
append_chunk(Sock, NSInfo0, EpochID, Prefix, Chunk, CSum, Opts, Timeout) ->
NSInfo = machi_util:ns_info_default(NSInfo0),
append_chunk2(Sock, NSInfo, EpochID, Prefix, Chunk, CSum, Opts, Timeout).
%% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix' and also request an additional `Extra' bytes.
%%
%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then
%% the file offsets that follow `Chunk''s position for the following 4K will
%% be reserved by the file sequencer for later write(s) by the
%% `write_chunk()' API.
-spec append_chunk(machi_dt:inet_host(), machi_dt:inet_port(),
'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(),
machi_dt:file_prefix(), machi_dt:chunk(),
machi_dt:chunk_csum(), machi_dt:append_opts(), timeout()) ->
{ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}.
append_chunk(Host, TcpPort, NSInfo0, EpochID,
Prefix, Chunk, CSum, Opts, Timeout) ->
Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}),
try try
NSInfo = machi_util:ns_info_default(NSInfo0), append_chunk2(Sock, EpochID, Prefix, Chunk, 0)
append_chunk2(Sock, NSInfo, EpochID, after
Prefix, Chunk, CSum, Opts, Timeout) disconnect(Sock)
end.
%% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix' and also request an additional `Extra' bytes.
%%
%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then
%% the file offsets that follow `Chunk''s position for the following 4K will
%% be reserved by the file sequencer for later write(s) by the
%% `write_chunk()' API.
-spec append_chunk_extra(port_wrap(), machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_size()) ->
{ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}.
append_chunk_extra(Sock, EpochID, Prefix, Chunk, ChunkExtra)
when is_integer(ChunkExtra), ChunkExtra >= 0 ->
append_chunk2(Sock, EpochID, Prefix, Chunk, ChunkExtra).
%% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix' and also request an additional `Extra' bytes.
%%
%% For example, if the `Chunk' size is 1 KByte and `Extra' is 4K Bytes, then
%% the file offsets that follow `Chunk''s position for the following 4K will
%% be reserved by the file sequencer for later write(s) by the
%% `write_chunk()' API.
-spec append_chunk_extra(machi_dt:inet_host(), machi_dt:inet_port(),
machi_dt:epoch_id(), machi_dt:file_prefix(), machi_dt:chunk(), machi_dt:chunk_size()) ->
{ok, machi_dt:chunk_pos()} | {error, machi_dt:error_general()} | {error, term()}.
append_chunk_extra(Host, TcpPort, EpochID, Prefix, Chunk, ChunkExtra)
when is_integer(ChunkExtra), ChunkExtra >= 0 ->
Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}),
try
append_chunk2(Sock, EpochID, Prefix, Chunk, ChunkExtra)
after after
disconnect(Sock) disconnect(Sock)
end. end.
%% @doc Read a chunk of data of size `Size' from `File' at `Offset'. %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.
-spec read_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(), -spec read_chunk(port_wrap(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size()) ->
machi_dt:read_opts_x()) -> {ok, machi_dt:chunk_s()} |
{ok, {[machi_dt:chunk_summary()], [machi_dt:chunk_pos()]}} |
{error, machi_dt:error_general() | 'not_written' | 'partial_read'} | {error, machi_dt:error_general() | 'not_written' | 'partial_read'} |
{error, term()}. {error, term()}.
read_chunk(Sock, NSInfo0, EpochID, File, Offset, Size, Opts0) read_chunk(Sock, EpochID, File, Offset, Size)
when Offset >= ?MINIMUM_OFFSET, Size >= 0 -> when Offset >= ?MINIMUM_OFFSET, Size >= 0 ->
NSInfo = machi_util:ns_info_default(NSInfo0), read_chunk2(Sock, EpochID, File, Offset, Size).
Opts = machi_util:read_opts_default(Opts0),
read_chunk2(Sock, NSInfo, EpochID, File, Offset, Size, Opts).
%% @doc Read a chunk of data of size `Size' from `File' at `Offset'. %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.
-spec read_chunk(machi_dt:inet_host(), machi_dt:inet_port(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), -spec read_chunk(machi_dt:inet_host(), machi_dt:inet_port(), machi_dt:epoch_id(),
machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size()) ->
machi_dt:read_opts_x()) -> {ok, machi_dt:chunk_s()} |
{ok, [machi_dt:chunk_summary()]} |
{error, machi_dt:error_general() | 'not_written' | 'partial_read'} | {error, machi_dt:error_general() | 'not_written' | 'partial_read'} |
{error, term()}. {error, term()}.
read_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Size, Opts0) read_chunk(Host, TcpPort, EpochID, File, Offset, Size)
when Offset >= ?MINIMUM_OFFSET, Size >= 0 -> when Offset >= ?MINIMUM_OFFSET, Size >= 0 ->
Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}),
NSInfo = machi_util:ns_info_default(NSInfo0),
Opts = machi_util:read_opts_default(Opts0),
try try
read_chunk2(Sock, NSInfo, EpochID, File, Offset, Size, Opts) read_chunk2(Sock, EpochID, File, Offset, Size)
after after
disconnect(Sock) disconnect(Sock)
end. end.
%% @doc Fetch the list of chunk checksums for `File'. %% @doc Fetch the list of chunk checksums for `File'.
-spec checksum_list(port_wrap(), machi_dt:file_name()) -> -spec checksum_list(port_wrap(), machi_dt:epoch_id(), machi_dt:file_name()) ->
{ok, binary()} | {ok, binary()} |
{error, machi_dt:error_general() | 'no_such_file' | 'partial_read'} | {error, machi_dt:error_general() | 'no_such_file' | 'partial_read'} |
{error, term()}. {error, term()}.
checksum_list(Sock, File) -> checksum_list(Sock, EpochID, File) ->
checksum_list2(Sock, File). checksum_list2(Sock, EpochID, File).
%% @doc Fetch the list of chunk checksums for `File'. %% @doc Fetch the list of chunk checksums for `File'.
%% %%
@ -275,13 +199,13 @@ checksum_list(Sock, File) ->
%% Details of the encoding used inside the `binary()' blob can be found %% Details of the encoding used inside the `binary()' blob can be found
%% in the EDoc comments for {@link machi_flu1:decode_csum_file_entry/1}. %% in the EDoc comments for {@link machi_flu1:decode_csum_file_entry/1}.
-spec checksum_list(machi_dt:inet_host(), machi_dt:inet_port(), machi_dt:file_name()) -> -spec checksum_list(machi_dt:inet_host(), machi_dt:inet_port(), machi_dt:epoch_id(), machi_dt:file_name()) ->
{ok, binary()} | {ok, binary()} |
{error, machi_dt:error_general() | 'no_such_file'} | {error, term()}. {error, machi_dt:error_general() | 'no_such_file'} | {error, term()}.
checksum_list(Host, TcpPort, File) when is_integer(TcpPort) -> checksum_list(Host, TcpPort, EpochID, File) when is_integer(TcpPort) ->
Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}),
try try
checksum_list2(Sock, File) checksum_list2(Sock, EpochID, File)
after after
disconnect(Sock) disconnect(Sock)
end. end.
@ -308,7 +232,7 @@ list_files(Host, TcpPort, EpochID) when is_integer(TcpPort) ->
%% @doc Fetch the wedge status from the remote FLU. %% @doc Fetch the wedge status from the remote FLU.
-spec wedge_status(port_wrap()) -> -spec wedge_status(port_wrap()) ->
{ok, {boolean(), machi_dt:epoch_id(), machi_dt:namespace_version(),machi_dt:namespace()}} | {error, term()}. {ok, {boolean(), machi_dt:epoch_id()}} | {error, term()}.
wedge_status(Sock) -> wedge_status(Sock) ->
wedge_status2(Sock). wedge_status2(Sock).
@ -316,7 +240,7 @@ wedge_status(Sock) ->
%% @doc Fetch the wedge status from the remote FLU. %% @doc Fetch the wedge status from the remote FLU.
-spec wedge_status(machi_dt:inet_host(), machi_dt:inet_port()) -> -spec wedge_status(machi_dt:inet_host(), machi_dt:inet_port()) ->
{ok, {boolean(), machi_dt:epoch_id(), machi_dt:namespace_version(),machi_dt:namespace()}} | {error, term()}. {ok, {boolean(), machi_dt:epoch_id()}} | {error, term()}.
wedge_status(Host, TcpPort) when is_integer(TcpPort) -> wedge_status(Host, TcpPort) when is_integer(TcpPort) ->
Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}),
try try
@ -527,46 +451,27 @@ disconnect(_) ->
%% @doc Restricted API: Write a chunk of already-sequenced data to %% @doc Restricted API: Write a chunk of already-sequenced data to
%% `File' at `Offset'. %% `File' at `Offset'.
-spec write_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk(), machi_dt:chunk_csum()) -> -spec write_chunk(port_wrap(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) ->
ok | {error, machi_dt:error_general()} | {error, term()}. ok | {error, machi_dt:error_general()} | {error, term()}.
write_chunk(Sock, NSInfo0, EpochID, File, Offset, Chunk, CSum) write_chunk(Sock, EpochID, File, Offset, Chunk)
when Offset >= ?MINIMUM_OFFSET -> when Offset >= ?MINIMUM_OFFSET ->
NSInfo = machi_util:ns_info_default(NSInfo0), write_chunk2(Sock, EpochID, File, Offset, Chunk).
write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk, CSum).
%% @doc Restricted API: Write a chunk of already-sequenced data to %% @doc Restricted API: Write a chunk of already-sequenced data to
%% `File' at `Offset'. %% `File' at `Offset'.
-spec write_chunk(machi_dt:inet_host(), machi_dt:inet_port(), -spec write_chunk(machi_dt:inet_host(), machi_dt:inet_port(),
'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk(), machi_dt:chunk_csum()) -> machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk()) ->
ok | {error, machi_dt:error_general()} | {error, term()}. ok | {error, machi_dt:error_general()} | {error, term()}.
write_chunk(Host, TcpPort, NSInfo0, EpochID, File, Offset, Chunk, CSum) write_chunk(Host, TcpPort, EpochID, File, Offset, Chunk)
when Offset >= ?MINIMUM_OFFSET -> when Offset >= ?MINIMUM_OFFSET ->
Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}), Sock = connect(#p_srvr{proto_mod=?MODULE, address=Host, port=TcpPort}),
try try
NSInfo = machi_util:ns_info_default(NSInfo0), write_chunk2(Sock, EpochID, File, Offset, Chunk)
write_chunk2(Sock, NSInfo, EpochID, File, Offset, Chunk, CSum)
after after
disconnect(Sock) disconnect(Sock)
end. end.
%% @doc Restricted API: Write a chunk of already-sequenced data to
%% `File' at `Offset'.
-spec trim_chunk(port_wrap(), 'undefined' | machi_dt:ns_info(), machi_dt:epoch_id(), machi_dt:file_name(), machi_dt:file_offset(), machi_dt:chunk_size()) ->
ok | {error, machi_dt:error_general()} | {error, term()}.
trim_chunk(Sock, NSInfo0, EpochID, File0, Offset, Size)
when Offset >= ?MINIMUM_OFFSET ->
ReqID = <<"id">>,
NSInfo = machi_util:ns_info_default(NSInfo0),
#ns_info{version=NSVersion, name=NS} = NSInfo,
File = machi_util:make_binary(File0),
true = (Offset >= ?MINIMUM_OFFSET),
Req = machi_pb_translate:to_pb_request(
ReqID,
{low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, 0}),
do_pb_request_common(Sock, ReqID, Req).
%% @doc Restricted API: Delete a file after it has been successfully %% @doc Restricted API: Delete a file after it has been successfully
%% migrated. %% migrated.
@ -611,88 +516,83 @@ trunc_hack(Host, TcpPort, EpochID, File) when is_integer(TcpPort) ->
%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%
read_chunk2(Sock, NSInfo, EpochID, File0, Offset, Size, Opts) -> read_chunk2(Sock, EpochID, File0, Offset, Size) ->
ReqID = <<"id">>, ReqID = <<"id">>,
#ns_info{version=NSVersion, name=NS} = NSInfo,
File = machi_util:make_binary(File0), File = machi_util:make_binary(File0),
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, ReqID,
{low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size, Opts}), {low_read_chunk, EpochID, File, Offset, Size, []}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
append_chunk2(Sock, NSInfo, EpochID, append_chunk2(Sock, EpochID, Prefix0, Chunk0, ChunkExtra) ->
Prefix0, Chunk, CSum0, Opts, Timeout) ->
ReqID = <<"id">>, ReqID = <<"id">>,
Prefix = machi_util:make_binary(Prefix0), {Chunk, CSum_tag, CSum} =
{CSum_tag, CSum} = case CSum0 of case Chunk0 of
<<>> -> X when is_binary(X) ->
{?CSUM_TAG_NONE, <<>>}; {Chunk0, ?CSUM_TAG_NONE, <<>>};
{_Tag, _CS} -> {ChunkCSum, Chk} ->
CSum0; {Tag, CS} = machi_util:unmake_tagged_csum(ChunkCSum),
B when is_binary(B) -> {Chk, Tag, CS}
machi_util:unmake_tagged_csum(CSum0)
end, end,
#ns_info{version=NSVersion, name=NS, locator=NSLocator} = NSInfo, PKey = <<>>, % TODO
%% NOTE: The tuple position of NSLocator is a bit odd, because EpochID Prefix = machi_util:make_binary(Prefix0),
%% _must_ be in the 4th position (as NSV & NS must be in 2nd & 3rd).
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, ReqID,
{low_append_chunk, NSVersion, NS, EpochID, NSLocator, {low_append_chunk, EpochID, PKey, Prefix, Chunk, CSum_tag, CSum,
Prefix, Chunk, CSum_tag, CSum, Opts}), ChunkExtra}),
do_pb_request_common(Sock, ReqID, Req, true, Timeout). do_pb_request_common(Sock, ReqID, Req).
write_chunk2(Sock, NSInfo, EpochID, File0, Offset, Chunk, CSum0) -> write_chunk2(Sock, EpochID, File0, Offset, Chunk0) ->
ReqID = <<"id">>, ReqID = <<"id">>,
#ns_info{version=NSVersion, name=NS} = NSInfo,
File = machi_util:make_binary(File0), File = machi_util:make_binary(File0),
true = (Offset >= ?MINIMUM_OFFSET), true = (Offset >= ?MINIMUM_OFFSET),
{CSum_tag, CSum} = case CSum0 of {Chunk, CSum_tag, CSum} =
<<>> -> case Chunk0 of
{?CSUM_TAG_NONE, <<>>}; X when is_binary(X) ->
{_Tag, _CS} -> {Chunk0, ?CSUM_TAG_NONE, <<>>};
CSum0; {ChunkCSum, Chk} ->
B when is_binary(B) -> {Tag, CS} = machi_util:unmake_tagged_csum(ChunkCSum),
machi_util:unmake_tagged_csum(CSum0) {Chk, Tag, CS}
end, end,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, ReqID,
{low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}), {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag, CSum}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
list2(Sock, EpochID) -> list2(Sock, EpochID) ->
ReqID = <<"id">>, ReqID = <<"id">>,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, {low_skip_wedge, {low_list_files, EpochID}}), ReqID, {low_list_files, EpochID}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
wedge_status2(Sock) -> wedge_status2(Sock) ->
ReqID = <<"id">>, ReqID = <<"id">>,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, {low_skip_wedge, {low_wedge_status}}), ReqID, {low_wedge_status, undefined}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
echo2(Sock, Message) -> echo2(Sock, Message) ->
ReqID = <<"id">>, ReqID = <<"id">>,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, {low_skip_wedge, {low_echo, Message}}), ReqID, {low_echo, undefined, Message}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
checksum_list2(Sock, File) -> checksum_list2(Sock, EpochID, File) ->
ReqID = <<"id">>, ReqID = <<"id">>,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, {low_skip_wedge, {low_checksum_list, File}}), ReqID, {low_checksum_list, EpochID, File}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
delete_migration2(Sock, EpochID, File) -> delete_migration2(Sock, EpochID, File) ->
ReqID = <<"id">>, ReqID = <<"id">>,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, {low_skip_wedge, {low_delete_migration, EpochID, File}}), ReqID, {low_delete_migration, EpochID, File}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
trunc_hack2(Sock, EpochID, File) -> trunc_hack2(Sock, EpochID, File) ->
ReqID = <<"id-trunc">>, ReqID = <<"id-trunc">>,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, {low_skip_wedge, {low_trunc_hack, EpochID, File}}), ReqID, {low_trunc_hack, EpochID, File}),
do_pb_request_common(Sock, ReqID, Req). do_pb_request_common(Sock, ReqID, Req).
get_latest_epochid2(Sock, ProjType) -> get_latest_epochid2(Sock, ProjType) ->
@ -735,18 +635,18 @@ kick_projection_reaction2(Sock, _Options) ->
ReqID = <<42>>, ReqID = <<42>>,
Req = machi_pb_translate:to_pb_request( Req = machi_pb_translate:to_pb_request(
ReqID, {low_proj, {kick_projection_reaction}}), ReqID, {low_proj, {kick_projection_reaction}}),
do_pb_request_common(Sock, ReqID, Req, false, ?LONG_TIMEOUT). do_pb_request_common(Sock, ReqID, Req, false).
do_pb_request_common(Sock, ReqID, Req) -> do_pb_request_common(Sock, ReqID, Req) ->
do_pb_request_common(Sock, ReqID, Req, true, ?LONG_TIMEOUT). do_pb_request_common(Sock, ReqID, Req, true).
do_pb_request_common(Sock, ReqID, Req, GetReply_p, Timeout) -> do_pb_request_common(Sock, ReqID, Req, GetReply_p) ->
erase(bad_sock), erase(bad_sock),
try try
ReqBin = list_to_binary(machi_pb:encode_mpb_ll_request(Req)), ReqBin = list_to_binary(machi_pb:encode_mpb_ll_request(Req)),
ok = w_send(Sock, ReqBin), ok = w_send(Sock, ReqBin),
if GetReply_p -> if GetReply_p ->
case w_recv(Sock, 0, Timeout) of case w_recv(Sock, 0) of
{ok, RespBin} -> {ok, RespBin} ->
Resp = machi_pb:decode_mpb_ll_response(RespBin), Resp = machi_pb:decode_mpb_ll_response(RespBin),
{ReqID2, Reply} = machi_pb_translate:from_pb_response(Resp), {ReqID2, Reply} = machi_pb_translate:from_pb_response(Resp),
@ -792,7 +692,7 @@ w_connect(#p_srvr{proto_mod=?MODULE, address=Host, port=Port, props=Props}=_P)->
case proplists:get_value(session_proto, Props, tcp) of case proplists:get_value(session_proto, Props, tcp) of
tcp -> tcp ->
put(xxx, goofus), put(xxx, goofus),
Sock = machi_util:connect(Host, Port, ?SHORT_TIMEOUT), Sock = machi_util:connect(Host, Port, ?HARD_TIMEOUT),
put(xxx, Sock), put(xxx, Sock),
ok = inet:setopts(Sock, ?PB_PACKET_OPTS), ok = inet:setopts(Sock, ?PB_PACKET_OPTS),
{w,tcp,Sock}; {w,tcp,Sock};
@ -816,8 +716,8 @@ w_close({w,tcp,Sock}) ->
catch gen_tcp:close(Sock), catch gen_tcp:close(Sock),
ok. ok.
w_recv({w,tcp,Sock}, Amt, Timeout) -> w_recv({w,tcp,Sock}, Amt) ->
gen_tcp:recv(Sock, Amt, Timeout). gen_tcp:recv(Sock, Amt, ?HARD_TIMEOUT).
w_send({w,tcp,Sock}, IoData) -> w_send({w,tcp,Sock}, IoData) ->
gen_tcp:send(Sock, IoData). gen_tcp:send(Sock, IoData).
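%% Wire-format sketch: each request is a ReqID-tagged protobuf message sent
%% over the wrapped socket, mirroring do_pb_request_common/3 above:
%% ```
%% ReqBin = list_to_binary(machi_pb:encode_mpb_ll_request(Req)),
%% ok = w_send({w,tcp,Sock}, ReqBin),
%% {ok, RespBin} = w_recv({w,tcp,Sock}, 0),
%% Resp = machi_pb:decode_mpb_ll_response(RespBin).
%% '''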
View file
@ -1,634 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% @doc Ranch protocol callback module to handle PB protocol over
%% transport, including both high and low modes.
%% TODO
%% - Two modes, high and low should be separated at listener level?
-module(machi_flu1_net_server).
-behaviour(gen_server).
-behaviour(ranch_protocol).
-export([start_link/4]).
-export([init/1]).
-export([handle_call/3, handle_cast/2, handle_info/2,
terminate/2, code_change/3]).
-include_lib("kernel/include/file.hrl").
-include("machi.hrl").
-include("machi_pb.hrl").
-include("machi_projection.hrl").
-ifdef(TEST).
-include_lib("eunit/include/eunit.hrl").
-endif. % TEST
-record(state, {
%% Ranch's transport management stuff
ref :: ranch:ref(),
socket :: socket(),
transport :: module(),
%% Machi FLU configurations, common for low and high
data_dir :: string(),
witness :: boolean(),
pb_mode :: undefined | high | low,
%% - Used in projection related requests in low mode
%% - Used in spawning CR client in high mode
proj_store :: pid(),
%% Low mode only items
%% Current best knowledge, used for wedge_self / bad_epoch check
epoch_id :: undefined | machi_dt:epoch_id(),
%% Used in dispatching append_chunk* reqs to the
%% append serializing process
flu_name :: pv1_server(),
%% Used in server_wedge_status to lookup the table
epoch_tab :: ets:tab(),
%% Clustering: cluster map version number
namespace_version = 0 :: machi_dt:namespace_version(),
%% Clustering: my (and my chain's) assignment to a specific namespace
namespace = <<>> :: machi_dt:namespace(),
%% High mode only
high_clnt :: pid(),
%% anything you want
props = [] :: proplists:proplist()
}).
-type socket() :: any().
-type state() :: #state{}.
-spec start_link(ranch:ref(), socket(), module(), [term()]) -> {ok, pid()}.
start_link(Ref, Socket, Transport, [FluName, Witness, DataDir, EpochTab, ProjStore, Props]) ->
NS = proplists:get_value(namespace, Props, <<>>),
true = is_binary(NS),
proc_lib:start_link(?MODULE, init, [#state{ref=Ref,
socket=Socket,
transport=Transport,
flu_name=FluName,
witness=Witness,
data_dir=DataDir,
epoch_tab=EpochTab,
proj_store=ProjStore,
namespace=NS,
props=Props}]).
-spec init(state()) -> no_return().
init(#state{ref=Ref, socket=Socket, transport=Transport}=State) ->
ok = proc_lib:init_ack({ok, self()}),
ok = ranch:accept_ack(Ref),
{_Wedged_p, CurrentEpochID} = lookup_epoch(State),
ok = Transport:setopts(Socket, [{active, once}|?PB_PACKET_OPTS]),
gen_server:enter_loop(?MODULE, [], State#state{epoch_id=CurrentEpochID}).
handle_call(Request, _From, S) ->
lager:warning("~s:handle_call UNKNOWN message: ~w", [?MODULE, Request]),
Reply = {error, {unknown_message, Request}},
{reply, Reply, S}.
handle_cast(_Msg, S) ->
lager:warning("~s:handle_cast UNKNOWN message: ~w", [?MODULE, _Msg]),
{noreply, S}.
%% TODO: Other transport support needed?? TLS/SSL, SCTP
handle_info({tcp, Socket, Data}=_Info, #state{socket=Socket}=S) ->
lager:debug("~s:handle_info: ~w", [?MODULE, _Info]),
transport_received(Socket, Data, S);
handle_info({tcp_closed, Socket}=_Info, #state{socket=Socket}=S) ->
lager:debug("~s:handle_info: ~w", [?MODULE, _Info]),
transport_closed(Socket, S);
handle_info({tcp_error, Socket, Reason}=_Info, #state{socket=Socket}=S) ->
lager:warning("~s:handle_info (socket=~w) tcp_error: ~w", [?MODULE, Socket, Reason]),
transport_error(Socket, Reason, S);
handle_info(_Info, S) ->
lager:warning("~s:handle_info UNKNOWN message: ~w", [?MODULE, _Info]),
{noreply, S}.
terminate(normal, #state{socket=undefined}=_S) ->
ok;
terminate(Reason, #state{socket=undefined}=_S) ->
lager:warning("~s:terminate (socket=undefined): ~w", [?MODULE, Reason]),
ok;
terminate(normal, #state{socket=Socket}=_S) ->
(catch gen_tcp:close(Socket)),
ok;
terminate(Reason, #state{socket=Socket}=_S) ->
lager:warning("~s:terminate (socket=Socket): ~w", [?MODULE, Reason]),
(catch gen_tcp:close(Socket)),
ok.
code_change(_OldVsn, S, _Extra) ->
{ok, S}.
%% -- private
%%%% Common transport handling
-spec transport_received(socket(), machi_dt:chunk(), state()) ->
{noreply, state()}.
transport_received(Socket, <<"QUIT\n">>, #state{socket=Socket}=S) ->
{stop, normal, S};
transport_received(Socket, Bin, #state{transport=Transport}=S) ->
{RespBin, S2} =
case machi_pb:decode_mpb_ll_request(Bin) of
LL_req when LL_req#mpb_ll_request.do_not_alter == 2 ->
{R, NewS} = do_pb_ll_request(LL_req, S),
{maybe_encode_response(R), set_mode(low, NewS)};
_ ->
HL_req = machi_pb:decode_mpb_request(Bin),
1 = HL_req#mpb_request.do_not_alter,
{R, NewS} = do_pb_hl_request(HL_req, make_high_clnt(S)),
{machi_pb:encode_mpb_response(R), set_mode(high, NewS)}
end,
case RespBin of
async_no_response ->
Transport:setopts(Socket, [{active, once}]),
{noreply, S2};
_ ->
case Transport:send(Socket, RespBin) of
ok ->
Transport:setopts(Socket, [{active, once}]),
{noreply, S2};
{error, Reason} ->
transport_error(Socket, Reason, S2)
end
end.
-spec transport_closed(socket(), state()) -> {stop, term(), state()}.
transport_closed(_Socket, S) ->
{stop, normal, S}.
-spec transport_error(socket(), term(), state()) -> no_return().
transport_error(Socket, Reason, #state{transport=Transport}=_S) ->
Msg = io_lib:format("Socket error ~w", [Reason]),
R = #mpb_ll_response{req_id= <<>>,
generic=#mpb_errorresp{code=1, msg=Msg}},
_Resp = machi_pb:encode_mpb_ll_response(R),
%% TODO for TODO comments: comments below with four %s are copy-n-paste'd,
%% then it should be considered they are still open and should be addressed.
%%%% TODO: Weird that sometimes neither catch nor try/catch
%%%% can prevent OTP's SASL from logging an error here.
%%%% Error in process <0.545.0> with exit value: {badarg,[{erlang,port_command,.......
%%%% TODO: is this what causes the intermittent PULSE deadlock errors?
%%%% _ = (catch gen_tcp:send(Sock, _Resp)), timer:sleep(1000),
(catch Transport:close(Socket)),
_ = lager:warning("Socket error (~w -> ~w): ~w",
[Transport:sockname(Socket), Transport:peername(Socket), Reason]),
%% TODO: better to exit with `Reason' without logging?
exit(normal).
maybe_encode_response(async_no_response=R) ->
R;
maybe_encode_response(R) ->
machi_pb:encode_mpb_ll_response(R).
set_mode(Mode, #state{pb_mode=undefined}=S) ->
S#state{pb_mode=Mode};
set_mode(_, S) ->
S.
%%%% Low PB mode %%%%
do_pb_ll_request(#mpb_ll_request{req_id=ReqID}, #state{pb_mode=high}=S) ->
Result = {high_error, 41, "Low protocol request while in high mode"},
{machi_pb_translate:to_pb_response(ReqID, unused, Result), S};
do_pb_ll_request(PB_request, S) ->
Req = machi_pb_translate:from_pb_request(PB_request),
{ReqID, Cmd, Result, S2} =
case Req of
{RqID, {low_skip_wedge, LowSubCmd}=Cmd0} ->
%% Skip wedge check for these unprivileged commands
{Rs, NewS} = do_pb_ll_request3(LowSubCmd, S),
{RqID, Cmd0, Rs, NewS};
{RqID, {low_proj, _LowSubCmd}=Cmd0} ->
{Rs, NewS} = do_pb_ll_request3(Cmd0, S),
{RqID, Cmd0, Rs, NewS};
{RqID, Cmd0} ->
%% All remaining must have NSVersion, NS, & EpochID at next pos
NSVersion = element(2, Cmd0),
NS = element(3, Cmd0),
EpochID = element(4, Cmd0),
{Rs, NewS} = do_pb_ll_request2(NSVersion, NS, EpochID, Cmd0, S),
{RqID, Cmd0, Rs, NewS}
end,
{machi_pb_translate:to_pb_response(ReqID, Cmd, Result), S2}.
%% do_pb_ll_request2(): Verification of epoch details & namespace details.
do_pb_ll_request2(NSVersion, NS, EpochID, CMD, S) ->
{Wedged_p, CurrentEpochID} = lookup_epoch(S),
if not is_tuple(EpochID) orelse tuple_size(EpochID) /= 2 ->
exit({bad_epoch_id, EpochID, for, CMD});
Wedged_p == true ->
{{error, wedged}, S#state{epoch_id=CurrentEpochID}};
EpochID /= CurrentEpochID ->
{Epoch, _} = EpochID,
{CurrentEpoch, _} = CurrentEpochID,
if Epoch < CurrentEpoch ->
{{error, bad_epoch}, S};
true ->
_ = machi_flu1:wedge_myself(S#state.flu_name, CurrentEpochID),
{{error, wedged}, S#state{epoch_id=CurrentEpochID}}
end;
true ->
#state{namespace_version=MyNSVersion, namespace=MyNS} = S,
if NSVersion /= MyNSVersion ->
{{error, bad_epoch}, S};
NS /= MyNS ->
{{error, bad_arg}, S};
true ->
do_pb_ll_request3(CMD, S)
end
end.
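%% Decision sketch for the checks above (Epoch is the first element of an
%% epoch_id tuple):
%% ```
%% %% caller's Epoch <  CurrentEpoch          -> {error, bad_epoch}
%% %% caller's EpochID differs, Epoch is new  -> wedge myself, {error, wedged}
%% %% EpochID equal, NSVersion differs        -> {error, bad_epoch}
%% %% EpochID equal, NS differs               -> {error, bad_arg}
%% %% everything matches                      -> do_pb_ll_request3(CMD, S)
%% '''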
lookup_epoch(#state{epoch_tab=T}) ->
%% TODO: race in shutdown to access ets table after owner dies
ets:lookup_element(T, epoch, 2).
%% Witness status does not matter below.
do_pb_ll_request3({low_echo, Msg}, S) ->
{Msg, S};
do_pb_ll_request3({low_auth, _User, _Pass}, S) ->
{-6, S};
do_pb_ll_request3({low_wedge_status}, S) ->
{do_server_wedge_status(S), S};
do_pb_ll_request3({low_proj, PCMD}, S) ->
{do_server_proj_request(PCMD, S), S};
%% Witness status *matters* below
do_pb_ll_request3({low_append_chunk, NSVersion, NS, EpochID, NSLocator,
Prefix, Chunk, CSum_tag,
CSum, Opts},
#state{witness=false}=S) ->
NSInfo = #ns_info{version=NSVersion, name=NS, locator=NSLocator},
{do_server_append_chunk(NSInfo, EpochID,
Prefix, Chunk, CSum_tag, CSum,
Opts, S), S};
do_pb_ll_request3({low_write_chunk, _NSVersion, _NS, _EpochID, File, Offset, Chunk, CSum_tag,
CSum},
#state{witness=false}=S) ->
{do_server_write_chunk(File, Offset, Chunk, CSum_tag, CSum, S), S};
do_pb_ll_request3({low_read_chunk, _NSVersion, _NS, _EpochID, File, Offset, Size, Opts},
#state{witness=false} = S) ->
{do_server_read_chunk(File, Offset, Size, Opts, S), S};
do_pb_ll_request3({low_trim_chunk, _NSVersion, _NS, _EpochID, File, Offset, Size, TriggerGC},
#state{witness=false}=S) ->
{do_server_trim_chunk(File, Offset, Size, TriggerGC, S), S};
do_pb_ll_request3({low_checksum_list, File},
#state{witness=false}=S) ->
{do_server_checksum_listing(File, S), S};
do_pb_ll_request3({low_list_files, _EpochID},
#state{witness=false}=S) ->
{do_server_list_files(S), S};
do_pb_ll_request3({low_delete_migration, _EpochID, File},
#state{witness=false}=S) ->
{do_server_delete_migration(File, S), S};
do_pb_ll_request3({low_trunc_hack, _EpochID, File},
#state{witness=false}=S) ->
{do_server_trunc_hack(File, S), S};
do_pb_ll_request3(_, #state{witness=true}=S) ->
{{error, bad_arg}, S}. % TODO: new status code??
do_server_proj_request({get_latest_epochid, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:get_latest_epochid(ProjStore, ProjType);
do_server_proj_request({read_latest_projection, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:read_latest_projection(ProjStore, ProjType);
do_server_proj_request({read_projection, ProjType, Epoch},
#state{proj_store=ProjStore}) ->
machi_projection_store:read(ProjStore, ProjType, Epoch);
do_server_proj_request({write_projection, ProjType, Proj},
#state{flu_name=FluName, proj_store=ProjStore}) ->
if Proj#projection_v1.epoch_number == ?SPAM_PROJ_EPOCH ->
%% io:format(user, "DBG ~s ~w ~P\n", [?MODULE, ?LINE, Proj, 5]),
Chmgr = machi_flu_psup:make_fitness_regname(FluName),
[Map] = Proj#projection_v1.dbg,
catch machi_fitness:send_fitness_update_spam(
Chmgr, Proj#projection_v1.author_server, Map);
true ->
catch machi_projection_store:write(ProjStore, ProjType, Proj)
end;
do_server_proj_request({get_all_projections, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:get_all_projections(ProjStore, ProjType);
do_server_proj_request({list_all_projections, ProjType},
#state{proj_store=ProjStore}) ->
machi_projection_store:list_all_projections(ProjStore, ProjType);
do_server_proj_request({kick_projection_reaction},
#state{flu_name=FluName}) ->
%% Tell my chain manager that it might want to react to
%% this new world.
Chmgr = machi_chain_manager1:make_chmgr_regname(FluName),
spawn(fun() ->
catch machi_chain_manager1:trigger_react_to_env(Chmgr)
end),
async_no_response.
do_server_append_chunk(NSInfo, EpochID,
Prefix, Chunk, CSum_tag, CSum,
Opts, S) ->
case sanitize_prefix(Prefix) of
ok ->
do_server_append_chunk2(NSInfo, EpochID,
Prefix, Chunk, CSum_tag, CSum,
Opts, S);
_ ->
{error, bad_arg}
end.
do_server_append_chunk2(NSInfo, EpochID,
Prefix, Chunk, CSum_tag, Client_CSum,
Opts, #state{flu_name=FluName,
epoch_id=EpochID}=_S) ->
%% TODO: Do anything with PKey?
try
TaggedCSum = check_or_make_tagged_checksum(CSum_tag, Client_CSum,Chunk),
R = {seq_append, self(), NSInfo, EpochID,
Prefix, Chunk, TaggedCSum, Opts},
case gen_server:call(FluName, R, 10*1000) of
{assignment, Offset, File} ->
Size = iolist_size(Chunk),
{ok, {Offset, Size, File}};
witness ->
{error, bad_arg};
wedged ->
{error, wedged};
{error, timeout} ->
{error, partition}
end
catch
throw:{bad_csum, _CS} ->
{error, bad_checksum};
error:badarg ->
lager:error("badarg at ~w:do_server_append_chunk2:~w ~w",
[?MODULE, ?LINE, erlang:get_stacktrace()]),
{error, bad_arg}
end.
do_server_write_chunk(File, Offset, Chunk, CSum_tag, CSum, #state{flu_name=FluName}) ->
case sanitize_file_string(File) of
ok ->
case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}) of
{ok, Pid} ->
Meta = [{client_csum_tag, CSum_tag}, {client_csum, CSum}],
machi_file_proxy:write(Pid, Offset, Meta, Chunk);
{error, trimmed} = Error ->
Error
end;
_ ->
{error, bad_arg}
end.
do_server_read_chunk(File, Offset, Size, Opts, #state{flu_name=FluName})->
case sanitize_file_string(File) of
ok ->
case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}) of
{ok, Pid} ->
case machi_file_proxy:read(Pid, Offset, Size, Opts) of
%% XXX FIXME
%% For now we are omitting the checksum data because it blows up
%% protobufs.
{ok, ChunksAndTrimmed} -> {ok, ChunksAndTrimmed};
Other -> Other
end;
{error, trimmed} = Error ->
Error
end;
_ ->
{error, bad_arg}
end.
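%% Proxy sketch: reads and writes funnel through a per-file proxy process
%% managed by machi_flu_metadata_mgr (names as above):
%% ```
%% {ok, Pid} = machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}),
%% {ok, ChunksAndTrimmed} = machi_file_proxy:read(Pid, Offset, Size, Opts).
%% '''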
do_server_trim_chunk(File, Offset, Size, TriggerGC, #state{flu_name=FluName}) ->
lager:debug("Hi there! I'm trimming this: ~s, (~p, ~p), ~p~n",
[File, Offset, Size, TriggerGC]),
case sanitize_file_string(File) of
ok ->
case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}) of
{ok, Pid} ->
machi_file_proxy:trim(Pid, Offset, Size, TriggerGC);
{error, trimmed} = Trimmed ->
%% Should be returned back to (maybe) trigger repair
Trimmed
end;
_ ->
{error, bad_arg}
end.
do_server_checksum_listing(File, #state{flu_name=FluName, data_dir=DataDir}=_S) ->
case sanitize_file_string(File) of
ok ->
case machi_flu_metadata_mgr:start_proxy_pid(FluName, {file, File}) of
{ok, Pid} ->
{ok, List} = machi_file_proxy:checksum_list(Pid),
Bin = erlang:term_to_binary(List),
if byte_size(Bin) > (?PB_MAX_MSG_SIZE - 1024) ->
%% TODO: Fix this limitation by streaming the
%% binary in multiple smaller PB messages.
%% Also, don't read the file all at once. ^_^
error_logger:error_msg("~s:~w oversize ~s\n",
[?MODULE, ?LINE, DataDir]),
{error, bad_arg};
true ->
{ok, Bin}
end;
{error, trimmed} ->
{error, trimmed}
end;
_ ->
{error, bad_arg}
end.
do_server_list_files(#state{data_dir=DataDir}=_S) ->
{_, WildPath} = machi_util:make_data_filename(DataDir, ""),
Files = filelib:wildcard("*", WildPath),
{ok, [begin
{ok, FI} = file:read_file_info(WildPath ++ "/" ++ File),
Size = FI#file_info.size,
{Size, File}
end || File <- Files]}.
do_server_wedge_status(#state{namespace_version=NSVersion, namespace=NS}=S) ->
{Wedged_p, CurrentEpochID0} = lookup_epoch(S),
CurrentEpochID = if CurrentEpochID0 == undefined ->
?DUMMY_PV1_EPOCH;
true ->
CurrentEpochID0
end,
{Wedged_p, CurrentEpochID, NSVersion, NS}.
do_server_delete_migration(File, #state{data_dir=DataDir}=_S) ->
case sanitize_file_string(File) of
ok ->
{_, Path} = machi_util:make_data_filename(DataDir, File),
case file:delete(Path) of
ok ->
ok;
{error, enoent} ->
{error, no_such_file};
_ ->
{error, bad_arg}
end;
_ ->
{error, bad_arg}
end.
do_server_trunc_hack(File, #state{data_dir=DataDir}=_S) ->
case sanitize_file_string(File) of
ok ->
{_, Path} = machi_util:make_data_filename(DataDir, File),
case file:open(Path, [read, write, binary, raw]) of
{ok, FH} ->
try
{ok, ?MINIMUM_OFFSET} = file:position(FH,
?MINIMUM_OFFSET),
ok = file:truncate(FH),
ok
after
file:close(FH)
end;
{error, enoent} ->
{error, no_such_file};
_ ->
{error, bad_arg}
end;
_ ->
{error, bad_arg}
end.
sanitize_file_string(Str) ->
case has_no_prohibited_chars(Str) andalso machi_util:is_valid_filename(Str) of
true -> ok;
false -> error
end.
has_no_prohibited_chars(Str) ->
    case re:run(Str, "/") of
        nomatch ->
            true;
        _ ->
            false
    end.
sanitize_prefix(Prefix) ->
%% We are using '^' as our component delimiter
case re:run(Prefix, "/|\\^") of
nomatch ->
ok;
_ ->
error
end.
check_or_make_tagged_checksum(?CSUM_TAG_NONE, _Client_CSum, Chunk) ->
%% TODO: If the client was foolish enough to use
%% this type of non-checksum, then the client gets
%% what it deserves wrt data integrity, alas. In
%% the client-side Chain Replication method, each
%% server will calculate this independently, which
%% isn't exactly what ought to happen for best data
%% integrity checking. In server-side CR, the csum
%% should be calculated by the head and passed down
%% the chain together with the value.
CS = machi_util:checksum_chunk(Chunk),
machi_util:make_tagged_csum(server_sha, CS);
check_or_make_tagged_checksum(?CSUM_TAG_CLIENT_SHA, Client_CSum, Chunk) ->
CS = machi_util:checksum_chunk(Chunk),
if CS == Client_CSum ->
machi_util:make_tagged_csum(server_sha,
Client_CSum);
true ->
throw({bad_csum, CS})
end.
%%%% High PB mode %%%%
do_pb_hl_request(#mpb_request{req_id=ReqID}, #state{pb_mode=low}=S) ->
Result = {low_error, 41, "High protocol request while in low mode"},
{machi_pb_translate:to_pb_response(ReqID, unused, Result), S};
do_pb_hl_request(PB_request, S) ->
{ReqID, Cmd} = machi_pb_translate:from_pb_request(PB_request),
{Result, S2} = do_pb_hl_request2(Cmd, S),
{machi_pb_translate:to_pb_response(ReqID, Cmd, Result), S2}.
do_pb_hl_request2({high_echo, Msg}, S) ->
{Msg, S};
do_pb_hl_request2({high_auth, _User, _Pass}, S) ->
{-77, S};
do_pb_hl_request2({high_append_chunk=Op, NS, Prefix, Chunk, TaggedCSum, Opts},
#state{high_clnt=Clnt}=S) ->
NSInfo = #ns_info{name=NS}, % TODO populate other fields
todo_perhaps_remind_ns_locator_not_chosen(Op),
Res = machi_cr_client:append_chunk(Clnt, NSInfo,
Prefix, Chunk, TaggedCSum, Opts),
{Res, S};
do_pb_hl_request2({high_write_chunk=Op, File, Offset, Chunk, CSum},
#state{high_clnt=Clnt}=S) ->
NSInfo = undefined,
todo_perhaps_remind_ns_locator_not_chosen(Op),
Res = machi_cr_client:write_chunk(Clnt, NSInfo, File, Offset, Chunk, CSum),
{Res, S};
do_pb_hl_request2({high_read_chunk=Op, File, Offset, Size, Opts},
#state{high_clnt=Clnt}=S) ->
NSInfo = undefined,
todo_perhaps_remind_ns_locator_not_chosen(Op),
Res = machi_cr_client:read_chunk(Clnt, NSInfo, File, Offset, Size, Opts),
{Res, S};
do_pb_hl_request2({high_trim_chunk=Op, File, Offset, Size},
#state{high_clnt=Clnt}=S) ->
NSInfo = undefined,
todo_perhaps_remind_ns_locator_not_chosen(Op),
Res = machi_cr_client:trim_chunk(Clnt, NSInfo, File, Offset, Size),
{Res, S};
do_pb_hl_request2({high_checksum_list, File}, #state{high_clnt=Clnt}=S) ->
Res = machi_cr_client:checksum_list(Clnt, File),
{Res, S};
do_pb_hl_request2({high_list_files}, #state{high_clnt=Clnt}=S) ->
Res = machi_cr_client:list_files(Clnt),
{Res, S}.
make_high_clnt(#state{high_clnt=undefined}=S) ->
{ok, Proj} = machi_projection_store:read_latest_projection(
S#state.proj_store, private),
Ps = [P_srvr || {_, P_srvr} <- orddict:to_list(
Proj#projection_v1.members_dict)],
{ok, Clnt} = machi_cr_client:start_link(Ps),
S#state{high_clnt=Clnt};
make_high_clnt(S) ->
S.
todo_perhaps_remind_ns_locator_not_chosen(Op) ->
Key = {?MODULE, Op},
case get(Key) of
undefined ->
io:format(user, "TODO op ~w is using default locator value\n",
[Op]),
put(Key, true);
_ ->
ok
end.
View file
@ -1,118 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% @doc A supervisor to hold dynamic processes inside single
%% FLU service, ranch listener and append server.
%% TODO: This supervisor may be unnecessary. It was first introduced as
%% a workaround to start the listener dynamically during flu1
%% initialization. Because `machi_flu_psup' is blocked during flu1
%% initialization, adding a child to that supervisor leads to
%% deadlock. If initialization can be done with static arguments only,
%% then this supervisor should be removed and its children added
%% directly under `machi_flu_psup'.
-module(machi_flu1_subsup).
-behaviour(supervisor).
%% public API
-export([start_link/1,
start_append_server/4,
stop_append_server/1,
start_listener/7,
stop_listener/1,
subsup_name/1,
listener_name/1]).
%% supervisor callback
-export([init/1]).
-include("machi_projection.hrl").
-define(SHUTDOWN, 5000).
-define(BACKLOG, 8192).
-spec start_link(pv1_server()) -> {ok, pid()}.
start_link(FluName) ->
supervisor:start_link({local, subsup_name(FluName)}, ?MODULE, []).
-spec start_append_server(pv1_server(), boolean(), boolean(),
undefined | machi_dt:epoch_id()) ->
{ok, pid()}.
start_append_server(FluName, Witness_p, Wedged_p, EpochId) ->
supervisor:start_child(subsup_name(FluName),
append_server_spec(FluName, Witness_p, Wedged_p, EpochId)).
-spec stop_append_server(pv1_server()) -> ok.
stop_append_server(FluName) ->
SubSup = subsup_name(FluName),
ok = supervisor:terminate_child(SubSup, FluName),
ok = supervisor:delete_child(SubSup, FluName).
-spec start_listener(pv1_server(), inet:port_number(), boolean(),
string(), ets:tab(), atom() | pid(),
proplists:proplist()) -> {ok, pid()}.
start_listener(FluName, TcpPort, Witness, DataDir, EpochTab, ProjStore,
Props) ->
supervisor:start_child(subsup_name(FluName),
listener_spec(FluName, TcpPort, Witness, DataDir,
EpochTab, ProjStore, Props)).
-spec stop_listener(pv1_server()) -> ok.
stop_listener(FluName) ->
SupName = subsup_name(FluName),
ListenerName = listener_name(FluName),
ok = supervisor:terminate_child(SupName, ListenerName),
ok = supervisor:delete_child(SupName, ListenerName).
-spec subsup_name(pv1_server()) -> atom().
subsup_name(FluName) when is_atom(FluName) ->
list_to_atom(atom_to_list(FluName) ++ "_flu1_subsup").
-spec listener_name(pv1_server()) -> atom().
listener_name(FluName) ->
list_to_atom(atom_to_list(FluName) ++ "_listener").
%% Supervisor callback
init([]) ->
SupFlags = {one_for_all, 1000, 10},
{ok, {SupFlags, []}}.
%% private
-spec listener_spec(pv1_server(), inet:port_number(), boolean(),
string(), ets:tab(), atom() | pid(),
proplists:proplist()) -> supervisor:child_spec().
listener_spec(FluName, TcpPort, Witness, DataDir, EpochTab, ProjStore, Props) ->
ListenerName = listener_name(FluName),
NbAcceptors = 10,
TcpOpts = [{port, TcpPort}, {backlog, ?BACKLOG}],
NetServerOpts = [FluName, Witness, DataDir, EpochTab, ProjStore, Props],
ranch:child_spec(ListenerName, NbAcceptors,
ranch_tcp, TcpOpts,
machi_flu1_net_server, NetServerOpts).
-spec append_server_spec(pv1_server(), boolean(), boolean(),
undefined | machi_dt:epoch_id()) -> supervisor:child_spec().
append_server_spec(FluName, Witness_p, Wedged_p, EpochId) ->
{FluName, {machi_flu1_append_server, start_link,
[FluName, Witness_p, Wedged_p, EpochId]},
permanent, ?SHUTDOWN, worker, [machi_flu1_append_server]}.
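%% Wiring sketch: one subsup per FLU holds the ranch listener plus the
%% append server (hypothetical FLU name 'a'):
%% ```
%% {ok, _SubSup} = machi_flu1_subsup:start_link(a),
%% {ok, _Appender} =
%%     machi_flu1_subsup:start_append_server(a, false, false, undefined),
%% '''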
View file
@ -51,8 +51,8 @@
-export([ -export([
child_spec/2, child_spec/2,
start_link/2, start_link/2,
find_or_make_filename_from_prefix/4, find_or_make_filename_from_prefix/3,
increment_prefix_sequence/3, increment_prefix_sequence/2,
list_files_by_prefix/2 list_files_by_prefix/2
]). ]).
@ -67,13 +67,12 @@
]). ]).
-define(TIMEOUT, 10 * 1000). -define(TIMEOUT, 10 * 1000).
-include("machi.hrl"). %% included for #ns_info record -include("machi_projection.hrl"). %% included for pv1_epoch_n type
-include("machi_projection.hrl"). %% included for pv1_epoch type
-record(state, {fluname :: atom(), -record(state, {fluname :: atom(),
tid :: ets:tid(), tid :: ets:tid(),
datadir :: string(), datadir :: file:dir(),
epoch :: pv1_epoch() epoch :: pv1_epoch_n()
}). }).
%% public API %% public API
@ -89,30 +88,26 @@ start_link(FluName, DataDir) when is_atom(FluName) andalso is_list(DataDir) ->
gen_server:start_link({local, N}, ?MODULE, [FluName, DataDir], []). gen_server:start_link({local, N}, ?MODULE, [FluName, DataDir], []).
-spec find_or_make_filename_from_prefix( FluName :: atom(), -spec find_or_make_filename_from_prefix( FluName :: atom(),
EpochId :: pv1_epoch(), EpochId :: pv1_epoch_n(),
Prefix :: {prefix, string()}, Prefix :: {prefix, string()} ) ->
machi_dt:ns_info()) ->
{file, Filename :: string()} | {error, Reason :: term() } | timeout. {file, Filename :: string()} | {error, Reason :: term() } | timeout.
% @doc Find the latest available or make a filename from a prefix. A prefix % @doc Find the latest available or make a filename from a prefix. A prefix
% should be in the form of a tagged tuple `{prefix, P}'. Returns a tagged % should be in the form of a tagged tuple `{prefix, P}'. Returns a tagged
% tuple in the form of `{file, F}' or an `{error, Reason}' % tuple in the form of `{file, F}' or an `{error, Reason}'
find_or_make_filename_from_prefix(FluName, EpochId, find_or_make_filename_from_prefix(FluName, EpochId, {prefix, Prefix}) when is_atom(FluName) ->
{prefix, Prefix},
#ns_info{}=NSInfo)
when is_atom(FluName) ->
N = make_filename_mgr_name(FluName), N = make_filename_mgr_name(FluName),
gen_server:call(N, {find_filename, FluName, EpochId, NSInfo, Prefix}, ?TIMEOUT); gen_server:call(N, {find_filename, EpochId, Prefix}, ?TIMEOUT);
find_or_make_filename_from_prefix(_FluName, _EpochId, Other, Other2) -> find_or_make_filename_from_prefix(_FluName, _EpochId, Other) ->
lager:error("~p is not a valid prefix/locator ~p", [Other, Other2]), lager:error("~p is not a valid prefix.", [Other]),
error(badarg). error(badarg).
-spec increment_prefix_sequence( FluName :: atom(), NSInfo :: machi_dt:ns_info(), Prefix :: {prefix, string()} ) -> -spec increment_prefix_sequence( FluName :: atom(), Prefix :: {prefix, string()} ) ->
ok | {error, Reason :: term() } | timeout. ok | {error, Reason :: term() } | timeout.
% @doc Increment the sequence counter for a given prefix. Prefix should % @doc Increment the sequence counter for a given prefix. Prefix should
% be in the form of `{prefix, P}'. % be in the form of `{prefix, P}'.
increment_prefix_sequence(FluName, #ns_info{}=NSInfo, {prefix, Prefix}) when is_atom(FluName) -> increment_prefix_sequence(FluName, {prefix, Prefix}) when is_atom(FluName) ->
gen_server:call(make_filename_mgr_name(FluName), {increment_sequence, NSInfo, Prefix}, ?TIMEOUT); gen_server:call(make_filename_mgr_name(FluName), {increment_sequence, Prefix}, ?TIMEOUT);
increment_prefix_sequence(_FluName, _NSInfo, Other) -> increment_prefix_sequence(_FluName, Other) ->
lager:error("~p is not a valid prefix.", [Other]), lager:error("~p is not a valid prefix.", [Other]),
error(badarg). error(badarg).
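%% A usage sketch of the two calls above, as they appear on the master
%% side of this diff (the FLU name, epoch id, and namespace values are
%% hypothetical):
%%
%%   NSInfo = #ns_info{name = <<"default">>, locator = 0},
%%   {file, F} = find_or_make_filename_from_prefix(a, EpochId,
%%                                                 {prefix, "pre"}, NSInfo),
%%   ok = increment_prefix_sequence(a, NSInfo, {prefix, "pre"}).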
@ -130,10 +125,7 @@ list_files_by_prefix(_FluName, Other) ->
%% gen_server API %% gen_server API
init([FluName, DataDir]) -> init([FluName, DataDir]) ->
Tid = ets:new(make_filename_mgr_name(FluName), [named_table, {read_concurrency, true}]), Tid = ets:new(make_filename_mgr_name(FluName), [named_table, {read_concurrency, true}]),
{ok, #state{fluname = FluName, {ok, #state{ fluname = FluName, epoch = 0, datadir = DataDir, tid = Tid }}.
epoch = ?DUMMY_PV1_EPOCH,
datadir = DataDir,
tid = Tid}}.
handle_cast(Req, State) -> handle_cast(Req, State) ->
lager:warning("Got unknown cast ~p", [Req]), lager:warning("Got unknown cast ~p", [Req]),
@ -143,23 +135,23 @@ handle_cast(Req, State) ->
%% the FLU has already validated that the caller's epoch id and the FLU's epoch id %% the FLU has already validated that the caller's epoch id and the FLU's epoch id
%% are the same. So we *assume* that remains the case here - that is to say, we %% are the same. So we *assume* that remains the case here - that is to say, we
%% are not wedged. %% are not wedged.
handle_call({find_filename, FluName, EpochId, NSInfo, Prefix}, _From, handle_call({find_filename, EpochId, Prefix}, _From, S = #state{ datadir = DataDir,
S = #state{ datadir = DataDir, epoch = EpochId, tid = Tid }) -> epoch = EpochId,
tid = Tid }) ->
%% Our state and the caller's epoch ids are the same. Business as usual. %% Our state and the caller's epoch ids are the same. Business as usual.
File = handle_find_file(FluName, Tid, NSInfo, Prefix, DataDir), File = handle_find_file(Tid, Prefix, DataDir),
{reply, {file, File}, S}; {reply, {file, File}, S};
handle_call({find_filename, _FluName, EpochId, NSInfo, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) -> handle_call({find_filename, EpochId, Prefix}, _From, S = #state{ datadir = DataDir, tid = Tid }) ->
%% If the epoch id in our state and the caller's epoch id were the same, it would've %% If the epoch id in our state and the caller's epoch id were the same, it would've
%% matched the above clause. Since we're here, we know that they are different. %% matched the above clause. Since we're here, we know that they are different.
%% If epoch ids between our state and the caller's are different, we must increment the %% If epoch ids between our state and the caller's are different, we must increment the
%% sequence number, generate a filename and then cache it. %% sequence number, generate a filename and then cache it.
File = increment_and_cache_filename(Tid, DataDir, NSInfo, Prefix), File = increment_and_cache_filename(Tid, DataDir, Prefix),
{reply, {file, File}, S#state{epoch = EpochId}}; {reply, {file, File}, S#state{epoch = EpochId}};
handle_call({increment_sequence, #ns_info{name=NS, locator=NSLocator}, Prefix}, _From, S = #state{ datadir = DataDir, tid=Tid }) -> handle_call({increment_sequence, Prefix}, _From, S = #state{ datadir = DataDir }) ->
NSInfo = #ns_info{name=NS, locator=NSLocator}, ok = machi_util:increment_max_filenum(DataDir, Prefix),
_File = increment_and_cache_filename(Tid, DataDir, NSInfo, Prefix),
{reply, ok, S}; {reply, ok, S};
handle_call({list_files, Prefix}, From, S = #state{ datadir = DataDir }) -> handle_call({list_files, Prefix}, From, S = #state{ datadir = DataDir }) ->
spawn(fun() -> spawn(fun() ->
@ -192,41 +184,65 @@ generate_uuid_v4_str() ->
io_lib:format("~8.16.0b-~4.16.0b-4~3.16.0b-~4.16.0b-~12.16.0b", io_lib:format("~8.16.0b-~4.16.0b-4~3.16.0b-~4.16.0b-~12.16.0b",
[A, B, C band 16#0fff, D band 16#3fff bor 16#8000, E]). [A, B, C band 16#0fff, D band 16#3fff bor 16#8000, E]).
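%% The masks above are what make this a version-4 UUID: the literal "4"
%% in the format string plus `C band 16#0fff' produce the version nibble,
%% and `D band 16#3fff bor 16#8000' forces the top two bits of the
%% variant field to 2#10, per RFC 4122.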
find_file(DataDir, Prefix, N) ->
{_Filename, Path} = machi_util:make_data_filename(DataDir, Prefix, "*", N),
filelib:wildcard(Path).
list_files(DataDir, Prefix) -> list_files(DataDir, Prefix) ->
{F_bin, Path} = machi_util:make_data_filename(DataDir, "*^" ++ Prefix ++ "^*"), {F, Path} = machi_util:make_data_filename(DataDir, Prefix, "*", "*"),
filelib:wildcard(binary_to_list(F_bin), filename:dirname(Path)). filelib:wildcard(F, filename:dirname(Path)).
make_filename_mgr_name(FluName) when is_atom(FluName) -> make_filename_mgr_name(FluName) when is_atom(FluName) ->
list_to_atom(atom_to_list(FluName) ++ "_filename_mgr"). list_to_atom(atom_to_list(FluName) ++ "_filename_mgr").
handle_find_file(_FluName, Tid, #ns_info{name=NS, locator=NSLocator}, Prefix, DataDir) -> handle_find_file(Tid, Prefix, DataDir) ->
case ets:lookup(Tid, {NS, NSLocator, Prefix}) of N = machi_util:read_max_filenum(DataDir, Prefix),
{File, Cleanup} = case find_file(DataDir, Prefix, N) of
[] -> [] ->
N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), {find_or_make_filename(Tid, DataDir, Prefix, N), false};
F = generate_filename(DataDir, NS, NSLocator, Prefix, N), [H] -> {H, true};
true = ets:insert(Tid, {{NS, NSLocator, Prefix}, F}), [Fn | _ ] = L ->
lager:warning(
"Searching for a matching file to prefix ~p and sequence number ~p gave multiples: ~p",
[Prefix, N, L]),
{Fn, true}
end,
maybe_cleanup(Tid, {Prefix, N}, Cleanup),
filename:basename(File).
find_or_make_filename(Tid, DataDir, Prefix, N) ->
case ets:lookup(Tid, {Prefix, N}) of
[] ->
F = generate_filename(DataDir, Prefix, N),
true = ets:insert_new(Tid, {{Prefix, N}, F}),
F; F;
[{_Key, File}] -> [{_Key, File}] ->
File File
end. end.
generate_filename(DataDir, NS, NSLocator, Prefix, N) -> generate_filename(DataDir, Prefix, N) ->
{F, _Q} = machi_util:make_data_filename( {F, _} = machi_util:make_data_filename(
DataDir, DataDir,
NS, NSLocator, Prefix, Prefix,
generate_uuid_v4_str(), generate_uuid_v4_str(),
N), N),
binary_to_list(F). binary_to_list(F).
increment_and_cache_filename(Tid, DataDir, #ns_info{name=NS,locator=NSLocator}, Prefix) -> maybe_cleanup(_Tid, _Key, false) ->
ok = machi_util:increment_max_filenum(DataDir, NS, NSLocator, Prefix), ok;
N = machi_util:read_max_filenum(DataDir, NS, NSLocator, Prefix), maybe_cleanup(Tid, Key, true) ->
F = generate_filename(DataDir, NS, NSLocator, Prefix, N), true = ets:delete(Tid, Key).
true = ets:insert(Tid, {{NS, NSLocator, Prefix}, F}),
F. increment_and_cache_filename(Tid, DataDir, Prefix) ->
ok = machi_util:increment_max_filenum(DataDir, Prefix),
N = machi_util:read_max_filenum(DataDir, Prefix),
F = generate_filename(DataDir, Prefix, N),
true = ets:insert_new(Tid, {{Prefix, N}, F}),
filename:basename(F).
-ifdef(TEST). -ifdef(TEST).
-endif. -endif.

View file

@ -34,19 +34,15 @@
-module(machi_flu_metadata_mgr). -module(machi_flu_metadata_mgr).
-behaviour(gen_server). -behaviour(gen_server).
-include("machi.hrl").
-define(MAX_MGRS, 10). %% number of managers to start by default. -define(MAX_MGRS, 10). %% number of managers to start by default.
-define(HASH(X), erlang:phash2(X)). %% hash algorithm to use -define(HASH(X), erlang:phash2(X)). %% hash algorithm to use
-define(TIMEOUT, 10 * 1000). %% 10 second timeout -define(TIMEOUT, 10 * 1000). %% 10 second timeout
-define(KNOWN_FILES_LIST_PREFIX, "known_files_").
-record(state, {fluname :: atom(), -record(state, {fluname :: atom(),
datadir :: string(), datadir :: string(),
tid :: ets:tid(), tid :: ets:tid(),
cnt :: non_neg_integer(), cnt :: non_neg_integer()
trimmed_files :: machi_plist:plist()
}). }).
%% This record goes in the ets table where filename is the key %% This record goes in the ets table where filename is the key
@ -63,9 +59,7 @@
lookup_proxy_pid/2, lookup_proxy_pid/2,
start_proxy_pid/2, start_proxy_pid/2,
stop_proxy_pid/2, stop_proxy_pid/2,
stop_proxy_pid_rollover/2, build_metadata_mgr_name/2
build_metadata_mgr_name/2,
trim_file/2
]). ]).
%% gen_server callbacks %% gen_server callbacks
@ -101,29 +95,12 @@ start_proxy_pid(FluName, {file, Filename}) ->
gen_server:call(get_manager_atom(FluName, Filename), {start_proxy_pid, Filename}, ?TIMEOUT). gen_server:call(get_manager_atom(FluName, Filename), {start_proxy_pid, Filename}, ?TIMEOUT).
stop_proxy_pid(FluName, {file, Filename}) -> stop_proxy_pid(FluName, {file, Filename}) ->
gen_server:call(get_manager_atom(FluName, Filename), {stop_proxy_pid, false, Filename}, ?TIMEOUT). gen_server:call(get_manager_atom(FluName, Filename), {stop_proxy_pid, Filename}, ?TIMEOUT).
stop_proxy_pid_rollover(FluName, {file, Filename}) ->
gen_server:call(get_manager_atom(FluName, Filename), {stop_proxy_pid, true, Filename}, ?TIMEOUT).
trim_file(FluName, {file, Filename}) ->
gen_server:call(get_manager_atom(FluName, Filename), {trim_file, Filename}, ?TIMEOUT).
%% gen_server callbacks %% gen_server callbacks
init([FluName, Name, DataDir, Num]) -> init([FluName, Name, DataDir, Num]) ->
%% Important: we'll need another persistent store to
%% remember deleted (trimmed) files, to prevent resurrection after
%% a FLU restart and append.
FileListFileName =
filename:join([DataDir, ?KNOWN_FILES_LIST_PREFIX ++ atom_to_list(FluName)]),
{ok, PList} = machi_plist:open(FileListFileName, []),
%% TODO: make sure all of these files are non-existent; if any files
%% remain here, just delete them. They're in the list *because* they're
%% all trimmed.
Tid = ets:new(Name, [{keypos, 2}, {read_concurrency, true}, {write_concurrency, true}]), Tid = ets:new(Name, [{keypos, 2}, {read_concurrency, true}, {write_concurrency, true}]),
{ok, #state{fluname = FluName, datadir = DataDir, tid = Tid, cnt = Num, {ok, #state{ fluname = FluName, datadir = DataDir, tid = Tid, cnt = Num}}.
trimmed_files=PList}}.
handle_cast(Req, State) -> handle_cast(Req, State) ->
lager:warning("Got unknown cast ~p", [Req]), lager:warning("Got unknown cast ~p", [Req]),
@ -136,11 +113,7 @@ handle_call({proxy_pid, Filename}, _From, State = #state{ tid = Tid }) ->
end, end,
{reply, Reply, State}; {reply, Reply, State};
handle_call({start_proxy_pid, Filename}, _From, handle_call({start_proxy_pid, Filename}, _From, State = #state{ fluname = N, tid = Tid, datadir = D }) ->
State = #state{ fluname = N, tid = Tid, datadir = D,
trimmed_files=TrimmedFiles}) ->
case machi_plist:find(TrimmedFiles, Filename) of
false ->
NewR = case lookup_md(Tid, Filename) of NewR = case lookup_md(Tid, Filename) of
not_found -> not_found ->
start_file_proxy(N, D, Filename); start_file_proxy(N, D, Filename);
@ -151,11 +124,7 @@ handle_call({start_proxy_pid, Filename}, _From,
end, end,
update_ets(Tid, NewR), update_ets(Tid, NewR),
{reply, {ok, NewR#md.proxy_pid}, State}; {reply, {ok, NewR#md.proxy_pid}, State};
true -> handle_call({stop_proxy_pid, Filename}, _From, State = #state{ tid = Tid }) ->
{reply, {error, trimmed}, State}
end;
handle_call({stop_proxy_pid, Rollover_p, Filename}, _From, State = #state{ tid = Tid }) ->
case lookup_md(Tid, Filename) of case lookup_md(Tid, Filename) of
not_found -> not_found ->
ok; ok;
@ -163,25 +132,11 @@ handle_call({stop_proxy_pid, Rollover_p, Filename}, _From, State = #state{ tid =
ok; ok;
#md{ proxy_pid = Pid, mref = M } = R -> #md{ proxy_pid = Pid, mref = M } = R ->
demonitor(M, [flush]), demonitor(M, [flush]),
if Rollover_p ->
do_rollover(Filename, State);
true ->
machi_file_proxy:stop(Pid), machi_file_proxy:stop(Pid),
update_ets(Tid, R#md{ proxy_pid = undefined, update_ets(Tid, R#md{ proxy_pid = undefined, mref = undefined })
mref = undefined })
end
end, end,
{reply, ok, State}; {reply, ok, State};
handle_call({trim_file, Filename}, _,
S = #state{trimmed_files = TrimmedFiles }) ->
case machi_plist:add(TrimmedFiles, Filename) of
{ok, TrimmedFiles2} ->
{reply, ok, S#state{trimmed_files=TrimmedFiles2}};
Error ->
{reply, Error, S}
end;
handle_call(Req, From, State) -> handle_call(Req, From, State) ->
lager:warning("Got unknown call ~p from ~p", [Req, From]), lager:warning("Got unknown call ~p from ~p", [Req, From]),
{reply, hoge, State}. {reply, hoge, State}.
@ -191,25 +146,41 @@ handle_info({'DOWN', Mref, process, Pid, normal}, State = #state{ tid = Tid }) -
clear_ets(Tid, Mref), clear_ets(Tid, Mref),
{noreply, State}; {noreply, State};
handle_info({'DOWN', Mref, process, Pid, file_rollover}, State = #state{ fluname = FluName,
tid = Tid }) ->
lager:info("file proxy ~p shutdown because of file rollover", [Pid]),
R = get_md_record_by_mref(Tid, Mref),
[Prefix | _Rest] = machi_util:parse_filename({file, R#md.filename}),
%% We only increment the counter here. The filename will be generated on the
%% next append request to that prefix, and since that filename will have a new
%% sequence number, it will probably be associated with a different metadata
%% manager. That's why we don't want to generate a new filename immediately
%% and use it to start a new file proxy.
ok = machi_flu_filename_mgr:increment_prefix_sequence(FluName, {prefix, Prefix}),
%% purge our ets table of this entry completely since it is likely the
%% new filename (whenever it comes) will be in a different manager than
%% us.
purge_ets(Tid, R),
{noreply, State};
handle_info({'DOWN', Mref, process, Pid, wedged}, State = #state{ tid = Tid }) -> handle_info({'DOWN', Mref, process, Pid, wedged}, State = #state{ tid = Tid }) ->
lager:error("file proxy ~p shutdown because it's wedged", [Pid]), lager:error("file proxy ~p shutdown because it's wedged", [Pid]),
clear_ets(Tid, Mref), clear_ets(Tid, Mref),
{noreply, State}; {noreply, State};
handle_info({'DOWN', _Mref, process, Pid, trimmed}, State = #state{ tid = _Tid }) ->
lager:debug("file proxy ~p shutdown because the file was trimmed", [Pid]),
{noreply, State};
handle_info({'DOWN', Mref, process, Pid, Error}, State = #state{ tid = Tid }) -> handle_info({'DOWN', Mref, process, Pid, Error}, State = #state{ tid = Tid }) ->
lager:error("file proxy ~p shutdown because ~p", [Pid, Error]), lager:error("file proxy ~p shutdown because ~p", [Pid, Error]),
clear_ets(Tid, Mref), clear_ets(Tid, Mref),
{noreply, State}; {noreply, State};
handle_info(Info, State) -> handle_info(Info, State) ->
lager:warning("Got unknown info ~p", [Info]), lager:warning("Got unknown info ~p", [Info]),
{noreply, State}. {noreply, State}.
terminate(Reason, _State = #state{trimmed_files=TrimmedFiles}) -> terminate(Reason, _State) ->
lager:info("Shutting down because ~p", [Reason]), lager:info("Shutting down because ~p", [Reason]),
machi_plist:close(TrimmedFiles),
ok. ok.
code_change(_OldVsn, State, _Extra) -> code_change(_OldVsn, State, _Extra) ->
@ -257,41 +228,14 @@ clear_ets(Tid, Mref) ->
update_ets(Tid, R#md{ proxy_pid = undefined, mref = undefined }). update_ets(Tid, R#md{ proxy_pid = undefined, mref = undefined }).
purge_ets(Tid, R) -> purge_ets(Tid, R) ->
true = ets:delete_object(Tid, R). ok = ets:delete_object(Tid, R).
get_md_record_by_mref(Tid, Mref) -> get_md_record_by_mref(Tid, Mref) ->
[R] = ets:match_object(Tid, {md, '_', '_', Mref}), [R] = ets:match_object(Tid, {md, '_', '_', Mref}),
R. R.
get_md_record_by_filename(Tid, Filename) ->
[R] = ets:lookup(Tid, Filename),
R.
get_env(Setting, Default) -> get_env(Setting, Default) ->
case application:get_env(machi, Setting) of case application:get_env(machi, Setting) of
undefined -> Default; undefined -> Default;
{ok, V} -> V {ok, V} -> V
end. end.
do_rollover(Filename, _State = #state{ fluname = FluName,
tid = Tid }) ->
R = get_md_record_by_filename(Tid, Filename),
lager:info("file ~p proxy ~p shutdown because of file rollover",
[Filename, R#md.proxy_pid]),
{Prefix, NS, NSLocator, _, _} =
machi_util:parse_filename(R#md.filename),
%% We only increment the counter here. The filename will be generated on the
%% next append request to that prefix, and since that filename will have a new
%% sequence number, it will probably be associated with a different metadata
%% manager. That's why we don't want to generate a new filename immediately
%% and use it to start a new file proxy.
NSInfo = #ns_info{name=NS, locator=NSLocator},
lager:warning("INCR: ~p ~p\n", [FluName, Prefix]),
ok = machi_flu_filename_mgr:increment_prefix_sequence(FluName, NSInfo, {prefix, Prefix}),
%% purge our ets table of this entry completely since it is likely the
%% new filename (whenever it comes) will be in a different manager than
%% us.
purge_ets(Tid, R),
ok.

View file

@ -83,8 +83,6 @@
%% Supervisor callbacks %% Supervisor callbacks
-export([init/1]). -export([init/1]).
make_package_spec(#p_srvr{name=FluName, port=TcpPort, props=Props}) when is_list(Props) ->
make_package_spec({FluName, TcpPort, Props});
make_package_spec({FluName, TcpPort, Props}) when is_list(Props) -> make_package_spec({FluName, TcpPort, Props}) when is_list(Props) ->
FluDataDir = get_env(flu_data_dir, undefined_is_invalid), FluDataDir = get_env(flu_data_dir, undefined_is_invalid),
MyDataDir = filename:join(FluDataDir, atom_to_list(FluName)), MyDataDir = filename:join(FluDataDir, atom_to_list(FluName)),
@ -96,7 +94,7 @@ make_package_spec(FluName, TcpPort, DataDir, Props) ->
permanent, ?SHUTDOWN, supervisor, []}. permanent, ?SHUTDOWN, supervisor, []}.
start_flu_package(#p_srvr{name=FluName, port=TcpPort, props=Props}) -> start_flu_package(#p_srvr{name=FluName, port=TcpPort, props=Props}) ->
DataDir = get_data_dir(FluName, Props), DataDir = get_data_dir(Props),
start_flu_package(FluName, TcpPort, DataDir, Props). start_flu_package(FluName, TcpPort, DataDir, Props).
start_flu_package(FluName, TcpPort, DataDir, Props) -> start_flu_package(FluName, TcpPort, DataDir, Props) ->
@ -145,19 +143,16 @@ init([FluName, TcpPort, DataDir, Props0]) ->
FProxySupSpec = machi_file_proxy_sup:child_spec(FluName), FProxySupSpec = machi_file_proxy_sup:child_spec(FluName),
Flu1SubSupSpec = {machi_flu1_subsup:subsup_name(FluName),
{machi_flu1_subsup, start_link, [FluName]},
permanent, ?SHUTDOWN, supervisor, []},
FluSpec = {FluName, FluSpec = {FluName,
{machi_flu1, start_link, {machi_flu1, start_link,
[ [{FluName, TcpPort, DataDir}|Props] ]}, [ [{FluName, TcpPort, DataDir}|Props] ]},
permanent, ?SHUTDOWN, worker, []}, permanent, ?SHUTDOWN, worker, []},
{ok, {SupFlags, [ {ok, {SupFlags, [
ProjSpec, FitnessSpec, MgrSpec, ProjSpec, FitnessSpec, MgrSpec,
FProxySupSpec, FNameMgrSpec, MetaSupSpec, FProxySupSpec, FNameMgrSpec, MetaSupSpec,
Flu1SubSupSpec, FluSpec]}}. FluSpec]}}.
make_flu_regname(FluName) when is_atom(FluName) -> make_flu_regname(FluName) when is_atom(FluName) ->
FluName. FluName.
@ -180,11 +175,8 @@ get_env(Setting, Default) ->
{ok, V} -> V {ok, V} -> V
end. end.
get_data_dir(FluName, Props) -> get_data_dir(Props) ->
case proplists:get_value(data_dir, Props) of case proplists:get_value(data_dir, Props) of
Path when is_list(Path) -> Path when is_list(Path) ->
Path; Path
undefined ->
{ok, Dir} = application:get_env(machi, flu_data_dir),
Dir ++ "/" ++ atom_to_list(FluName)
end. end.

View file

@ -21,9 +21,6 @@
%% @doc Supervisor for Machi FLU servers and their related support %% @doc Supervisor for Machi FLU servers and their related support
%% servers. %% servers.
%% %%
%% Responsibility for managing FLU and chain lifecycle after the initial
%% application startup is delegated to {@link machi_lifecycle_mgr}.
%%
%% See {@link machi_flu_psup} for an illustration of the entire Machi %% See {@link machi_flu_psup} for an illustration of the entire Machi
%% application process structure. %% application process structure.
@ -32,11 +29,8 @@
-behaviour(supervisor). -behaviour(supervisor).
-include("machi.hrl"). -include("machi.hrl").
-include("machi_projection.hrl").
-include("machi_verbose.hrl"). -include("machi_verbose.hrl").
-ifdef(TEST).
-compile(export_all).
-ifdef(PULSE). -ifdef(PULSE).
-compile({parse_transform, pulse_instrument}). -compile({parse_transform, pulse_instrument}).
-include_lib("pulse_otp/include/pulse_otp.hrl"). -include_lib("pulse_otp/include/pulse_otp.hrl").
@ -44,12 +38,9 @@
-else. -else.
-define(SHUTDOWN, 5000). -define(SHUTDOWN, 5000).
-endif. -endif.
-endif. %TEST
%% API %% API
-export([start_link/0, -export([start_link/0]).
get_initial_flus/0, load_rc_d_files_from_dir/1,
sanitize_p_srvr_records/1]).
%% Supervisor callbacks %% Supervisor callbacks
-export([init/1]). -export([init/1]).
@ -78,66 +69,5 @@ get_initial_flus() ->
[]. [].
-else. % PULSE -else. % PULSE
get_initial_flus() -> get_initial_flus() ->
DoesNotExist = "/tmp/does/not/exist", application:get_env(machi, initial_flus, []).
ConfigDir = case application:get_env(machi, flu_config_dir, DoesNotExist) of
DoesNotExist ->
DoesNotExist;
Dir ->
Dir
end,
Ps = [P || {_File, P} <- load_rc_d_files_from_dir(ConfigDir)],
sanitize_p_srvr_records(Ps).
-endif. % PULSE -endif. % PULSE
load_rc_d_files_from_dir(Dir) ->
Files = filelib:wildcard(Dir ++ "/*"),
[case file:consult(File) of
{ok, [X]} ->
{File, X};
_ ->
lager:warning("Error parsing file '~s', ignoring",
[File]),
{File, []}
end || File <- Files].
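%% Each file in this rc.d-style config dir is expected to hold exactly
%% one term readable by file:consult/1 (a #p_srvr{} record). A file that
%% fails to parse is logged and mapped to {File, []}; that bogus [] is
%% later rejected and skipped by sanitize_p_srvr_records/1.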
sanitize_p_srvr_records(Ps) ->
{Sane, _} = lists:foldl(fun sanitize_p_srvr_rec/2, {[], dict:new()}, Ps),
Sane.
sanitize_p_srvr_rec(Whole, {Acc, D}) ->
try
#p_srvr{name=Name,
proto_mod=PMod,
address=Address,
port=Port,
props=Props} = Whole,
true = is_atom(Name),
NameK = {name, Name},
error = dict:find(NameK, D),
true = is_atom(PMod),
case code:is_loaded(PMod) of
{file, _} ->
ok;
_ ->
{module, _} = code:load_file(PMod),
ok
end,
if is_list(Address) -> ok;
is_tuple(Address) -> ok % Erlang-style IPv4 or IPv6
end,
true = is_integer(Port) andalso Port >= 1024 andalso Port =< 65534,
PortK = {port, Port},
error = dict:find(PortK, D),
true = is_list(Props),
%% All is sane enough.
D2 = dict:store(NameK, Name,
dict:store(PortK, Port, D)),
{[Whole|Acc], D2}
catch _:_ ->
_ = lager:log(error, self(),
"~s: Bad (or duplicate name/port) p_srvr record, "
"skipping: ~P\n",
[?MODULE, Whole, 15]),
{Acc, D}
end.

File diff suppressed because it is too large

View file

@ -1,156 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% @doc Creates a Merkle tree per file based on the checksum data for
%% a given data file.
%%
%% The `naive' implementation representation is:
%%
%% `<<Offset:64, Size:32, 0>>' for unwritten bytes
%% `<<Offset:64, Size:32, 1>>' for trimmed bytes
%% `<<Offset:64, Size:32, Csum/binary>>' for written bytes
%%
%% The tree feeds these leaf nodes into hashes representing chunks with a
%% minimum size of 1024 KB (1 MB); if the file is larger, we try to get
%% about 100 chunks for the first rollup, "Level 1". We aim for around 10
%% hashes at level 2, then 2 hashes at level 3, and finally the root.
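%%
%% For example (illustrative numbers only): a 10 MB file stays at the
%% 1 MB minimum chunk size, yielding 10 level-1 hashes, while a 1 GB
%% file uses max(1 MB, 1 GB div 100) = ~10 MB chunks, yielding about
%% 100 level-1 hashes.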
-module(machi_merkle_tree).
-include("machi.hrl").
-include("machi_merkle_tree.hrl").
-ifdef(TEST).
-compile(export_all).
-else.
-export([
open/2,
open/3,
tree/1,
filename/1,
diff/2
]).
-endif.
-define(TRIMMED, <<1>>).
-define(UNWRITTEN, <<0>>).
-define(NAIVE_ENCODE(Offset, Size, Data), <<Offset:64/unsigned-big, Size:32/unsigned-big, Data/binary>>).
-define(MINIMUM_CHUNK, 1048576). %% 1024 * 1024
-define(LEVEL_SIZE, 10).
-define(H, sha).
%% public API
open(Filename, DataDir) ->
open(Filename, DataDir, naive).
open(Filename, DataDir, Type) ->
Tree = load_filename(Filename, DataDir, Type),
{ok, #mt{ filename = Filename, tree = Tree, backend = Type}}.
tree(#mt{ tree = T, backend = naive }) ->
case T#naive.recalc of
true -> build_tree(T);
false -> T
end.
filename(#mt{ filename = F }) -> F.
diff(#mt{backend = naive, tree = T1}, #mt{backend = naive, tree = T2}) ->
case T1#naive.root == T2#naive.root of
true -> same;
false -> naive_diff(T1, T2)
end;
diff(_, _) -> error(badarg).
%% private
% @private
load_filename(Filename, DataDir, naive) ->
{Last, M} = do_load(Filename, DataDir, fun insert_csum_naive/2, []),
ChunkSize = max(?MINIMUM_CHUNK, Last div 100),
T = #naive{ leaves = lists:reverse(M), chunk_size = ChunkSize, recalc = true },
build_tree(T).
do_load(Filename, DataDir, FoldFun, AccInit) ->
CsumFile = machi_util:make_checksum_filename(DataDir, Filename),
{ok, T} = machi_csum_table:open(CsumFile, []),
Acc = machi_csum_table:foldl_chunks(FoldFun, {0, AccInit}, T),
ok = machi_csum_table:close(T),
Acc.
% @private
insert_csum_naive({Last, Size, _Csum}=In, {Last, MT}) ->
%% no gap
{Last+Size, update_acc(In, MT)};
insert_csum_naive({Offset, Size, _Csum}=In, {Last, MT}) ->
Hole = Offset - Last,
MT0 = update_acc({Last, Hole, unwritten}, MT),
{Offset+Size, update_acc(In, MT0)}.
% @private
update_acc({Offset, Size, unwritten}, MT) ->
[ {Offset, Size, ?NAIVE_ENCODE(Offset, Size, ?UNWRITTEN)} | MT ];
update_acc({Offset, Size, trimmed}, MT) ->
[ {Offset, Size, ?NAIVE_ENCODE(Offset, Size, ?TRIMMED)} | MT ];
update_acc({Offset, Size, <<_Tag:8, Csum/binary>>}, MT) ->
[ {Offset, Size, ?NAIVE_ENCODE(Offset, Size, Csum)} | MT ].
build_tree(MT = #naive{ leaves = L, chunk_size = ChunkSize }) ->
Lvl1s = build_level_1(ChunkSize, L, 1, [ crypto:hash_init(?H) ]),
Mod2 = length(Lvl1s) div ?LEVEL_SIZE,
Lvl2s = build_int_level(Mod2, Lvl1s, 1, [ crypto:hash_init(?H) ]),
Mod3 = length(Lvl2s) div 2,
Lvl3s = build_int_level(Mod3, Lvl2s, 1, [ crypto:hash_init(?H) ]),
Root = build_root(Lvl3s, crypto:hash_init(?H)),
MT#naive{ root = Root, lvl1 = Lvl1s, lvl2 = Lvl2s, lvl3 = Lvl3s, recalc = false }.
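%% In other words: level 1 hashes runs of leaves covering chunk_size
%% bytes apiece; level 2 folds the level-1 hashes together in groups of
%% `length(Lvl1s) div ?LEVEL_SIZE' (aiming for ~10 results); level 3
%% folds those in groups of `length(Lvl2s) div 2' (aiming for ~2); and
%% the root hashes whatever level 3 produced.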
build_root([], Ctx) ->
crypto:hash_final(Ctx);
build_root([H|T], Ctx) ->
build_root(T, crypto:hash_update(Ctx, H)).
build_int_level(_Mod, [], _Cnt, [ Ctx | Rest ]) ->
lists:reverse( [ crypto:hash_final(Ctx) | Rest ] );
build_int_level(Mod, [H|T], Cnt, [ Ctx | Rest ]) when Cnt rem Mod == 0 ->
NewCtx = crypto:hash_init(?H),
build_int_level(Mod, T, Cnt + 1, [ crypto:hash_update(NewCtx, H), crypto:hash_final(Ctx) | Rest ]);
build_int_level(Mod, [H|T], Cnt, [ Ctx | Rest ]) ->
build_int_level(Mod, T, Cnt+1, [ crypto:hash_update(Ctx, H) | Rest ]).
build_level_1(_Size, [], _Multiple, [ Ctx | Rest ]) ->
lists:reverse([ crypto:hash_final(Ctx) | Rest ]);
build_level_1(Size, [{Pos, Len, Hash}|T], Multiple, [ Ctx | Rest ])
when ( Pos + Len ) > ( Size * Multiple ) ->
NewCtx = crypto:hash_init(?H),
build_level_1(Size, T, Multiple+1,
[ crypto:hash_update(NewCtx, Hash), crypto:hash_final(Ctx) | Rest ]);
build_level_1(Size, [{Pos, Len, Hash}|T], Multiple, [ Ctx | Rest ])
when ( Pos + Len ) =< ( Size * Multiple ) ->
build_level_1(Size, T, Multiple, [ crypto:hash_update(Ctx, Hash) | Rest ]).
naive_diff(#naive{lvl1 = L1}, #naive{lvl1=L2, chunk_size=CS2}) ->
Set1 = gb_sets:from_list(lists:zip(lists:seq(1, length(L1)), L1)),
Set2 = gb_sets:from_list(lists:zip(lists:seq(1, length(L2)), L2)),
%% The byte ranges in list 2 that do not match in list 1
%% Or should we do something else?
[ {(X-1)*CS2, CS2, SHA} || {X, SHA} <- gb_sets:to_list(gb_sets:subtract(Set1, Set2)) ].
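%% A usage sketch (the data directories and repair/1 helper are
%% hypothetical): open the same file's checksum data from two replicas
%% and act on the level-1 byte ranges that differ.
%%
%%   {ok, T1} = machi_merkle_tree:open(Filename, "./data_a"),
%%   {ok, T2} = machi_merkle_tree:open(Filename, "./data_b"),
%%   case machi_merkle_tree:diff(T1, T2) of
%%       same   -> ok;
%%       Ranges -> repair(Ranges)          %% [{Offset, Size, SHA}]
%%   end.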

View file

@ -25,10 +25,6 @@
%% to a single socket connection, and there is no code to deal with %% to a single socket connection, and there is no code to deal with
%% multiple connections/load balancing/error handling to several/all %% multiple connections/load balancing/error handling to several/all
%% Machi cluster servers. %% Machi cluster servers.
%%
%% Please see the "Client API implementation notes" section of
%% {@link machi_flu1_client} for how this module relates to the rest of
%% the client API implementation.
-module(machi_pb_high_client). -module(machi_pb_high_client).
@ -44,7 +40,7 @@
auth/3, auth/4, auth/3, auth/4,
append_chunk/6, append_chunk/7, append_chunk/6, append_chunk/7,
write_chunk/5, write_chunk/6, write_chunk/5, write_chunk/6,
read_chunk/5, read_chunk/6, read_chunk/4, read_chunk/5,
trim_chunk/4, trim_chunk/5, trim_chunk/4, trim_chunk/5,
checksum_list/2, checksum_list/3, checksum_list/2, checksum_list/3,
list_files/1, list_files/2 list_files/1, list_files/2
@ -62,114 +58,54 @@
count=0 :: non_neg_integer() count=0 :: non_neg_integer()
}). }).
%% Official error types that are specific to Machi
-type machi_client_error_reason() :: bad_arg | wedged | bad_checksum |
partition | not_written | written |
trimmed | no_such_file | partial_read |
bad_epoch | inet:posix().
%% @doc Creates a client process
-spec start_link(p_srvr_dict()) -> {ok, pid()} | {error, machi_client_error_reason()}.
start_link(P_srvr_list) -> start_link(P_srvr_list) ->
gen_server:start_link(?MODULE, [P_srvr_list], []). gen_server:start_link(?MODULE, [P_srvr_list], []).
%% @doc Stops a client process.
-spec quit(pid()) -> ok.
quit(PidSpec) -> quit(PidSpec) ->
gen_server:call(PidSpec, quit, infinity). gen_server:call(PidSpec, quit, infinity).
connected_p(PidSpec) -> connected_p(PidSpec) ->
gen_server:call(PidSpec, connected_p, infinity). gen_server:call(PidSpec, connected_p, infinity).
-spec echo(pid(), string()) -> {ok, string()} | {error, machi_client_error_reason()}.
echo(PidSpec, String) -> echo(PidSpec, String) ->
echo(PidSpec, String, ?DEFAULT_TIMEOUT). echo(PidSpec, String, ?DEFAULT_TIMEOUT).
-spec echo(pid(), string(), non_neg_integer()) -> {ok, string()} | {error, machi_client_error_reason()}.
echo(PidSpec, String, Timeout) -> echo(PidSpec, String, Timeout) ->
send_sync(PidSpec, {echo, String}, Timeout). send_sync(PidSpec, {echo, String}, Timeout).
%% TODO: auth() is not implemented. Auth requires SSL, and this client %% TODO: auth() is not implemented. Auth requires SSL, and this client
%% doesn't support SSL yet. This is just a placeholder and reminder. %% doesn't support SSL yet. This is just a placeholder and reminder.
-spec auth(pid(), string(), string()) -> ok | {error, machi_client_error_reason()}.
auth(PidSpec, User, Pass) -> auth(PidSpec, User, Pass) ->
auth(PidSpec, User, Pass, ?DEFAULT_TIMEOUT). auth(PidSpec, User, Pass, ?DEFAULT_TIMEOUT).
-spec auth(pid(), string(), string(), non_neg_integer()) -> ok | {error, machi_client_error_reason()}.
auth(PidSpec, User, Pass, Timeout) -> auth(PidSpec, User, Pass, Timeout) ->
send_sync(PidSpec, {auth, User, Pass}, Timeout). send_sync(PidSpec, {auth, User, Pass}, Timeout).
-spec append_chunk(pid(), append_chunk(PidSpec, PlacementKey, Prefix, Chunk, CSum, ChunkExtra) ->
NS::machi_dt:namespace(), Prefix::machi_dt:file_prefix(), append_chunk(PidSpec, PlacementKey, Prefix, Chunk, CSum, ChunkExtra, ?DEFAULT_TIMEOUT).
Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum(),
Opts::machi_dt:append_opts()) ->
{ok, Filename::string(), Offset::machi_dt:file_offset()} |
{error, machi_client_error_reason()}.
append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts) ->
append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts, ?DEFAULT_TIMEOUT).
-spec append_chunk(pid(), append_chunk(PidSpec, PlacementKey, Prefix, Chunk, CSum, ChunkExtra, Timeout) ->
NS::machi_dt:namespace(), Prefix::machi_dt:file_prefix(), send_sync(PidSpec, {append_chunk, PlacementKey, Prefix, Chunk, CSum, ChunkExtra}, Timeout).
Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum(),
Opts::machi_dt:append_opts(),
Timeout::non_neg_integer()) ->
{ok, Filename::string(), Offset::machi_dt:file_offset()} |
{error, machi_client_error_reason()}.
append_chunk(PidSpec, NS, Prefix, Chunk, CSum, Opts, Timeout) ->
send_sync(PidSpec, {append_chunk, NS, Prefix, Chunk, CSum, Opts}, Timeout).
-spec write_chunk(pid(), File::string(), machi_dt:file_offset(),
Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum()) ->
ok | {error, machi_client_error_reason()}.
write_chunk(PidSpec, File, Offset, Chunk, CSum) -> write_chunk(PidSpec, File, Offset, Chunk, CSum) ->
write_chunk(PidSpec, File, Offset, Chunk, CSum, ?DEFAULT_TIMEOUT). write_chunk(PidSpec, File, Offset, Chunk, CSum, ?DEFAULT_TIMEOUT).
-spec write_chunk(pid(), File::string(), machi_dt:file_offset(),
Chunk::machi_dt:chunk(), CSum::machi_dt:chunk_csum(), Timeout::non_neg_integer()) ->
ok | {error, machi_client_error_reason()}.
write_chunk(PidSpec, File, Offset, Chunk, CSum, Timeout) -> write_chunk(PidSpec, File, Offset, Chunk, CSum, Timeout) ->
send_sync(PidSpec, {write_chunk, File, Offset, Chunk, CSum}, Timeout). send_sync(PidSpec, {write_chunk, File, Offset, Chunk, CSum}, Timeout).
%% @doc Tries to read a chunk of a specified file. It returns `{ok, read_chunk(PidSpec, File, Offset, Size) ->
%% {Chunks, TrimmedChunks}}' for a live file, while it returns `{error, read_chunk(PidSpec, File, Offset, Size, ?DEFAULT_TIMEOUT).
%% trimmed}' if all bytes of the file were trimmed.
-spec read_chunk(pid(), File::string(), machi_dt:file_offset(), machi_dt:chunk_size(),
machi_dt:read_opts_x()) ->
{ok, {Chunks::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size(), binary()}],
Trimmed::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size()}]}} |
{error, machi_client_error_reason()}.
read_chunk(PidSpec, File, Offset, Size, Opts) ->
read_chunk(PidSpec, File, Offset, Size, Opts, ?DEFAULT_TIMEOUT).
-spec read_chunk(pid(), File::string(), machi_dt:file_offset(), machi_dt:chunk_size(), read_chunk(PidSpec, File, Offset, Size, Timeout) ->
machi_dt:read_opts_x(), send_sync(PidSpec, {read_chunk, File, Offset, Size}, Timeout).
Timeout::non_neg_integer()) ->
{ok, {Chunks::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size(), binary()}],
Trimmed::[{File::string(), machi_dt:file_offset(), machi_dt:chunk_size()}]}} |
{error, machi_client_error_reason()}.
read_chunk(PidSpec, File, Offset, Size, Opts0, Timeout) ->
Opts = machi_util:read_opts_default(Opts0),
send_sync(PidSpec, {read_chunk, File, Offset, Size, Opts}, Timeout).
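%% A usage sketch of read_chunk/6 (the client pid and file name are
%% hypothetical, and passing a default-initialized #read_opts{} is an
%% assumption about acceptable option defaults):
%%
%%   {ok, {Chunks, Trimmed}} =
%%       machi_pb_high_client:read_chunk(Clnt, File, 0, 4096,
%%                                       #read_opts{}, 5000),
%%   [{_File, _Offset, _Bytes, _Csum} | _] = Chunks.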
%% @doc Trims an arbitrary binary range of any file. If any byte in the
%% specified range has already been trimmed, it fails and returns
%% `{error, trimmed}'. Otherwise it trims all bytes in that range. If
%% there are overlapping chunks with client-specified checksums, they
%% will be cut off and their checksums re-calculated on the server side.
%% TODO: Add an option specifying whether to trigger GC.
-spec trim_chunk(pid(), string(), non_neg_integer(), machi_dt:chunk_size()) ->
ok | {error, machi_client_error_reason()}.
trim_chunk(PidSpec, File, Offset, Size) -> trim_chunk(PidSpec, File, Offset, Size) ->
trim_chunk(PidSpec, File, Offset, Size, ?DEFAULT_TIMEOUT). trim_chunk(PidSpec, File, Offset, Size, ?DEFAULT_TIMEOUT).
trim_chunk(PidSpec, File, Offset, Size, Timeout) -> trim_chunk(PidSpec, File, Offset, Size, Timeout) ->
send_sync(PidSpec, {trim_chunk, File, Offset, Size}, Timeout). send_sync(PidSpec, {trim_chunk, File, Offset, Size}, Timeout).
%% @doc Returns a binary that has the checksums and chunks encoded inside
%% (this is because encoding and decoding them individually is
%% inefficient). TODO: return a structured list of them.
-spec checksum_list(pid(), string()) -> {ok, binary()} | {error, machi_client_error_reason()}.
checksum_list(PidSpec, File) -> checksum_list(PidSpec, File) ->
checksum_list(PidSpec, File, ?DEFAULT_TIMEOUT). checksum_list(PidSpec, File, ?DEFAULT_TIMEOUT).
@ -289,19 +225,19 @@ do_send_sync2({auth, User, Pass}, #state{sock=Sock}=S) ->
Res = {bummer, {X, Y, erlang:get_stacktrace()}}, Res = {bummer, {X, Y, erlang:get_stacktrace()}},
{Res, S} {Res, S}
end; end;
do_send_sync2({append_chunk, NS, Prefix, Chunk, CSum, Opts}, do_send_sync2({append_chunk, PlacementKey, Prefix, Chunk, CSum, ChunkExtra},
#state{sock=Sock, sock_id=Index, count=Count}=S) -> #state{sock=Sock, sock_id=Index, count=Count}=S) ->
try try
ReqID = <<Index:64/big, Count:64/big>>, ReqID = <<Index:64/big, Count:64/big>>,
PK = if PlacementKey == <<>> -> undefined;
true -> PlacementKey
end,
CSumT = convert_csum_req(CSum, Chunk), CSumT = convert_csum_req(CSum, Chunk),
{ChunkExtra, Pref, FailPref} = machi_pb_translate:conv_from_append_opts(Opts), Req = #mpb_appendchunkreq{placement_key=PK,
Req = #mpb_appendchunkreq{namespace=NS,
prefix=Prefix, prefix=Prefix,
chunk=Chunk, chunk=Chunk,
csum=CSumT, csum=CSumT,
chunk_extra=ChunkExtra, chunk_extra=ChunkExtra},
preferred_file_name=Pref,
flag_fail_preferred=FailPref},
R1a = #mpb_request{req_id=ReqID, do_not_alter=1, R1a = #mpb_request{req_id=ReqID, do_not_alter=1,
append_chunk=Req}, append_chunk=Req},
Bin1a = machi_pb:encode_mpb_request(R1a), Bin1a = machi_pb:encode_mpb_request(R1a),
@ -324,11 +260,10 @@ do_send_sync2({write_chunk, File, Offset, Chunk, CSum},
try try
ReqID = <<Index:64/big, Count:64/big>>, ReqID = <<Index:64/big, Count:64/big>>,
CSumT = convert_csum_req(CSum, Chunk), CSumT = convert_csum_req(CSum, Chunk),
Req = #mpb_writechunkreq{chunk= Req = #mpb_writechunkreq{file=File,
#mpb_chunk{chunk=Chunk,
file_name=File,
offset=Offset, offset=Offset,
csum=CSumT}}, chunk=Chunk,
csum=CSumT},
R1a = #mpb_request{req_id=ReqID, do_not_alter=1, R1a = #mpb_request{req_id=ReqID, do_not_alter=1,
write_chunk=Req}, write_chunk=Req},
Bin1a = machi_pb:encode_mpb_request(R1a), Bin1a = machi_pb:encode_mpb_request(R1a),
@ -346,19 +281,13 @@ do_send_sync2({write_chunk, File, Offset, Chunk, CSum},
Res = {bummer, {X, Y, erlang:get_stacktrace()}}, Res = {bummer, {X, Y, erlang:get_stacktrace()}},
{Res, S#state{count=Count+1}} {Res, S#state{count=Count+1}}
end; end;
do_send_sync2({read_chunk, File, Offset, Size, Opts}, do_send_sync2({read_chunk, File, Offset, Size},
#state{sock=Sock, sock_id=Index, count=Count}=S) -> #state{sock=Sock, sock_id=Index, count=Count}=S) ->
try try
ReqID = <<Index:64/big, Count:64/big>>, ReqID = <<Index:64/big, Count:64/big>>,
#read_opts{no_checksum=FlagNoChecksum, Req = #mpb_readchunkreq{file=File,
no_chunk=FlagNoChunk,
needs_trimmed=NeedsTrimmed} = Opts,
Req = #mpb_readchunkreq{chunk_pos=#mpb_chunkpos{file_name=File,
offset=Offset, offset=Offset,
chunk_size=Size}, size=Size},
flag_no_checksum=machi_util:bool2int(FlagNoChecksum),
flag_no_chunk=machi_util:bool2int(FlagNoChunk),
flag_needs_trimmed=machi_util:bool2int(NeedsTrimmed)},
R1a = #mpb_request{req_id=ReqID, do_not_alter=1, R1a = #mpb_request{req_id=ReqID, do_not_alter=1,
read_chunk=Req}, read_chunk=Req},
Bin1a = machi_pb:encode_mpb_request(R1a), Bin1a = machi_pb:encode_mpb_request(R1a),
@ -380,9 +309,9 @@ do_send_sync2({trim_chunk, File, Offset, Size},
#state{sock=Sock, sock_id=Index, count=Count}=S) -> #state{sock=Sock, sock_id=Index, count=Count}=S) ->
try try
ReqID = <<Index:64/big, Count:64/big>>, ReqID = <<Index:64/big, Count:64/big>>,
Req = #mpb_trimchunkreq{chunk_pos=#mpb_chunkpos{file_name=File, Req = #mpb_trimchunkreq{file=File,
offset=Offset, offset=Offset,
chunk_size=Size}}, size=Size},
R1a = #mpb_request{req_id=ReqID, do_not_alter=1, R1a = #mpb_request{req_id=ReqID, do_not_alter=1,
trim_chunk=Req}, trim_chunk=Req},
Bin1a = machi_pb:encode_mpb_request(R1a), Bin1a = machi_pb:encode_mpb_request(R1a),
@ -445,15 +374,9 @@ do_send_sync2({list_files},
{Res, S#state{count=Count+1}} {Res, S#state{count=Count+1}}
end. end.
%% We only convert the checksum types that make sense here:
%% none or client_sha. None of the other types should be sent
%% to us via the PB high protocol.
convert_csum_req(none, Chunk) -> convert_csum_req(none, Chunk) ->
#mpb_chunkcsum{type='CSUM_TAG_CLIENT_SHA', #mpb_chunkcsum{type='CSUM_TAG_CLIENT_SHA',
csum=machi_util:checksum_chunk(Chunk)}; csum=machi_util:checksum_chunk(Chunk)};
convert_csum_req(<<>>, Chunk) ->
convert_csum_req(none, Chunk);
convert_csum_req({client_sha, CSumBin}, _Chunk) -> convert_csum_req({client_sha, CSumBin}, _Chunk) ->
#mpb_chunkcsum{type='CSUM_TAG_CLIENT_SHA', #mpb_chunkcsum{type='CSUM_TAG_CLIENT_SHA',
csum=CSumBin}. csum=CSumBin}.
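%% For example: convert_csum_req(none, Chunk) computes a SHA over the
%% chunk on the caller's behalf, while convert_csum_req({client_sha,
%% CSumBin}, _Chunk) passes the caller-supplied digest through
%% unchanged. Both results carry the 'CSUM_TAG_CLIENT_SHA' type.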
@ -478,8 +401,6 @@ convert_general_status_code('NOT_WRITTEN') ->
{error, not_written}; {error, not_written};
convert_general_status_code('WRITTEN') -> convert_general_status_code('WRITTEN') ->
{error, written}; {error, written};
convert_general_status_code('TRIMMED') ->
{error, trimmed};
convert_general_status_code('NO_SUCH_FILE') -> convert_general_status_code('NO_SUCH_FILE') ->
{error, no_such_file}; {error, no_such_file};
convert_general_status_code('PARTIAL_READ') -> convert_general_status_code('PARTIAL_READ') ->
@ -494,21 +415,8 @@ convert_write_chunk_resp(#mpb_writechunkresp{status='OK'}) ->
convert_write_chunk_resp(#mpb_writechunkresp{status=Status}) -> convert_write_chunk_resp(#mpb_writechunkresp{status=Status}) ->
convert_general_status_code(Status). convert_general_status_code(Status).
convert_read_chunk_resp(#mpb_readchunkresp{status='OK', chunks=PB_Chunks, trimmed=PB_Trimmed}) -> convert_read_chunk_resp(#mpb_readchunkresp{status='OK', chunk=Chunk}) ->
Chunks = lists:map(fun(#mpb_chunk{offset=Offset, {ok, Chunk};
file_name=File,
chunk=Chunk,
csum=#mpb_chunkcsum{type=T, csum=Ck}}) ->
%% TODO: cleanup export
Csum = <<(machi_pb_translate:conv_to_csum_tag(T)):8, Ck/binary>>,
{list_to_binary(File), Offset, Chunk, Csum}
end, PB_Chunks),
Trimmed = lists:map(fun(#mpb_chunkpos{file_name=File,
offset=Offset,
chunk_size=Size}) ->
{list_to_binary(File), Offset, Size}
end, PB_Trimmed),
{ok, {Chunks, Trimmed}};
convert_read_chunk_resp(#mpb_readchunkresp{status=Status}) -> convert_read_chunk_resp(#mpb_readchunkresp{status=Status}) ->
convert_general_status_code(Status). convert_general_status_code(Status).

View file

@ -15,8 +15,8 @@
%% KIND, either express or implied. See the License for the %% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations %% specific language governing permissions and limitations
%% under the License. %% under the License.
%% -------------------------------------------------------------------
%% %%
%% -------------------------------------------------------------------
-module(machi_pb_translate). -module(machi_pb_translate).
@ -34,115 +34,85 @@
-export([from_pb_request/1, -export([from_pb_request/1,
from_pb_response/1, from_pb_response/1,
to_pb_request/2, to_pb_request/2,
to_pb_response/3, to_pb_response/3
conv_from_append_opts/1,
conv_to_append_opts/1
]). ]).
%% TODO: fixme cleanup
-export([conv_to_csum_tag/1]).
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
echo=#mpb_echoreq{message=Msg}}) -> echo=#mpb_echoreq{message=Msg}}) ->
{ReqID, {low_skip_wedge, {low_echo, Msg}}}; {ReqID, {low_echo, undefined, Msg}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
auth=#mpb_authreq{user=User, password=Pass}}) -> auth=#mpb_authreq{user=User, password=Pass}}) ->
{ReqID, {low_skip_wedge, {low_auth, User, Pass}}}; {ReqID, {low_auth, undefined, User, Pass}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
append_chunk=IR=#mpb_ll_appendchunkreq{ append_chunk=#mpb_ll_appendchunkreq{
namespace_version=NSVersion,
namespace=NS_str,
locator=NSLocator,
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
placement_key=PKey,
prefix=Prefix, prefix=Prefix,
chunk=Chunk, chunk=Chunk,
csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}}}) -> csum=#mpb_chunkcsum{type=CSum_type, csum=CSum},
NS = list_to_binary(NS_str), chunk_extra=ChunkExtra}}) ->
EpochID = conv_to_epoch_id(PB_EpochID), EpochID = conv_to_epoch_id(PB_EpochID),
CSum_tag = conv_to_csum_tag(CSum_type), CSum_tag = conv_to_csum_tag(CSum_type),
Opts = conv_to_append_opts(IR), {ReqID, {low_append_chunk, EpochID, PKey, Prefix, Chunk, CSum_tag, CSum,
%% NOTE: The tuple position of NSLocator is a bit odd, because EpochID ChunkExtra}};
%% _must_ be in the 4th position (as NSV & NS must be in 2nd & 3rd).
{ReqID, {low_append_chunk, NSVersion, NS, EpochID, NSLocator,
Prefix, Chunk, CSum_tag, CSum, Opts}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
write_chunk=#mpb_ll_writechunkreq{ write_chunk=#mpb_ll_writechunkreq{
namespace_version=NSVersion,
namespace=NS_str,
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
chunk=#mpb_chunk{file_name=File, file=File,
offset=Offset, offset=Offset,
chunk=Chunk, chunk=Chunk,
csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}}}}) -> csum=#mpb_chunkcsum{type=CSum_type, csum=CSum}}}) ->
NS = list_to_binary(NS_str),
EpochID = conv_to_epoch_id(PB_EpochID), EpochID = conv_to_epoch_id(PB_EpochID),
CSum_tag = conv_to_csum_tag(CSum_type), CSum_tag = conv_to_csum_tag(CSum_type),
{ReqID, {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}}; {ReqID, {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag, CSum}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
read_chunk=#mpb_ll_readchunkreq{ read_chunk=#mpb_ll_readchunkreq{
namespace_version=NSVersion,
namespace=NS_str,
epoch_id=PB_EpochID,
chunk_pos=ChunkPos,
flag_no_checksum=PB_GetNoChecksum,
flag_no_chunk=PB_GetNoChunk,
flag_needs_trimmed=PB_NeedsTrimmed}}) ->
NS = list_to_binary(NS_str),
EpochID = conv_to_epoch_id(PB_EpochID),
Opts = #read_opts{no_checksum=PB_GetNoChecksum,
no_chunk=PB_GetNoChunk,
needs_trimmed=PB_NeedsTrimmed},
#mpb_chunkpos{file_name=File,
offset=Offset,
chunk_size=Size} = ChunkPos,
{ReqID, {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size, Opts}};
from_pb_request(#mpb_ll_request{
req_id=ReqID,
trim_chunk=#mpb_ll_trimchunkreq{
namespace_version=NSVersion,
namespace=NS_str,
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
file=File, file=File,
offset=Offset, offset=Offset,
size=Size, size=Size,
trigger_gc=TriggerGC}}) -> flag_no_checksum=PB_GetNoChecksum,
NS = list_to_binary(NS_str), flag_no_chunk=PB_GetNoChunk}}) ->
EpochID = conv_to_epoch_id(PB_EpochID), EpochID = conv_to_epoch_id(PB_EpochID),
{ReqID, {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, TriggerGC}}; Opts = [{no_checksum, conv_to_boolean(PB_GetNoChecksum)},
{no_chunk, conv_to_boolean(PB_GetNoChunk)}],
{ReqID, {low_read_chunk, EpochID, File, Offset, Size, Opts}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
checksum_list=#mpb_ll_checksumlistreq{ checksum_list=#mpb_ll_checksumlistreq{
epoch_id=PB_EpochID,
file=File}}) -> file=File}}) ->
{ReqID, {low_skip_wedge, {low_checksum_list, File}}}; EpochID = conv_to_epoch_id(PB_EpochID),
{ReqID, {low_checksum_list, EpochID, File}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
list_files=#mpb_ll_listfilesreq{ list_files=#mpb_ll_listfilesreq{
epoch_id=PB_EpochID}}) -> epoch_id=PB_EpochID}}) ->
EpochID = conv_to_epoch_id(PB_EpochID), EpochID = conv_to_epoch_id(PB_EpochID),
{ReqID, {low_skip_wedge, {low_list_files, EpochID}}}; {ReqID, {low_list_files, EpochID}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
wedge_status=#mpb_ll_wedgestatusreq{}}) -> wedge_status=#mpb_ll_wedgestatusreq{}}) ->
{ReqID, {low_skip_wedge, {low_wedge_status}}}; {ReqID, {low_wedge_status, undefined}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
delete_migration=#mpb_ll_deletemigrationreq{ delete_migration=#mpb_ll_deletemigrationreq{
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
file=File}}) -> file=File}}) ->
EpochID = conv_to_epoch_id(PB_EpochID), EpochID = conv_to_epoch_id(PB_EpochID),
{ReqID, {low_skip_wedge, {low_delete_migration, EpochID, File}}}; {ReqID, {low_delete_migration, EpochID, File}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
trunc_hack=#mpb_ll_trunchackreq{ trunc_hack=#mpb_ll_trunchackreq{
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
file=File}}) -> file=File}}) ->
EpochID = conv_to_epoch_id(PB_EpochID), EpochID = conv_to_epoch_id(PB_EpochID),
{ReqID, {low_skip_wedge, {low_trunc_hack, EpochID, File}}}; {ReqID, {low_trunc_hack, EpochID, File}};
from_pb_request(#mpb_ll_request{ from_pb_request(#mpb_ll_request{
req_id=ReqID, req_id=ReqID,
proj_gl=#mpb_ll_getlatestepochidreq{type=ProjType}}) -> proj_gl=#mpb_ll_getlatestepochidreq{type=ProjType}}) ->
@ -183,39 +153,33 @@ from_pb_request(#mpb_request{req_id=ReqID,
{ReqID, {high_auth, User, Pass}}; {ReqID, {high_auth, User, Pass}};
from_pb_request(#mpb_request{req_id=ReqID, from_pb_request(#mpb_request{req_id=ReqID,
append_chunk=IR=#mpb_appendchunkreq{}}) -> append_chunk=IR=#mpb_appendchunkreq{}}) ->
#mpb_appendchunkreq{namespace=NS_str, #mpb_appendchunkreq{placement_key=__todoPK,
prefix=Prefix, prefix=Prefix,
chunk=Chunk, chunk=Chunk,
csum=CSum} = IR, csum=CSum,
NS = list_to_binary(NS_str), chunk_extra=ChunkExtra} = IR,
TaggedCSum = make_tagged_csum(CSum, Chunk), TaggedCSum = make_tagged_csum(CSum, Chunk),
Opts = conv_to_append_opts(IR), {ReqID, {high_append_chunk, __todoPK, Prefix, Chunk, TaggedCSum,
{ReqID, {high_append_chunk, NS, Prefix, Chunk, TaggedCSum, Opts}}; ChunkExtra}};
from_pb_request(#mpb_request{req_id=ReqID, from_pb_request(#mpb_request{req_id=ReqID,
write_chunk=IR=#mpb_writechunkreq{}}) -> write_chunk=IR=#mpb_writechunkreq{}}) ->
#mpb_writechunkreq{chunk=#mpb_chunk{file_name=File, #mpb_writechunkreq{file=File,
offset=Offset, offset=Offset,
chunk=Chunk, chunk=Chunk,
csum=CSumRec}} = IR, csum=CSum} = IR,
CSum = make_tagged_csum(CSumRec, Chunk), TaggedCSum = make_tagged_csum(CSum, Chunk),
{ReqID, {high_write_chunk, File, Offset, Chunk, CSum}}; {ReqID, {high_write_chunk, File, Offset, Chunk, TaggedCSum}};
from_pb_request(#mpb_request{req_id=ReqID, from_pb_request(#mpb_request{req_id=ReqID,
read_chunk=IR=#mpb_readchunkreq{}}) -> read_chunk=IR=#mpb_readchunkreq{}}) ->
#mpb_readchunkreq{chunk_pos=#mpb_chunkpos{file_name=File, #mpb_readchunkreq{file=File,
offset=Offset, offset=Offset,
chunk_size=Size}, size=Size} = IR,
flag_no_checksum=FlagNoChecksum, {ReqID, {high_read_chunk, File, Offset, Size}};
flag_no_chunk=FlagNoChunk,
flag_needs_trimmed=NeedsTrimmed} = IR,
Opts = #read_opts{no_checksum=FlagNoChecksum,
no_chunk=FlagNoChunk,
needs_trimmed=NeedsTrimmed},
{ReqID, {high_read_chunk, File, Offset, Size, Opts}};
from_pb_request(#mpb_request{req_id=ReqID, from_pb_request(#mpb_request{req_id=ReqID,
trim_chunk=IR=#mpb_trimchunkreq{}}) -> trim_chunk=IR=#mpb_trimchunkreq{}}) ->
#mpb_trimchunkreq{chunk_pos=#mpb_chunkpos{file_name=File, #mpb_trimchunkreq{file=File,
offset=Offset, offset=Offset,
chunk_size=Size}} = IR, size=Size} = IR,
{ReqID, {high_trim_chunk, File, Offset, Size}}; {ReqID, {high_trim_chunk, File, Offset, Size}};
from_pb_request(#mpb_request{req_id=ReqID, from_pb_request(#mpb_request{req_id=ReqID,
checksum_list=IR=#mpb_checksumlistreq{}}) -> checksum_list=IR=#mpb_checksumlistreq{}}) ->
@ -265,30 +229,13 @@ from_pb_response(#mpb_ll_response{
from_pb_response(#mpb_ll_response{ from_pb_response(#mpb_ll_response{
req_id=ReqID, req_id=ReqID,
read_chunk=#mpb_ll_readchunkresp{status=Status, read_chunk=#mpb_ll_readchunkresp{status=Status,
chunks=PB_Chunks, chunk=Chunk}}) ->
trimmed=PB_Trimmed}}) ->
case Status of case Status of
'OK' -> 'OK' ->
Chunks = lists:map(fun(#mpb_chunk{file_name=File, {ReqID, {ok, Chunk}};
offset=Offset,
chunk=Bytes,
csum=#mpb_chunkcsum{type=T,csum=Ck}}) ->
Csum = <<(conv_to_csum_tag(T)):8, Ck/binary>>,
{list_to_binary(File), Offset, Bytes, Csum}
end, PB_Chunks),
Trimmed = lists:map(fun(#mpb_chunkpos{file_name=File,
offset=Offset,
chunk_size=Size}) ->
{list_to_binary(File), Offset, Size}
end, PB_Trimmed),
{ReqID, {ok, {Chunks, Trimmed}}};
_ -> _ ->
{ReqID, machi_pb_high_client:convert_general_status_code(Status)} {ReqID, machi_pb_high_client:convert_general_status_code(Status)}
end; end;
from_pb_response(#mpb_ll_response{
req_id=ReqID,
trim_chunk=#mpb_ll_trimchunkresp{status=Status}}) ->
{ReqID, machi_pb_high_client:convert_general_status_code(Status)};
from_pb_response(#mpb_ll_response{ from_pb_response(#mpb_ll_response{
req_id=ReqID, req_id=ReqID,
checksum_list=#mpb_ll_checksumlistresp{ checksum_list=#mpb_ll_checksumlistresp{
@ -315,16 +262,12 @@ from_pb_response(#mpb_ll_response{
from_pb_response(#mpb_ll_response{ from_pb_response(#mpb_ll_response{
req_id=ReqID, req_id=ReqID,
wedge_status=#mpb_ll_wedgestatusresp{ wedge_status=#mpb_ll_wedgestatusresp{
status=Status, epoch_id=PB_EpochID, wedged_flag=PB_Wedged}}) ->
epoch_id=PB_EpochID, wedged_flag=Wedged_p,
namespace_version=NSVersion, namespace=NS_str}}) ->
GeneralStatus = case machi_pb_high_client:convert_general_status_code(Status) of
ok -> ok;
_Else -> {yukky, _Else}
end,
EpochID = conv_to_epoch_id(PB_EpochID), EpochID = conv_to_epoch_id(PB_EpochID),
NS = list_to_binary(NS_str), Wedged_p = if PB_Wedged == 1 -> true;
{ReqID, {GeneralStatus, {Wedged_p, EpochID, NSVersion, NS}}}; PB_Wedged == 0 -> false
end,
{ReqID, {ok, {Wedged_p, EpochID}}};
from_pb_response(#mpb_ll_response{ from_pb_response(#mpb_ll_response{
req_id=ReqID, req_id=ReqID,
delete_migration=#mpb_ll_deletemigrationresp{ delete_migration=#mpb_ll_deletemigrationresp{
@ -390,100 +333,74 @@ from_pb_response(#mpb_ll_response{
'OK' -> 'OK' ->
{ReqID, {ok, Epochs}}; {ReqID, {ok, Epochs}};
_ -> _ ->
{ReqID, machi_pb_high_client:convert_general_status_code(Status)} {ReqID, machi_pb_high_client:convert_general_status_code(Status)}
end. end.
%% No response for proj_kp/kick_projection_reaction %% No response for proj_kp/kick_projection_reaction
%% TODO: move the #mbp_* record making code from %% TODO: move the #mbp_* record making code from
%% machi_pb_high_client:do_send_sync() clauses into to_pb_request(). %% machi_pb_high_client:do_send_sync() clauses into to_pb_request().
to_pb_request(ReqID, {low_skip_wedge, {low_echo, Msg}}) -> to_pb_request(ReqID, {low_echo, _BogusEpochID, Msg}) ->
#mpb_ll_request{ #mpb_ll_request{
req_id=ReqID, do_not_alter=2, req_id=ReqID, do_not_alter=2,
echo=#mpb_echoreq{message=Msg}}; echo=#mpb_echoreq{message=Msg}};
to_pb_request(ReqID, {low_skip_wedge, {low_auth, User, Pass}}) -> to_pb_request(ReqID, {low_auth, _BogusEpochID, User, Pass}) ->
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
auth=#mpb_authreq{user=User, password=Pass}}; auth=#mpb_authreq{user=User, password=Pass}};
%% NOTE: The tuple position of NSLocator is a bit odd, because EpochID to_pb_request(ReqID, {low_append_chunk, EpochID, PKey, Prefix, Chunk,
%% _must_ be in the 4th position (as NSV & NS must be in 2nd & 3rd). CSum_tag, CSum, ChunkExtra}) ->
to_pb_request(ReqID, {low_append_chunk, NSVersion, NS, EpochID, NSLocator,
Prefix, Chunk, CSum_tag, CSum, Opts}) ->
PB_EpochID = conv_from_epoch_id(EpochID), PB_EpochID = conv_from_epoch_id(EpochID),
CSum_type = conv_from_csum_tag(CSum_tag), CSum_type = conv_from_csum_tag(CSum_tag),
PB_CSum = #mpb_chunkcsum{type=CSum_type, csum=CSum}, PB_CSum = #mpb_chunkcsum{type=CSum_type, csum=CSum},
{ChunkExtra, Pref, FailPref} = conv_from_append_opts(Opts),
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
append_chunk=#mpb_ll_appendchunkreq{ append_chunk=#mpb_ll_appendchunkreq{
namespace_version=NSVersion,
namespace=NS,
locator=NSLocator,
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
placement_key=PKey,
prefix=Prefix, prefix=Prefix,
chunk=Chunk, chunk=Chunk,
csum=PB_CSum, csum=PB_CSum,
chunk_extra=ChunkExtra, chunk_extra=ChunkExtra}};
preferred_file_name=Pref, to_pb_request(ReqID, {low_write_chunk, EpochID, File, Offset, Chunk, CSum_tag, CSum}) ->
flag_fail_preferred=FailPref}};
to_pb_request(ReqID, {low_write_chunk, NSVersion, NS, EpochID, File, Offset, Chunk, CSum_tag, CSum}) ->
PB_EpochID = conv_from_epoch_id(EpochID), PB_EpochID = conv_from_epoch_id(EpochID),
CSum_type = conv_from_csum_tag(CSum_tag), CSum_type = conv_from_csum_tag(CSum_tag),
PB_CSum = #mpb_chunkcsum{type=CSum_type, csum=CSum}, PB_CSum = #mpb_chunkcsum{type=CSum_type, csum=CSum},
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
write_chunk=#mpb_ll_writechunkreq{ write_chunk=#mpb_ll_writechunkreq{
namespace_version=NSVersion,
namespace=NS,
epoch_id=PB_EpochID,
chunk=#mpb_chunk{file_name=File,
offset=Offset,
chunk=Chunk,
csum=PB_CSum}}};
to_pb_request(ReqID, {low_read_chunk, NSVersion, NS, EpochID, File, Offset, Size, Opts}) ->
PB_EpochID = conv_from_epoch_id(EpochID),
#read_opts{no_checksum=FNChecksum,
no_chunk=FNChunk,
needs_trimmed=NeedsTrimmed} = Opts,
#mpb_ll_request{
req_id=ReqID, do_not_alter=2,
read_chunk=#mpb_ll_readchunkreq{
namespace_version=NSVersion,
namespace=NS,
epoch_id=PB_EpochID,
chunk_pos=#mpb_chunkpos{
file_name=File,
offset=Offset,
chunk_size=Size},
flag_no_checksum=FNChecksum,
flag_no_chunk=FNChunk,
flag_needs_trimmed=NeedsTrimmed}};
to_pb_request(ReqID, {low_trim_chunk, NSVersion, NS, EpochID, File, Offset, Size, TriggerGC}) ->
PB_EpochID = conv_from_epoch_id(EpochID),
#mpb_ll_request{req_id=ReqID, do_not_alter=2,
trim_chunk=#mpb_ll_trimchunkreq{
namespace_version=NSVersion,
namespace=NS,
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
file=File, file=File,
offset=Offset, offset=Offset,
size=Size, chunk=Chunk,
trigger_gc=TriggerGC}}; csum=PB_CSum}};
to_pb_request(ReqID, {low_skip_wedge, {low_checksum_list, File}}) -> to_pb_request(ReqID, {low_read_chunk, EpochID, File, Offset, Size, _Opts}) ->
%% TODO: stop ignoring Opts ^_^
PB_EpochID = conv_from_epoch_id(EpochID),
#mpb_ll_request{
req_id=ReqID, do_not_alter=2,
read_chunk=#mpb_ll_readchunkreq{
epoch_id=PB_EpochID,
file=File,
offset=Offset,
size=Size}};
to_pb_request(ReqID, {low_checksum_list, EpochID, File}) ->
PB_EpochID = conv_from_epoch_id(EpochID),
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
checksum_list=#mpb_ll_checksumlistreq{ checksum_list=#mpb_ll_checksumlistreq{
epoch_id=PB_EpochID,
file=File}}; file=File}};
to_pb_request(ReqID, {low_skip_wedge, {low_list_files, EpochID}}) -> to_pb_request(ReqID, {low_list_files, EpochID}) ->
PB_EpochID = conv_from_epoch_id(EpochID), PB_EpochID = conv_from_epoch_id(EpochID),
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
list_files=#mpb_ll_listfilesreq{epoch_id=PB_EpochID}}; list_files=#mpb_ll_listfilesreq{epoch_id=PB_EpochID}};
to_pb_request(ReqID, {low_skip_wedge, {low_wedge_status}}) -> to_pb_request(ReqID, {low_wedge_status, _BogusEpochID}) ->
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
wedge_status=#mpb_ll_wedgestatusreq{}}; wedge_status=#mpb_ll_wedgestatusreq{}};
to_pb_request(ReqID, {low_skip_wedge, {low_delete_migration, EpochID, File}}) -> to_pb_request(ReqID, {low_delete_migration, EpochID, File}) ->
PB_EpochID = conv_from_epoch_id(EpochID), PB_EpochID = conv_from_epoch_id(EpochID),
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
delete_migration=#mpb_ll_deletemigrationreq{ delete_migration=#mpb_ll_deletemigrationreq{
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
file=File}}; file=File}};
to_pb_request(ReqID, {low_skip_wedge, {low_trunc_hack, EpochID, File}}) -> to_pb_request(ReqID, {low_trunc_hack, EpochID, File}) ->
PB_EpochID = conv_from_epoch_id(EpochID), PB_EpochID = conv_from_epoch_id(EpochID),
#mpb_ll_request{req_id=ReqID, do_not_alter=2, #mpb_ll_request{req_id=ReqID, do_not_alter=2,
trunc_hack=#mpb_ll_trunchackreq{ trunc_hack=#mpb_ll_trunchackreq{
@@ -519,15 +436,15 @@ to_pb_response(_ReqID, _, async_no_response=X) ->
X; X;
to_pb_response(ReqID, _, {low_error, ErrCode, ErrMsg}) -> to_pb_response(ReqID, _, {low_error, ErrCode, ErrMsg}) ->
make_ll_error_resp(ReqID, ErrCode, ErrMsg); make_ll_error_resp(ReqID, ErrCode, ErrMsg);
to_pb_response(ReqID, {low_skip_wedge, {low_echo, _Msg}}, Resp) -> to_pb_response(ReqID, {low_echo, _BogusEpochID, _Msg}, Resp) ->
#mpb_ll_response{ #mpb_ll_response{
req_id=ReqID, req_id=ReqID,
echo=#mpb_echoresp{message=Resp}}; echo=#mpb_echoresp{message=Resp}};
to_pb_response(ReqID, {low_skip_wedge, {low_auth, _, _}}, __TODO_Resp) -> to_pb_response(ReqID, {low_auth, _, _, _}, __TODO_Resp) ->
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
generic=#mpb_errorresp{code=1, generic=#mpb_errorresp{code=1,
msg="AUTH not implemented"}}; msg="AUTH not implemented"}};
to_pb_response(ReqID, {low_append_chunk, _NSV, _NS, _EID, _NSL, _Pfx, _Ch, _CST, _CS, _O}, Resp)-> to_pb_response(ReqID, {low_append_chunk, _EID, _PKey, _Pfx, _Ch, _CST, _CS, _CE}, Resp)->
case Resp of case Resp of
{ok, {Offset, Size, File}} -> {ok, {Offset, Size, File}} ->
Where = #mpb_chunkpos{offset=Offset, Where = #mpb_chunkpos{offset=Offset,
@@ -543,30 +460,18 @@ to_pb_response(ReqID, {low_append_chunk, _NSV, _NS, _EID, _NSL, _Pfx, _Ch, _CST,
_Else -> _Else ->
make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else]))
end; end;
to_pb_response(ReqID, {low_write_chunk, _NSV, _NS, _EID, _Fl, _Off, _Ch, _CST, _CS},Resp)-> to_pb_response(ReqID, {low_write_chunk, _EID, _Fl, _Off, _Ch, _CST, _CS},Resp)->
Status = conv_from_status(Resp), Status = conv_from_status(Resp),
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
write_chunk=#mpb_ll_writechunkresp{status=Status}}; write_chunk=#mpb_ll_writechunkresp{status=Status}};
to_pb_response(ReqID, {low_read_chunk, _NSV, _NS, _EID, _Fl, _Off, _Sz, _Opts}, Resp)-> to_pb_response(ReqID, {low_read_chunk, _EID, _Fl, _Off, _Sz, _Opts}, Resp)->
case Resp of case Resp of
{ok, {Chunks, Trimmed}} -> {ok, Chunk} ->
PB_Chunks = lists:map(fun({File, Offset, Bytes, Csum}) -> CSum = undefined, % TODO not implemented
{Tag, Ck} = machi_util:unmake_tagged_csum(Csum),
#mpb_chunk{file_name=File,
offset=Offset,
chunk=Bytes,
csum=#mpb_chunkcsum{type=conv_from_csum_tag(Tag),
csum=Ck}}
end, Chunks),
PB_Trimmed = lists:map(fun({File, Offset, Size}) ->
#mpb_chunkpos{file_name=File,
offset=Offset,
chunk_size=Size}
end, Trimmed),
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
read_chunk=#mpb_ll_readchunkresp{status='OK', read_chunk=#mpb_ll_readchunkresp{status='OK',
chunks=PB_Chunks, chunk=Chunk,
trimmed=PB_Trimmed}}; csum=CSum}};
{error, _}=Error -> {error, _}=Error ->
Status = conv_from_status(Error), Status = conv_from_status(Error),
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
@@ -574,19 +479,7 @@ to_pb_response(ReqID, {low_read_chunk, _NSV, _NS, _EID, _Fl, _Off, _Sz, _Opts},
_Else -> _Else ->
make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else]))
end; end;
to_pb_response(ReqID, {low_trim_chunk, _, _, _, _, _, _, _}, Resp) -> to_pb_response(ReqID, {low_checksum_list, _EpochID, _File}, Resp) ->
case Resp of
ok ->
#mpb_ll_response{req_id=ReqID,
trim_chunk=#mpb_ll_trimchunkresp{status='OK'}};
{error, _}=Error ->
Status = conv_from_status(Error),
#mpb_ll_response{req_id=ReqID,
trim_chunk=#mpb_ll_trimchunkresp{status=Status}};
_Else ->
make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else]))
end;
to_pb_response(ReqID, {low_skip_wedge, {low_checksum_list, _File}}, Resp) ->
case Resp of case Resp of
{ok, Chunk} -> {ok, Chunk} ->
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
@@ -599,7 +492,7 @@ to_pb_response(ReqID, {low_skip_wedge, {low_checksum_list, _File}}, Resp) ->
_Else -> _Else ->
make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else]))
end; end;
to_pb_response(ReqID, {low_skip_wedge, {low_list_files, _EpochID}}, Resp) -> to_pb_response(ReqID, {low_list_files, _EpochID}, Resp) ->
case Resp of case Resp of
{ok, FileInfo} -> {ok, FileInfo} ->
PB_Files = [#mpb_fileinfo{file_size=Size, file_name=Name} || PB_Files = [#mpb_fileinfo{file_size=Size, file_name=Name} ||
@@ -614,28 +507,26 @@ to_pb_response(ReqID, {low_skip_wedge, {low_list_files, _EpochID}}, Resp) ->
_Else -> _Else ->
make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) make_ll_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else]))
end; end;
to_pb_response(ReqID, {low_skip_wedge, {low_wedge_status}}, Resp) -> to_pb_response(ReqID, {low_wedge_status, _BogusEpochID}, Resp) ->
case Resp of case Resp of
{error, _}=Error -> {error, _}=Error ->
Status = conv_from_status(Error), Status = conv_from_status(Error),
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
wedge_status=#mpb_ll_wedgestatusresp{status=Status}}; wedge_status=#mpb_ll_wedgestatusresp{status=Status}};
{Wedged_p, EpochID, NSVersion, NS} -> {Wedged_p, EpochID} ->
PB_Wedged = conv_from_boolean(Wedged_p),
PB_EpochID = conv_from_epoch_id(EpochID), PB_EpochID = conv_from_epoch_id(EpochID),
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
wedge_status=#mpb_ll_wedgestatusresp{ wedge_status=#mpb_ll_wedgestatusresp{
status='OK', status='OK',
epoch_id=PB_EpochID, epoch_id=PB_EpochID,
wedged_flag=Wedged_p, wedged_flag=PB_Wedged}}
namespace_version=NSVersion,
namespace=NS
}}
end; end;
to_pb_response(ReqID, {low_skip_wedge, {low_delete_migration, _EID, _Fl}}, Resp)-> to_pb_response(ReqID, {low_delete_migration, _EID, _Fl}, Resp)->
Status = conv_from_status(Resp), Status = conv_from_status(Resp),
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
delete_migration=#mpb_ll_deletemigrationresp{status=Status}}; delete_migration=#mpb_ll_deletemigrationresp{status=Status}};
to_pb_response(ReqID, {low_skip_wedge, {low_trunc_hack, _EID, _Fl}}, Resp)-> to_pb_response(ReqID, {low_trunc_hack, _EID, _Fl}, Resp)->
Status = conv_from_status(Resp), Status = conv_from_status(Resp),
#mpb_ll_response{req_id=ReqID, #mpb_ll_response{req_id=ReqID,
trunc_hack=#mpb_ll_trunchackresp{status=Status}}; trunc_hack=#mpb_ll_trunchackresp{status=Status}};
@@ -716,7 +607,7 @@ to_pb_response(ReqID, {high_auth, _User, _Pass}, _Resp) ->
#mpb_response{req_id=ReqID, #mpb_response{req_id=ReqID,
generic=#mpb_errorresp{code=1, generic=#mpb_errorresp{code=1,
msg="AUTH not implemented"}}; msg="AUTH not implemented"}};
to_pb_response(ReqID, {high_append_chunk, _NS, _Prefix, _Chunk, _TSum, _O}, Resp)-> to_pb_response(ReqID, {high_append_chunk, _TODO, _Prefix, _Chunk, _TSum, _CE}, Resp)->
case Resp of case Resp of
{ok, {Offset, Size, File}} -> {ok, {Offset, Size, File}} ->
Where = #mpb_chunkpos{offset=Offset, Where = #mpb_chunkpos{offset=Offset,
@@ -732,7 +623,7 @@ to_pb_response(ReqID, {high_append_chunk, _NS, _Prefix, _Chunk, _TSum, _O}, Resp
_Else -> _Else ->
make_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) make_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else]))
end; end;
to_pb_response(ReqID, {high_write_chunk, _File, _Offset, _Chunk, _CSum}, Resp) -> to_pb_response(ReqID, {high_write_chunk, _File, _Offset, _Chunk, _TaggedCSum}, Resp) ->
case Resp of case Resp of
{ok, {_,_,_}} -> {ok, {_,_,_}} ->
%% machi_cr_client returns ok 2-tuple, convert to simple ok. %% machi_cr_client returns ok 2-tuple, convert to simple ok.
@@ -745,26 +636,12 @@ to_pb_response(ReqID, {high_write_chunk, _File, _Offset, _Chunk, _CSum}, Resp) -
_Else -> _Else ->
make_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else])) make_error_resp(ReqID, 66, io_lib:format("err ~p", [_Else]))
end; end;
to_pb_response(ReqID, {high_read_chunk, _File, _Offset, _Size, _}, Resp) -> to_pb_response(ReqID, {high_read_chunk, _File, _Offset, _Size}, Resp) ->
case Resp of case Resp of
{ok, {Chunks, Trimmed}} -> {ok, Chunk} ->
PB_Chunks = lists:map(fun({File, Offset, Bytes, Csum}) ->
{Tag, Ck} = machi_util:unmake_tagged_csum(Csum),
#mpb_chunk{
offset=Offset,
file_name=File,
chunk=Bytes,
csum=#mpb_chunkcsum{type=conv_from_csum_tag(Tag), csum=Ck}}
end, Chunks),
PB_Trimmed = lists:map(fun({File, Offset, Size}) ->
#mpb_chunkpos{file_name=File,
offset=Offset,
chunk_size=Size}
end, Trimmed),
#mpb_response{req_id=ReqID, #mpb_response{req_id=ReqID,
read_chunk=#mpb_readchunkresp{status='OK', read_chunk=#mpb_readchunkresp{status='OK',
chunks=PB_Chunks, chunk=Chunk}};
trimmed=PB_Trimmed}};
{error, _}=Error -> {error, _}=Error ->
Status = conv_from_status(Error), Status = conv_from_status(Error),
#mpb_response{req_id=ReqID, #mpb_response{req_id=ReqID,
@@ -840,7 +717,6 @@ conv_to_epoch_id(#mpb_epochid{epoch_number=Epoch,
conv_to_projection_v1(#mpb_projectionv1{epoch_number=Epoch, conv_to_projection_v1(#mpb_projectionv1{epoch_number=Epoch,
epoch_csum=CSum, epoch_csum=CSum,
author_server=Author, author_server=Author,
chain_name=ChainName,
all_members=AllMembers, all_members=AllMembers,
witnesses=Witnesses, witnesses=Witnesses,
creation_time=CTime, creation_time=CTime,
@@ -854,7 +730,6 @@ conv_to_projection_v1(#mpb_projectionv1{epoch_number=Epoch,
#projection_v1{epoch_number=Epoch, #projection_v1{epoch_number=Epoch,
epoch_csum=CSum, epoch_csum=CSum,
author_server=to_atom(Author), author_server=to_atom(Author),
chain_name=to_atom(ChainName),
all_members=[to_atom(X) || X <- AllMembers], all_members=[to_atom(X) || X <- AllMembers],
witnesses=[to_atom(X) || X <- Witnesses], witnesses=[to_atom(X) || X <- Witnesses],
creation_time=conv_to_now(CTime), creation_time=conv_to_now(CTime),
@@ -975,8 +850,6 @@ conv_from_status({error, not_written}) ->
'NOT_WRITTEN'; 'NOT_WRITTEN';
conv_from_status({error, written}) -> conv_from_status({error, written}) ->
'WRITTEN'; 'WRITTEN';
conv_from_status({error, trimmed}) ->
'TRIMMED';
conv_from_status({error, no_such_file}) -> conv_from_status({error, no_such_file}) ->
'NO_SUCH_FILE'; 'NO_SUCH_FILE';
conv_from_status({error, partial_read}) -> conv_from_status({error, partial_read}) ->
@@ -984,34 +857,24 @@ conv_from_status({error, partial_read}) ->
conv_from_status({error, bad_epoch}) -> conv_from_status({error, bad_epoch}) ->
'BAD_EPOCH'; 'BAD_EPOCH';
conv_from_status(_OOPS) -> conv_from_status(_OOPS) ->
io:format(user, "HEY, ~s:~w got ~p\n", [?MODULE, ?LINE, _OOPS]), io:format(user, "HEY, ~s:~w got ~w\n", [?MODULE, ?LINE, _OOPS]),
'BAD_JOSS'. 'BAD_JOSS'.
conv_from_append_opts(#append_opts{chunk_extra=ChunkExtra, conv_to_boolean(undefined) ->
preferred_file_name=Pref, false;
flag_fail_preferred=FailPref}) -> conv_to_boolean(0) ->
{ChunkExtra, Pref, FailPref}. false;
conv_to_boolean(N) when is_integer(N) ->
true.
conv_from_boolean(false) ->
conv_to_append_opts(#mpb_appendchunkreq{ 0;
chunk_extra=ChunkExtra, conv_from_boolean(true) ->
preferred_file_name=Pref, 1.
flag_fail_preferred=FailPref}) ->
#append_opts{chunk_extra=ChunkExtra,
preferred_file_name=Pref,
flag_fail_preferred=FailPref};
conv_to_append_opts(#mpb_ll_appendchunkreq{
chunk_extra=ChunkExtra,
preferred_file_name=Pref,
flag_fail_preferred=FailPref}) ->
#append_opts{chunk_extra=ChunkExtra,
preferred_file_name=Pref,
flag_fail_preferred=FailPref}.
conv_from_projection_v1(#projection_v1{epoch_number=Epoch, conv_from_projection_v1(#projection_v1{epoch_number=Epoch,
epoch_csum=CSum, epoch_csum=CSum,
author_server=Author, author_server=Author,
chain_name=ChainName,
all_members=AllMembers, all_members=AllMembers,
witnesses=Witnesses, witnesses=Witnesses,
creation_time=CTime, creation_time=CTime,
@@ -1025,7 +888,6 @@ conv_from_projection_v1(#projection_v1{epoch_number=Epoch,
#mpb_projectionv1{epoch_number=Epoch, #mpb_projectionv1{epoch_number=Epoch,
epoch_csum=CSum, epoch_csum=CSum,
author_server=to_list(Author), author_server=to_list(Author),
chain_name=to_list(ChainName),
all_members=[to_list(X) || X <- AllMembers], all_members=[to_list(X) || X <- AllMembers],
witnesses=[to_list(X) || X <- Witnesses], witnesses=[to_list(X) || X <- Witnesses],
creation_time=conv_from_now(CTime), creation_time=conv_from_now(CTime),
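%% Aside (not part of the diff above): the to_pb_request/2, to_pb_response/3,
%% and from_pb_response/1 clauses shown here are meant to be symmetric around
%% a single ReqID. A minimal round-trip sketch for the echo op, assuming these
%% functions live in the machi_pb_translate module (the module name is not
%% shown in this hunk) and using the new-style request tuples:
%%
%%     ReqID = <<0,1,2,3>>,
%%     _Req = machi_pb_translate:to_pb_request(
%%              ReqID, {low_skip_wedge, {low_echo, <<"ping">>}}),
%%     %% ...the server side would answer via to_pb_response/3...
%%     Resp = machi_pb_translate:to_pb_response(
%%              ReqID, {low_skip_wedge, {low_echo, <<"ping">>}}, <<"ping">>),
%%     %% ...and the client decodes it, presumably yielding:
%%     {ReqID, <<"ping">>} = machi_pb_translate:from_pb_response(Resp).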

View file

@@ -1,89 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2016 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
-module(machi_plist).
%%% @doc persistent list of strings
-export([open/2, close/1, find/2, add/2]).
-ifdef(TEST).
-export([all/1]).
-endif.
-record(machi_plist,
{filename :: file:filename_all(),
fd :: file:io_device(),
list = [] :: list(string())}).
-type plist() :: #machi_plist{}.
-export_type([plist/0]).
-spec open(file:filename_all(), proplists:proplist()) ->
{ok, plist()} | {error, file:posix()}.
open(Filename, _Opt) ->
%% TODO: This decode could fail if the file's whole contents were
%% never completely written; that should be fixed by a proper
%% persistence solution.
List = case file:read_file(Filename) of
{ok, <<>>} -> [];
{ok, Bin} -> binary_to_term(Bin);
{error, enoent} -> []
end,
case file:open(Filename, [read, write, raw, binary, sync]) of
{ok, Fd} ->
{ok, #machi_plist{filename=Filename,
fd=Fd,
list=List}};
Error ->
Error
end.
-spec close(plist()) -> ok.
close(#machi_plist{fd=Fd}) ->
    _ = file:close(Fd),
    ok.
-spec find(plist(), string()) -> boolean().
find(#machi_plist{list=List}, Name) ->
lists:member(Name, List).
-spec add(plist(), string()) -> {ok, plist()} | {error, file:posix()}.
add(Plist = #machi_plist{list=List0, fd=Fd}, Name) ->
case find(Plist, Name) of
true ->
{ok, Plist};
false ->
List = lists:append(List0, [Name]),
%% TODO: a partial write could corrupt the file's other
%% persistent info (and even lose data about trimmed state);
%% this needs a solution.
case file:pwrite(Fd, 0, term_to_binary(List)) of
ok ->
{ok, Plist#machi_plist{list=List}};
Error ->
Error
end
end.
-ifdef(TEST).
-spec all(plist()) -> [file:filename()].
all(#machi_plist{list=List}) ->
List.
-endif.
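%% A minimal usage sketch of machi_plist (hypothetical path and key; note
%% that add/2 is idempotent for an already-present name):
%%
%%     {ok, P0} = machi_plist:open("/tmp/example.plist", []),
%%     false = machi_plist:find(P0, "file^a^1"),
%%     {ok, P1} = machi_plist:add(P0, "file^a^1"),
%%     true = machi_plist:find(P1, "file^a^1"),
%%     {ok, P1} = machi_plist:add(P1, "file^a^1"),
%%     ok = machi_plist:close(P1).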

View file

@@ -174,7 +174,6 @@ make_summary(#projection_v1{epoch_number=EpochNum,
repairing=Repairing_list, repairing=Repairing_list,
dbg=Dbg, dbg2=Dbg2}) -> dbg=Dbg, dbg2=Dbg2}) ->
[{epoch,EpochNum}, {csum,_CSum4}, [{epoch,EpochNum}, {csum,_CSum4},
{all, _All_list},
{author,Author}, {mode,CMode},{witnesses, Witness_list}, {author,Author}, {mode,CMode},{witnesses, Witness_list},
{upi,UPI_list},{repair,Repairing_list},{down,Down_list}] ++ {upi,UPI_list},{repair,Repairing_list},{down,Down_list}] ++
[{d,Dbg}, {d2,Dbg2}]. [{d,Dbg}, {d2,Dbg2}].

View file

@@ -321,7 +321,7 @@ do_proj_write3(ProjType, #projection_v1{epoch_number=Epoch,
end. end.
do_proj_write4(ProjType, Proj, Path, Epoch, #state{consistency_mode=CMode}=S) -> do_proj_write4(ProjType, Proj, Path, Epoch, #state{consistency_mode=CMode}=S) ->
{{ok, FH}, Epoch, Path} = {file:open(Path, [write, raw, binary]), Epoch, Path}, {ok, FH} = file:open(Path, [write, raw, binary]),
ok = file:write(FH, term_to_binary(Proj)), ok = file:write(FH, term_to_binary(Proj)),
ok = file:sync(FH), ok = file:sync(FH),
ok = file:close(FH), ok = file:close(FH),
@@ -387,6 +387,7 @@ wait_for_liveness(PidSpec, StartTime, WaitTime) ->
undefined -> undefined ->
case timer:now_diff(os:timestamp(), StartTime) div 1000 of case timer:now_diff(os:timestamp(), StartTime) div 1000 of
X when X < WaitTime -> X when X < WaitTime ->
io:format(user, "\nYOO ~p ~p\n", [PidSpec, lists:sort(registered())]),
timer:sleep(1), timer:sleep(1),
wait_for_liveness(PidSpec, StartTime, WaitTime) wait_for_liveness(PidSpec, StartTime, WaitTime)
end; end;

View file

@@ -22,10 +22,6 @@
%% proxy-process style API for hiding messy details such as TCP %% proxy-process style API for hiding messy details such as TCP
%% connection/disconnection with the remote Machi server. %% connection/disconnection with the remote Machi server.
%% %%
%% Please see the "Client API implementation notes" section of
%% {@link machi_flu1_client} for how this module relates to the rest of the client API
%% implementation.
%%
%% Machi is intentionally avoiding using distributed Erlang for %% Machi is intentionally avoiding using distributed Erlang for
%% Machi's communication. This design decision makes Erlang-side code %% Machi's communication. This design decision makes Erlang-side code
%% more difficult &amp; complex, but it's the price to pay for some %% more difficult &amp; complex, but it's the price to pay for some
@@ -61,9 +57,10 @@
%% FLU1 API %% FLU1 API
-export([ -export([
%% File API %% File API
append_chunk/6, append_chunk/8, append_chunk/4, append_chunk/5,
read_chunk/7, read_chunk/8, append_chunk_extra/5, append_chunk_extra/6,
checksum_list/2, checksum_list/3, read_chunk/5, read_chunk/6,
checksum_list/3, checksum_list/4,
list_files/2, list_files/3, list_files/2, list_files/3,
wedge_status/1, wedge_status/2, wedge_status/1, wedge_status/2,
@@ -81,8 +78,7 @@
quit/1, quit/1,
%% Internal API %% Internal API
write_chunk/7, write_chunk/8, write_chunk/5, write_chunk/6,
trim_chunk/6, trim_chunk/7,
%% Helpers %% Helpers
stop_proxies/1, start_proxies/1 stop_proxies/1, start_proxies/1
@@ -107,39 +103,51 @@ start_link(#p_srvr{}=I) ->
%% @doc Append a chunk (binary- or iolist-style) of data to a file %% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix'. %% with `Prefix'.
append_chunk(PidSpec, NSInfo, EpochID, Prefix, Chunk, CSum) -> append_chunk(PidSpec, EpochID, Prefix, Chunk) ->
append_chunk(PidSpec, NSInfo, EpochID, Prefix, Chunk, CSum, append_chunk(PidSpec, EpochID, Prefix, Chunk, infinity).
#append_opts{}, infinity).
%% @doc Append a chunk (binary- or iolist-style) of data to a file %% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix'. %% with `Prefix'.
append_chunk(PidSpec, NSInfo, EpochID, Prefix, Chunk, CSum, Opts, append_chunk(PidSpec, EpochID, Prefix, Chunk, Timeout) ->
Timeout) -> gen_server:call(PidSpec, {req, {append_chunk, EpochID, Prefix, Chunk}},
gen_server:call(PidSpec, {req, {append_chunk, NSInfo, EpochID, Timeout).
Prefix, Chunk, CSum, Opts, Timeout}},
%% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix'.
append_chunk_extra(PidSpec, EpochID, Prefix, Chunk, ChunkExtra)
when is_integer(ChunkExtra), ChunkExtra >= 0 ->
append_chunk_extra(PidSpec, EpochID, Prefix, Chunk, ChunkExtra, infinity).
%% @doc Append a chunk (binary- or iolist-style) of data to a file
%% with `Prefix'.
append_chunk_extra(PidSpec, EpochID, Prefix, Chunk, ChunkExtra, Timeout) ->
gen_server:call(PidSpec, {req, {append_chunk_extra, EpochID, Prefix,
Chunk, ChunkExtra}},
Timeout). Timeout).
%% @doc Read a chunk of data of size `Size' from `File' at `Offset'. %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.
read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, Opts) -> read_chunk(PidSpec, EpochID, File, Offset, Size) ->
read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, Opts, infinity). read_chunk(PidSpec, EpochID, File, Offset, Size, infinity).
%% @doc Read a chunk of data of size `Size' from `File' at `Offset'. %% @doc Read a chunk of data of size `Size' from `File' at `Offset'.
read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, Opts, Timeout) -> read_chunk(PidSpec, EpochID, File, Offset, Size, Timeout) ->
gen_server:call(PidSpec, {req, {read_chunk, NSInfo, EpochID, File, Offset, Size, Opts}}, gen_server:call(PidSpec, {req, {read_chunk, EpochID, File, Offset, Size}},
Timeout). Timeout).
%% @doc Fetch the list of chunk checksums for `File'. %% @doc Fetch the list of chunk checksums for `File'.
checksum_list(PidSpec, File) -> checksum_list(PidSpec, EpochID, File) ->
checksum_list(PidSpec, File, infinity). checksum_list(PidSpec, EpochID, File, infinity).
%% @doc Fetch the list of chunk checksums for `File'. %% @doc Fetch the list of chunk checksums for `File'.
checksum_list(PidSpec, File, Timeout) -> checksum_list(PidSpec, EpochID, File, Timeout) ->
gen_server:call(PidSpec, {req, {checksum_list, File}}, gen_server:call(PidSpec, {req, {checksum_list, EpochID, File}},
Timeout). Timeout).
%% @doc Fetch the list of all files on the remote FLU. %% @doc Fetch the list of all files on the remote FLU.
@@ -280,19 +288,19 @@ quit(PidSpec) ->
%% @doc Write a chunk (binary- or iolist-style) of data to a file %% @doc Write a chunk (binary- or iolist-style) of data to a file
%% with `Prefix' at `Offset'. %% with `Prefix' at `Offset'.
write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, CSum) -> write_chunk(PidSpec, EpochID, File, Offset, Chunk) ->
write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, CSum, infinity). write_chunk(PidSpec, EpochID, File, Offset, Chunk, infinity).
%% @doc Write a chunk (binary- or iolist-style) of data to a file %% @doc Write a chunk (binary- or iolist-style) of data to a file
%% with `Prefix' at `Offset'. %% with `Prefix' at `Offset'.
write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, CSum, Timeout) -> write_chunk(PidSpec, EpochID, File, Offset, Chunk, Timeout) ->
case gen_server:call(PidSpec, {req, {write_chunk, NSInfo, EpochID, File, Offset, Chunk, CSum}}, case gen_server:call(PidSpec, {req, {write_chunk, EpochID, File, Offset, Chunk}},
Timeout) of Timeout) of
{error, written}=Err -> {error, written}=Err ->
Size = byte_size(Chunk), Size = byte_size(Chunk),
case read_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, undefined, Timeout) of case read_chunk(PidSpec, EpochID, File, Offset, Size, Timeout) of
{ok, {[{File, Offset, Chunk2, _}], []}} when Chunk2 == Chunk -> {ok, Chunk2} when Chunk2 == Chunk ->
%% See equivalent comment inside write_projection(). %% See equivalent comment inside write_projection().
ok; ok;
_ -> _ ->
@@ -302,18 +310,6 @@ write_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, CSum, Timeout) ->
Else Else
end. end.
trim_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size) ->
trim_chunk(PidSpec, NSInfo, EpochID, File, Offset, Size, infinity).
%% @doc Write a chunk (binary- or iolist-style) of data to a file
%% with `Prefix' at `Offset'.
trim_chunk(PidSpec, NSInfo, EpochID, File, Offset, Chunk, Timeout) ->
gen_server:call(PidSpec,
{req, {trim_chunk, NSInfo, EpochID, File, Offset, Chunk}},
Timeout).
%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%
init([I]) -> init([I]) ->
@@ -375,24 +371,21 @@ do_req_retry(_Req, 2, Err, S) ->
do_req_retry(Req, Depth, _Err, S) -> do_req_retry(Req, Depth, _Err, S) ->
do_req(Req, Depth + 1, try_connect(disconnect(S))). do_req(Req, Depth + 1, try_connect(disconnect(S))).
make_req_fun({append_chunk, NSInfo, EpochID, make_req_fun({append_chunk, EpochID, Prefix, Chunk},
Prefix, Chunk, CSum, Opts, Timeout},
#state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
fun() -> Mod:append_chunk(Sock, NSInfo, EpochID, fun() -> Mod:append_chunk(Sock, EpochID, Prefix, Chunk) end;
Prefix, Chunk, CSum, Opts, Timeout) make_req_fun({append_chunk_extra, EpochID, Prefix, Chunk, ChunkExtra},
end;
make_req_fun({read_chunk, NSInfo, EpochID, File, Offset, Size, Opts},
#state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
fun() -> Mod:read_chunk(Sock, NSInfo, EpochID, File, Offset, Size, Opts) end; fun() -> Mod:append_chunk_extra(Sock, EpochID, Prefix, Chunk, ChunkExtra) end;
make_req_fun({write_chunk, NSInfo, EpochID, File, Offset, Chunk, CSum}, make_req_fun({read_chunk, EpochID, File, Offset, Size},
#state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
fun() -> Mod:write_chunk(Sock, NSInfo, EpochID, File, Offset, Chunk, CSum) end; fun() -> Mod:read_chunk(Sock, EpochID, File, Offset, Size) end;
make_req_fun({trim_chunk, NSInfo, EpochID, File, Offset, Size}, make_req_fun({write_chunk, EpochID, File, Offset, Chunk},
#state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
fun() -> Mod:trim_chunk(Sock, NSInfo, EpochID, File, Offset, Size) end; fun() -> Mod:write_chunk(Sock, EpochID, File, Offset, Chunk) end;
make_req_fun({checksum_list, File}, make_req_fun({checksum_list, EpochID, File},
#state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
fun() -> Mod:checksum_list(Sock, File) end; fun() -> Mod:checksum_list(Sock, EpochID, File) end;
make_req_fun({list_files, EpochID}, make_req_fun({list_files, EpochID},
#state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) -> #state{sock=Sock,i=#p_srvr{proto_mod=Mod}}) ->
fun() -> Mod:list_files(Sock, EpochID) end; fun() -> Mod:list_files(Sock, EpochID) end;
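%% Aside (not part of the diff above): the proxy hides TCP connect/disconnect
%% and retries, as the do_req_retry/4 clauses show. A minimal usage sketch,
%% using the older (right-hand column) arities and an assumed module name of
%% machi_proxy_flu1_client plus a FLU listening on localhost:32900:
%%
%%     {ok, Proxy} = machi_proxy_flu1_client:start_link(
%%                     #p_srvr{name=a, address="localhost", port=32900}),
%%     %% EpochID obtained elsewhere, e.g. from the projection store.
%%     {ok, {Off, Sz, File}} =
%%         machi_proxy_flu1_client:append_chunk(Proxy, EpochID,
%%                                              <<"prefix">>, <<"Hello!">>),
%%     {ok, <<"Hello!">>} =
%%         machi_proxy_flu1_client:read_chunk(Proxy, EpochID, File, Off, Sz),
%%     _ = machi_proxy_flu1_client:quit(Proxy).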

194
src/machi_sequencer.erl Normal file
View file

@@ -0,0 +1,194 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% @doc "Mothballed" sequencer code, perhaps to be reused sometime in
%% the future?
-module(machi_sequencer).
-compile(export_all).
-include_lib("kernel/include/file.hrl").
-define(CONFIG_DIR, "./config").
-define(DATA_DIR, "./data").
seq(Server, Prefix, Size) when is_binary(Prefix), is_integer(Size), Size > -1 ->
Server ! {seq, self(), Prefix, Size},
receive
{assignment, File, Offset} ->
{File, Offset}
after 1*1000 ->
bummer
end.
seq_direct(Prefix, Size) when is_binary(Prefix), is_integer(Size), Size > -1 ->
RegName = make_regname(Prefix),
seq(RegName, Prefix, Size).
start_server() ->
start_server(?MODULE).
start_server(Name) ->
spawn_link(fun() -> run_server(Name) end).
run_server(Name) ->
register(Name, self()),
ets:new(?MODULE, [named_table, public, {write_concurrency, true}]),
server_loop().
server_loop() ->
receive
{seq, From, Prefix, Size} ->
spawn(fun() -> server_dispatch(From, Prefix, Size) end),
server_loop()
end.
server_dispatch(From, Prefix, Size) ->
RegName = make_regname(Prefix),
case whereis(RegName) of
undefined ->
start_prefix_server(Prefix),
timer:sleep(1),
server_dispatch(From, Prefix, Size);
Pid ->
Pid ! {seq, From, Prefix, Size}
end,
exit(normal).
start_prefix_server(Prefix) ->
spawn(fun() -> run_prefix_server(Prefix) end).
run_prefix_server(Prefix) ->
true = register(make_regname(Prefix), self()),
ok = filelib:ensure_dir(?CONFIG_DIR ++ "/unused"),
ok = filelib:ensure_dir(?DATA_DIR ++ "/unused"),
FileNum = read_max_filenum(Prefix) + 1,
ok = increment_max_filenum(Prefix),
prefix_server_loop(Prefix, FileNum).
prefix_server_loop(Prefix, FileNum) ->
File = make_data_filename(Prefix, FileNum),
prefix_server_loop(Prefix, File, FileNum, 0).
prefix_server_loop(Prefix, File, FileNum, Offset) ->
receive
{seq, From, Prefix, Size} ->
From ! {assignment, File, Offset},
prefix_server_loop(Prefix, File, FileNum, Offset + Size)
after 30*1000 ->
io:format("timeout: ~p server stopping\n", [Prefix]),
exit(normal)
end.
make_regname(Prefix) ->
erlang:binary_to_atom(Prefix, latin1).
make_config_filename(Prefix) ->
lists:flatten(io_lib:format("~s/~s", [?CONFIG_DIR, Prefix])).
make_data_filename(Prefix, FileNum) ->
erlang:iolist_to_binary(io_lib:format("~s/~s.~w",
[?DATA_DIR, Prefix, FileNum])).
read_max_filenum(Prefix) ->
case file:read_file_info(make_config_filename(Prefix)) of
{error, enoent} ->
0;
{ok, FI} ->
FI#file_info.size
end.
increment_max_filenum(Prefix) ->
{ok, FH} = file:open(make_config_filename(Prefix), [append]),
ok = file:write(FH, "x"),
%% ok = file:sync(FH),
ok = file:close(FH).
%%%%%%%%%%%%%%%%%
%% basho_bench callbacks
-define(SEQ, ?MODULE).
new(1) ->
start_server(),
timer:sleep(100),
{ok, unused};
new(_Id) ->
{ok, unused}.
run(null, _KeyGen, _ValueGen, State) ->
{ok, State};
run(keygen_then_null, KeyGen, _ValueGen, State) ->
_Prefix = KeyGen(),
{ok, State};
run(seq, KeyGen, _ValueGen, State) ->
Prefix = KeyGen(),
{_, _} = ?SEQ:seq(?SEQ, Prefix, 1),
{ok, State};
run(seq_direct, KeyGen, _ValueGen, State) ->
Prefix = KeyGen(),
Name = ?SEQ:make_regname(Prefix),
case get(Name) of
undefined ->
case whereis(Name) of
undefined ->
{_, _} = ?SEQ:seq(?SEQ, Prefix, 1);
Pid ->
put(Name, Pid),
{_, _} = ?SEQ:seq(Pid, Prefix, 1)
end;
Pid ->
{_, _} = ?SEQ:seq(Pid, Prefix, 1)
end,
{ok, State};
run(seq_ets, KeyGen, _ValueGen, State) ->
Tab = ?MODULE,
Prefix = KeyGen(),
Res = try
BigNum = ets:update_counter(Tab, Prefix, 1),
BigBin = <<BigNum:80/big>>,
<<FileNum:32/big, Offset:48/big>> = BigBin,
%% if Offset rem 1000 == 0 ->
%% io:format("~p,~p ", [FileNum, Offset]);
%% true ->
%% ok
%% end,
{fakefake, FileNum, Offset}
catch error:badarg ->
FileNum2 = 1, Offset2 = 0,
FileBin = <<FileNum2:32/big>>,
OffsetBin = <<Offset2:48/big>>,
Glop = <<FileBin/binary, OffsetBin/binary>>,
<<Base:80/big>> = Glop,
%% if Prefix == <<"42">> -> io:format("base:~w\n", [Base]); true -> ok end,
%% Base = 0,
case ets:insert_new(Tab, {Prefix, Base}) of
true ->
{<<"fakefakefake">>, Base};
false ->
Result2 = ets:update_counter(Tab, Prefix, 1),
{<<"fakefakefake">>, Result2}
end
end,
Res = Res,
{ok, State}.
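%% A minimal usage sketch of the mothballed sequencer above. start_server/0
%% registers the process under ?MODULE; the first assignment for a prefix
%% starts at offset 0 and advances by the requested size:
%%
%%     _Pid = machi_sequencer:start_server(),
%%     {File, 0} = machi_sequencer:seq(machi_sequencer, <<"pre">>, 100),
%%     {File, 100} = machi_sequencer:seq(machi_sequencer, <<"pre">>, 50),
%%     %% Once the per-prefix server exists, seq_direct/2 reaches it directly:
%%     {File, 150} = machi_sequencer:seq_direct(<<"pre">>, 1).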

View file

@@ -47,6 +47,8 @@ start_link() ->
supervisor:start_link({local, ?SERVER}, ?MODULE, []). supervisor:start_link({local, ?SERVER}, ?MODULE, []).
init([]) -> init([]) ->
%% {_, Ps} = process_info(self(), links),
%% [unlink(P) || P <- Ps],
RestartStrategy = one_for_one, RestartStrategy = one_for_one,
MaxRestarts = 1000, MaxRestarts = 1000,
MaxSecondsBetweenRestarts = 3600, MaxSecondsBetweenRestarts = 3600,
@@ -60,16 +62,9 @@ init([]) ->
ServerSup = ServerSup =
{machi_flu_sup, {machi_flu_sup, start_link, []}, {machi_flu_sup, {machi_flu_sup, start_link, []},
Restart, Shutdown, Type, []}, Restart, Shutdown, Type, []},
RanchSup = {ranch_sup, {ranch_sup, start_link, []},
Restart, Shutdown, supervisor, [ranch_sup]}, {ok, {SupFlags, [ServerSup]}}.
LifecycleMgr =
{machi_lifecycle_mgr, {machi_lifecycle_mgr, start_link, []}, %% AChild = {'AName', {'AModule', start_link, []},
Restart, Shutdown, worker, []}, %% Restart, Shutdown, Type, ['AModule']},
RunningApps = [A || {A,_D,_V} <- application:which_applications()], %% {ok, {SupFlags, [AChild]}}.
Specs = case lists:member(ranch, RunningApps) of
true ->
[ServerSup, LifecycleMgr];
false ->
[ServerSup, RanchSup, LifecycleMgr]
end,
{ok, {SupFlags, Specs}}.

View file

@@ -25,19 +25,18 @@
-export([ -export([
checksum_chunk/1, checksum_chunk/1,
make_tagged_csum/1, make_tagged_csum/2, make_tagged_csum/1, make_tagged_csum/2,
make_client_csum/1,
unmake_tagged_csum/1, unmake_tagged_csum/1,
hexstr_to_bin/1, bin_to_hexstr/1, hexstr_to_bin/1, bin_to_hexstr/1,
hexstr_to_int/1, int_to_hexstr/2, int_to_hexbin/2, hexstr_to_int/1, int_to_hexstr/2, int_to_hexbin/2,
make_binary/1, make_string/1, make_binary/1, make_string/1,
make_regname/1, make_regname/1,
make_config_filename/4, make_config_filename/2, make_config_filename/2,
make_checksum_filename/4, make_checksum_filename/2, make_checksum_filename/4, make_checksum_filename/2,
make_data_filename/6, make_data_filename/2, make_data_filename/4, make_data_filename/2,
make_projection_filename/2, make_projection_filename/2,
is_valid_filename/1, is_valid_filename/1,
parse_filename/1, parse_filename/1,
read_max_filenum/4, increment_max_filenum/4, read_max_filenum/2, increment_max_filenum/2,
info_msg/2, verb/1, verb/2, info_msg/2, verb/1, verb/2,
mbytes/1, mbytes/1,
pretty_time/0, pretty_time/2, pretty_time/0, pretty_time/2,
@@ -46,15 +45,12 @@
%% List twiddling %% List twiddling
permutations/1, perms/1, permutations/1, perms/1,
combinations/1, ordered_combinations/1, combinations/1, ordered_combinations/1,
mk_order/2, mk_order/2
%% Other
wait_for_death/2, wait_for_life/2,
bool2int/1,
int2bool/1,
read_opts_default/1,
ns_info_default/1
]). ]).
%% TODO: Leave this in place?
-compile(export_all).
-include("machi.hrl"). -include("machi.hrl").
-include("machi_projection.hrl"). -include("machi_projection.hrl").
-include_lib("kernel/include/file.hrl"). -include_lib("kernel/include/file.hrl").
@@ -71,20 +67,10 @@ make_regname(Prefix) when is_list(Prefix) ->
%% @doc Calculate a config file path, by common convention. %% @doc Calculate a config file path, by common convention.
-spec make_config_filename(string(), machi_dt:namespace(), machi_dt:locator(), string()) ->
string().
make_config_filename(DataDir, NS, NSLocator, Prefix) ->
NSLocator_str = int_to_hexstr(NSLocator, 32),
lists:flatten(io_lib:format("~s/config/~s^~s^~s",
[DataDir, Prefix, NS, NSLocator_str])).
%% @doc Calculate a config file path, by common convention.
-spec make_config_filename(string(), string()) -> -spec make_config_filename(string(), string()) ->
string(). string().
make_config_filename(DataDir, Filename) -> make_config_filename(DataDir, Prefix) ->
lists:flatten(io_lib:format("~s/config/~s", lists:flatten(io_lib:format("~s/config/~s", [DataDir, Prefix])).
[DataDir, Filename])).
%% @doc Calculate a checksum file path, by common convention. %% @doc Calculate a checksum file path, by common convention.
@@ -105,22 +91,11 @@ make_checksum_filename(DataDir, FileName) ->
%% @doc Calculate a file data file path, by common convention. %% @doc Calculate a file data file path, by common convention.
-spec make_data_filename(string(), machi_dt:namespace(), machi_dt:locator(), string(), atom()|string()|binary(), integer()|string()) -> -spec make_data_filename(string(), string(), atom()|string()|binary(), integer()) ->
{binary(), string()}. {binary(), string()}.
make_data_filename(DataDir, NS, NSLocator, Prefix, SequencerName, FileNum) make_data_filename(DataDir, Prefix, SequencerName, FileNum) ->
when is_integer(FileNum) -> File = erlang:iolist_to_binary(io_lib:format("~s^~s^~w",
NSLocator_str = int_to_hexstr(NSLocator, 32), [Prefix, SequencerName, FileNum])),
File = erlang:iolist_to_binary(io_lib:format("~s^~s^~s^~s^~w",
[Prefix, NS, NSLocator_str, SequencerName, FileNum])),
make_data_filename2(DataDir, File);
make_data_filename(DataDir, NS, NSLocator, Prefix, SequencerName, String)
when is_list(String) ->
NSLocator_str = int_to_hexstr(NSLocator, 32),
File = erlang:iolist_to_binary(io_lib:format("~s^~s^~s^~s^~s",
[Prefix, NS, NSLocator_str, SequencerName, String])),
make_data_filename2(DataDir, File).
make_data_filename2(DataDir, File) ->
FullPath = lists:flatten(io_lib:format("~s/data/~s", [DataDir, File])), FullPath = lists:flatten(io_lib:format("~s/data/~s", [DataDir, File])),
{File, FullPath}. {File, FullPath}.
@@ -149,44 +124,34 @@ make_projection_filename(DataDir, File) ->
-spec is_valid_filename( Filename :: string() ) -> true | false. -spec is_valid_filename( Filename :: string() ) -> true | false.
is_valid_filename(Filename) -> is_valid_filename(Filename) ->
case parse_filename(Filename) of case parse_filename(Filename) of
{} -> false; [] -> false;
{_,_,_,_,_} -> true _ -> true
end. end.
%% @doc Given a machi filename, return a set of components in a list. %% @doc Given a machi filename, return a set of components in a list.
%% The components will be: %% The components will be:
%% <ul> %% <ul>
%% <li>Prefix</li> %% <li>Prefix</li>
%% <li>Cluster namespace</li>
%% <li>Cluster locator</li>
%% <li>UUID</li> %% <li>UUID</li>
%% <li>Sequence number</li> %% <li>Sequence number</li>
%% </ul> %% </ul>
%% %%
%% Invalid filenames will return an empty list. %% Invalid filenames will return an empty list.
-spec parse_filename( Filename :: string() ) -> {} | {string(), machi_dt:namespace(), machi_dt:locator(), string(), string() }. -spec parse_filename( Filename :: string() ) -> [ string() ].
parse_filename(Filename) -> parse_filename(Filename) ->
case string:tokens(Filename, "^") of case string:tokens(Filename, "^") of
[Prefix, NS, NSLocator, UUID, SeqNo] -> [_Prefix, _UUID, _SeqNo] = L -> L;
{Prefix, NS, list_to_integer(NSLocator), UUID, SeqNo}; _ -> []
[Prefix, NSLocator, UUID, SeqNo] ->
%% string:tokens() doesn't consider "foo^^bar" as 3 tokens {sigh}
case re:replace(Filename, "[^^]+", "x", [global,{return,binary}]) of
<<"x^^x^x^x">> ->
{Prefix, <<"">>, list_to_integer(NSLocator), UUID, SeqNo};
_ ->
{}
end;
_ -> {}
end. end.
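%% For instance, on the master (left-hand) side, a name built by
%% make_data_filename/6 with NS "ns1" and locator 0 (hex "00000000", which
%% happens to be all decimal digits, so list_to_integer/1 accepts it) parses
%% back as:
%%
%%     {"myprefix", "ns1", 0, "uuid", "1"} =
%%         machi_util:parse_filename("myprefix^ns1^00000000^uuid^1").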
%% @doc Read the file size of a config file, which is used as the %% @doc Read the file size of a config file, which is used as the
%% basis for a minimum sequence number. %% basis for a minimum sequence number.
-spec read_max_filenum(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> -spec read_max_filenum(string(), string()) ->
non_neg_integer(). non_neg_integer().
read_max_filenum(DataDir, NS, NSLocator, Prefix) -> read_max_filenum(DataDir, Prefix) ->
case file:read_file_info(make_config_filename(DataDir, NS, NSLocator, Prefix)) of case file:read_file_info(make_config_filename(DataDir, Prefix)) of
{error, enoent} -> {error, enoent} ->
0; 0;
{ok, FI} -> {ok, FI} ->
@@ -196,11 +161,11 @@ read_max_filenum(DataDir, NS, NSLocator, Prefix) ->
%% @doc Increase the file size of a config file, which is used as the %% @doc Increase the file size of a config file, which is used as the
%% basis for a minimum sequence number. %% basis for a minimum sequence number.
-spec increment_max_filenum(string(), machi_dt:namespace(), machi_dt:locator(), string()) -> -spec increment_max_filenum(string(), string()) ->
ok | {error, term()}. ok | {error, term()}.
increment_max_filenum(DataDir, NS, NSLocator, Prefix) -> increment_max_filenum(DataDir, Prefix) ->
try try
{ok, FH} = file:open(make_config_filename(DataDir, NS, NSLocator, Prefix), [append]), {ok, FH} = file:open(make_config_filename(DataDir, Prefix), [append]),
ok = file:write(FH, "x"), ok = file:write(FH, "x"),
ok = file:sync(FH), ok = file:sync(FH),
ok = file:close(FH) ok = file:close(FH)
@@ -289,48 +254,22 @@ int_to_hexbin(I, I_size) ->
checksum_chunk(Chunk) when is_binary(Chunk); is_list(Chunk) -> checksum_chunk(Chunk) when is_binary(Chunk); is_list(Chunk) ->
crypto:hash(sha, Chunk). crypto:hash(sha, Chunk).
convert_csum_tag(A) when is_atom(A)->
A;
convert_csum_tag(?CSUM_TAG_NONE) ->
?CSUM_TAG_NONE_ATOM;
convert_csum_tag(?CSUM_TAG_CLIENT_SHA) ->
?CSUM_TAG_CLIENT_SHA_ATOM;
convert_csum_tag(?CSUM_TAG_SERVER_SHA) ->
?CSUM_TAG_SERVER_SHA_ATOM;
convert_csum_tag(?CSUM_TAG_SERVER_REGEN_SHA) ->
?CSUM_TAG_SERVER_REGEN_SHA_ATOM.
%% @doc Create a tagged checksum %% @doc Create a tagged checksum
make_tagged_csum(none) -> make_tagged_csum(none) ->
<<?CSUM_TAG_NONE:8>>; <<?CSUM_TAG_NONE:8>>;
make_tagged_csum(<<>>) ->
<<?CSUM_TAG_NONE:8>>;
make_tagged_csum({Tag, CSum}) -> make_tagged_csum({Tag, CSum}) ->
make_tagged_csum(convert_csum_tag(Tag), CSum). make_tagged_csum(Tag, CSum).
%% @doc Makes tagged csum. Each meanings are: make_tagged_csum(none, _SHA) ->
%% none / ?CSUM_TAG_NONE
%% - a suspicious and nonsense checksum
%% client_sha / ?CSUM_TAG_CLIENT_SHA
%% - a valid checksum given by client and stored in server
%% server_sha / ?CSUM_TAG_SERVER_SHA
%% - a valid checksum generated by and stored in server
%% server_regen_sha / ?CSUM_TAG_SERVER_REGEN_SHA
%% - a valid checksum generated by server in an ad hoc manner, not stored in server
-spec make_tagged_csum(machi_dt:csum_tag(), binary()) -> machi_dt:chunk_csum().
make_tagged_csum(?CSUM_TAG_NONE_ATOM, _SHA) ->
<<?CSUM_TAG_NONE:8>>; <<?CSUM_TAG_NONE:8>>;
make_tagged_csum(?CSUM_TAG_CLIENT_SHA_ATOM, SHA) -> make_tagged_csum(client_sha, SHA) ->
<<?CSUM_TAG_CLIENT_SHA:8, SHA/binary>>; <<?CSUM_TAG_CLIENT_SHA:8, SHA/binary>>;
make_tagged_csum(?CSUM_TAG_SERVER_SHA_ATOM, SHA) -> make_tagged_csum(server_sha, SHA) ->
<<?CSUM_TAG_SERVER_SHA:8, SHA/binary>>; <<?CSUM_TAG_SERVER_SHA:8, SHA/binary>>;
make_tagged_csum(?CSUM_TAG_SERVER_REGEN_SHA_ATOM, SHA) -> make_tagged_csum(server_regen_sha, SHA) ->
<<?CSUM_TAG_SERVER_REGEN_SHA:8, SHA/binary>>. <<?CSUM_TAG_SERVER_REGEN_SHA:8, SHA/binary>>.
make_client_csum(BinOrList) ->
make_tagged_csum(?CSUM_TAG_CLIENT_SHA_ATOM, checksum_chunk(BinOrList)).
unmake_tagged_csum(<<Tag:8, Rest/binary>>) -> unmake_tagged_csum(<<Tag:8, Rest/binary>>) ->
{Tag, Rest}. {Tag, Rest}.
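%% A quick round-trip sketch for tagged checksums (assuming, per the macro
%% names above, that ?CSUM_TAG_CLIENT_SHA_ATOM is the atom client_sha):
%%
%%     SHA = machi_util:checksum_chunk(<<"hello">>),    % 20-byte SHA-1
%%     Tagged = machi_util:make_tagged_csum(client_sha, SHA),
%%     {_Tag, SHA} = machi_util:unmake_tagged_csum(Tagged).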
@@ -378,19 +317,8 @@ wait_for_death(Pid, Iters) when is_pid(Pid) ->
false -> false ->
ok; ok;
true -> true ->
timer:sleep(10),
wait_for_death(Pid, Iters-1)
end.
wait_for_life(Reg, 0) ->
exit({not_alive_yet, Reg});
wait_for_life(Reg, Iters) when is_atom(Reg) ->
case erlang:whereis(Reg) of
Pid when is_pid(Pid) ->
{ok, Pid};
_ ->
timer:sleep(1), timer:sleep(1),
wait_for_life(Reg, Iters-1) wait_for_death(Pid, Iters-1)
end. end.
%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%
@@ -443,23 +371,3 @@ mk_order(UPI2, Repair1) ->
error -> error error -> error
end || X <- UPI2], end || X <- UPI2],
UPI2_order. UPI2_order.
%% C-style conversion for PB usage.
bool2int(true) -> 1;
bool2int(false) -> 0.
int2bool(0) -> false;
int2bool(I) when is_integer(I) -> true.
read_opts_default(#read_opts{}=NSInfo) ->
NSInfo;
read_opts_default(A) when A == 'undefined'; A == 'noopt'; A == 'none' ->
#read_opts{};
read_opts_default(A) when is_atom(A) ->
#read_opts{}.
ns_info_default(#ns_info{}=NSInfo) ->
NSInfo;
ns_info_default(A) when is_atom(A) ->
#ns_info{}.

View file

@@ -22,8 +22,6 @@
-module(machi_yessir_client). -module(machi_yessir_client).
-ifdef(TODO_refactoring_deferred).
-include("machi.hrl"). -include("machi.hrl").
-include("machi_projection.hrl"). -include("machi_projection.hrl").
@@ -32,7 +30,7 @@
append_chunk/4, append_chunk/5, append_chunk/4, append_chunk/5,
append_chunk_extra/5, append_chunk_extra/6, append_chunk_extra/5, append_chunk_extra/6,
read_chunk/5, read_chunk/6, read_chunk/5, read_chunk/6,
checksum_list/2, checksum_list/3, checksum_list/3, checksum_list/4,
list_files/2, list_files/3, list_files/2, list_files/3,
wedge_status/1, wedge_status/2, wedge_status/1, wedge_status/2,
@@ -175,24 +173,24 @@ read_chunk(_Host, _TcpPort, EpochID, File, Offset, Size)
%% @doc Fetch the list of chunk checksums for `File'. %% @doc Fetch the list of chunk checksums for `File'.
checksum_list(#yessir{name=Name,chunk_size=ChunkSize}, File) -> checksum_list(#yessir{name=Name,chunk_size=ChunkSize}, _EpochID, File) ->
case get({Name,offset,File}) of case get({Name,offset,File}) of
undefined -> undefined ->
{error, no_such_file}; {error, no_such_file};
MaxOffset -> MaxOffset ->
C = machi_util:make_tagged_csum(client_sha, C = machi_util:make_tagged_csum(client_sha,
make_csum(Name, ChunkSize)), make_csum(Name, ChunkSize)),
Cs = [{Offset, ChunkSize, C} || Cs = [machi_flu1:encode_csum_file_entry_bin(Offset, ChunkSize, C) ||
Offset <- lists:seq(?MINIMUM_OFFSET, MaxOffset, ChunkSize)], Offset <- lists:seq(?MINIMUM_OFFSET, MaxOffset, ChunkSize)],
{ok, term_to_binary(Cs)} {ok, Cs}
end. end.
%% @doc Fetch the list of chunk checksums for `File'. %% @doc Fetch the list of chunk checksums for `File'.
checksum_list(_Host, _TcpPort, File) -> checksum_list(_Host, _TcpPort, EpochID, File) ->
Sock = connect(#p_srvr{proto_mod=?MODULE}), Sock = connect(#p_srvr{proto_mod=?MODULE}),
try try
checksum_list(Sock, File) checksum_list(Sock, EpochID, File)
after after
disconnect(Sock) disconnect(Sock)
end. end.
@@ -452,7 +450,7 @@ connect(#p_srvr{name=Name, props=Props})->
chunk_size=ChunkSize chunk_size=ChunkSize
}, },
%% Add fake dict entries for these files %% Add fake dict entries for these files
_ = [begin [begin
Prefix = list_to_binary(io_lib:format("fake~w", [X])), Prefix = list_to_binary(io_lib:format("fake~w", [X])),
{ok, _} = append_chunk_extra(Sock, {1,<<"unused">>}, Prefix, <<>>, FileSize) {ok, _} = append_chunk_extra(Sock, {1,<<"unused">>}, Prefix, <<>>, FileSize)
end || X <- lists:seq(1, NumFiles)], end || X <- lists:seq(1, NumFiles)],
@@ -460,10 +458,10 @@ connect(#p_srvr{name=Name, props=Props})->
Sock. Sock.
disconnect(#yessir{name=Name}) -> disconnect(#yessir{name=Name}) ->
_ = [erase(K) || {{N,offset,_}=K, _V} <- get(), N == Name], [erase(K) || {{N,offset,_}=K, _V} <- get(), N == Name],
_ = [erase(K) || {{N,chunk,_}=K, _V} <- get(), N == Name], [erase(K) || {{N,chunk,_}=K, _V} <- get(), N == Name],
_ = [erase(K) || {{N,csum,_}=K, _V} <- get(), N == Name], [erase(K) || {{N,csum,_}=K, _V} <- get(), N == Name],
_ = [erase(K) || {{N,proj,_,_}=K, _V} <- get(), N == Name], [erase(K) || {{N,proj,_,_}=K, _V} <- get(), N == Name],
ok. ok.
%% Example use: %% Example use:
@@ -511,5 +509,3 @@ disconnect(#yessir{name=Name}) ->
%% =INFO REPORT==== 17-May-2015::18:57:52 === %% =INFO REPORT==== 17-May-2015::18:57:52 ===
%% Repair success: tail a of [a] finished ap_mode repair ID {a,{1431,856671,140404}}: ok %% Repair success: tail a of [a] finished ap_mode repair ID {a,{1431,856671,140404}}: ok
%% Stats [{t_in_files,0},{t_in_chunks,10413},{t_in_bytes,682426368},{t_out_files,0},{t_out_chunks,10413},{t_out_bytes,682426368},{t_bad_chunks,0},{t_elapsed_seconds,1.591}] %% Stats [{t_in_files,0},{t_in_chunks,10413},{t_in_bytes,682426368},{t_out_files,0},{t_out_chunks,10413},{t_out_bytes,682426368},{t_bad_chunks,0},{t_elapsed_seconds,1.591}]
-endif. % TODO_refactoring_deferred

View file

@ -33,33 +33,25 @@
-define(FLU_C, machi_flu1_client). -define(FLU_C, machi_flu1_client).
verify_file_checksums_test_() -> verify_file_checksums_test_() ->
{setup, {timeout, 60, fun() -> verify_file_checksums_test2() end}.
fun() -> os:cmd("rm -rf ./data") end,
fun(_) -> os:cmd("rm -rf ./data") end,
{timeout, 60, fun() -> verify_file_checksums_test2() end}
}.
verify_file_checksums_test2() -> verify_file_checksums_test2() ->
Host = "localhost", Host = "localhost",
TcpPort = 32958, TcpPort = 32958,
DataDir = "./data", DataDir = "./data",
W_props = [{initial_wedged, false}], W_props = [{initial_wedged, false}],
NSInfo = undefined, machi_flu1_test:start_flu_package(verify1_flu, TcpPort, DataDir,
NoCSum = <<>>,
try
machi_test_util:start_flu_package(verify1_flu, TcpPort, DataDir,
W_props), W_props),
Sock1 = ?FLU_C:connect(#p_srvr{address=Host, port=TcpPort}), Sock1 = ?FLU_C:connect(#p_srvr{address=Host, port=TcpPort}),
try try
Prefix = <<"verify_prefix">>, Prefix = <<"verify_prefix">>,
NumChunks = 10, NumChunks = 10,
[{ok, _} = ?FLU_C:append_chunk(Sock1, NSInfo, ?DUMMY_PV1_EPOCH, [{ok, _} = ?FLU_C:append_chunk(Sock1, ?DUMMY_PV1_EPOCH,
Prefix, <<X:(X*8)/big>>, NoCSum) || Prefix, <<X:(X*8)/big>>) ||
X <- lists:seq(1, NumChunks)], X <- lists:seq(1, NumChunks)],
{ok, [{_FileSize,File}]} = ?FLU_C:list_files(Sock1, ?DUMMY_PV1_EPOCH), {ok, [{_FileSize,File}]} = ?FLU_C:list_files(Sock1, ?DUMMY_PV1_EPOCH),
?assertEqual({ok, []}, {ok, []} = machi_admin_util:verify_file_checksums_remote(
machi_admin_util:verify_file_checksums_remote( Host, TcpPort, ?DUMMY_PV1_EPOCH, File),
Host, TcpPort, ?DUMMY_PV1_EPOCH, File)),
%% Clobber the first 3 chunks, which are sizes 1/2/3. %% Clobber the first 3 chunks, which are sizes 1/2/3.
{_, Path} = machi_util:make_data_filename(DataDir,binary_to_list(File)), {_, Path} = machi_util:make_data_filename(DataDir,binary_to_list(File)),
@@ -82,10 +74,8 @@ verify_file_checksums_test2() ->
ok ok
after after
catch ?FLU_C:quit(Sock1) catch ?FLU_C:quit(Sock1),
end catch machi_flu1_test:stop_flu_package(verify1_flu)
after
catch machi_test_util:stop_flu_package()
end. end.
-endif. % !PULSE -endif. % !PULSE

View file

@@ -1,616 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
%% EQC single-threaded and concurrent test for file operations and repair
%% under simulated network partition.
%% The main purpose is to confirm no data loss, i.e. every chunk that
%% has been successfully written (ACK received) by an append/write
%% operation can be read after the partition heals.
%%
%% All updating operations -- append, write and trim -- are executed
%% through the CR client, not directly by the flu1 client, in order to
%% be an end-to-end test (from a single chain's point of view). There
%% may be churn in projections caused by the simulated network partition.
%%
%% Test steps
%% 1. Set up a single chain.
%% 2. Execute updating operations and simulated partitions (via eqc_statem).
%% All update results are recorded in ETS tables.
%% 3. When {error, timeout|partition} happens, trigger a management tick
%% for every chain manager process.
%% 4. After commands are executed, remove the partition and wait for a
%% chain with no down nodes nor repairing nodes.
%% 5. Assert on the written results: each record can be read back from
%% the chain and its data matches what was written.
%% Improvement to-dos
%% - Use higher concurrency, e.g. 10+
%% - Random length for binary to write
%% - Operations other than append, write, trim
%% - Use checksum instead of binary to save memory
%% - More variety for partitioning pattern: non-constant failure
%% - Stop and restart
%% - Suspend and resume of some erlang processes
-module(machi_ap_repair_eqc).
-ifdef(TEST).
-ifdef(EQC).
-compile(export_all).
-include("machi.hrl").
-include("machi_projection.hrl").
-include("machi_verbose.hrl").
-include_lib("eqc/include/eqc.hrl").
-include_lib("eqc/include/eqc_statem.hrl").
-include_lib("eunit/include/eunit.hrl").
-record(target, {verbose=false,
flu_names,
mgr_names}).
-record(state, {num,
verbose=false,
flu_names,
mgr_names,
cr_count}).
%% ETS table names
-define(WRITTEN_TAB, written). % Successfully written data
-define(ACCPT_TAB, accpt). % Errors with no harm, e.g. timeout
-define(FAILED_TAB, failed). % Uncategorized errors; when one happens,
% it should be re-categorized as acceptable or critical
-define(CRITICAL_TAB, critical). % Critical errors, e.g. double write to the same key
-define(QC_OUT(P),
eqc:on_output(fun(Str, Args) -> io:format(user, Str, Args) end, P)).
%% EUNIT TEST DEFINITION
prop_repair_test_() ->
{PropTO, EUnitTO} = eqc_timeout(60),
Verbose = eqc_verbose(),
{spawn,
[{timeout, EUnitTO,
?_assertEqual(
true,
eqc:quickcheck(eqc:testing_time(
PropTO, ?QC_OUT(noshrink(prop_repair(Verbose))))))}]}.
prop_repair_par_test_() ->
{PropTO, EUnitTO} = eqc_timeout(60),
Verbose = eqc_verbose(),
{spawn,
[{timeout, EUnitTO,
?_assertEqual(
true,
eqc:quickcheck(eqc:testing_time(
PropTO, ?QC_OUT(noshrink(prop_repair_par(Verbose))))))}]}.
%% Model
weight(_S, change_partition) -> 20;
weight(_S, _) -> 100.
%% Append
append_args(#state{cr_count=CRCount}=S) ->
[choose(1, CRCount), chunk(), S].
append(CRIndex, Bin, #state{verbose=V}=S) ->
CRList = cr_list(),
{_SimSelfName, C} = lists:nth(CRIndex, CRList),
Prefix = <<"pre">>,
Len = byte_size(Bin),
NSInfo = #ns_info{},
NoCSum = <<>>,
Opts1 = #append_opts{},
Res = (catch machi_cr_client:append_chunk(C, NSInfo, Prefix, Bin, NoCSum, Opts1, sec(1))),
case Res of
{ok, {_Off, Len, _FileName}=Key} ->
case ets:insert_new(?WRITTEN_TAB, {Key, Bin}) of
true ->
[?V("<o>", []) || V],
ok;
false ->
%% The Key is already written, WHY!!!????
case ets:lookup(?WRITTEN_TAB, Key) of
[{Key, Bin}] ->
%% TODO: The identical binary is already inserted in the
%% written table. Is this acceptable??? Hmm, maybe NO...
[?V("<dws:~w>", [Key]) || V],
true = ets:insert_new(?ACCPT_TAB,
{make_ref(), double_write_same, Key}),
{acceptable_error, doublewrite_the_same};
[{Key, OtherBin}] ->
[?V("<dwd:~w:~w>", [Key, {OtherBin, Bin}]) || V],
true = ets:insert_new(?CRITICAL_TAB,
{make_ref(), double_write_diff, Key}),
R = {critical_error,
{doublewrite_diff, Key, {OtherBin, Bin}}},
%% TODO: when a double write happens, the repair
%% process seems to get stuck in an endless loop. To
%% avoid that, return an error here.
%% If this error/1 were removed, one could possibly
%% measure the double-write frequency/rate.
error(R)
end
end;
{error, partition} ->
[?V("<pt>", []) || V],
true = ets:insert_new(?ACCPT_TAB, {make_ref(), timeout}),
_ = tick(S),
{acceptable_error, partition};
{'EXIT', {timeout, _}} ->
[?V("<to:~w:~w>", [_SimSelfName, C]) || V],
true = ets:insert_new(?ACCPT_TAB, {make_ref(), timeout}),
_ = tick(S),
{acceptable_error, timeout};
{ok, {_Off, UnexpectedLen, _FileName}=Key} ->
[?V("<XX>", []) || V],
true = ets:insert_new(?CRITICAL_TAB, {make_ref(), unexpected_len, Key}),
{critical_error, {unexpected_len, Key, Len, UnexpectedLen}};
{error, _Reason} = Error ->
[?V("<er>", []) || V],
true = ets:insert_new(?FAILED_TAB, {make_ref(), Error}),
{other_error, Error};
Other ->
[?V("<er>", []) || V],
true = ets:insert_new(?FAILED_TAB, {make_ref(), Other}),
{other_error, Other}
end.
%% Change partition
change_partition_args(#state{flu_names=FLUNames}=S) ->
%% [partition(FLUNames), S].
[partition_sym(FLUNames), S].
change_partition(Partition,
#state{verbose=Verbose, flu_names=FLUNames}=S) ->
[case Partition of
[] -> ?V("## Turn OFF partition: ~w~n", [Partition]);
_ -> ?V("## Turn ON partition: ~w~n", [Partition])
end || Verbose],
machi_partition_simulator:always_these_partitions(Partition),
_ = machi_partition_simulator:get(FLUNames),
%% Don't wait for a stable chain; a tick will be executed on demand
%% in append operations
_ = tick(S),
ok.
%% Generators
num() ->
choose(2, 5).
cr_count(Num) ->
Num * 3.
%% Returns a list like
%% `[{#p_srvr{name=a, port=7501, ..}, "./eqc/data.eqc.a/"}, ...]'
all_list_extra(Num) ->
{PortBase, DirBase} = get_port_dir_base(),
[begin
FLUNameStr = [$a + I - 1],
FLUName = list_to_atom(FLUNameStr),
MgrName = machi_flu_psup:make_mgr_supname(FLUName),
{#p_srvr{name=FLUName, address="localhost", port=PortBase+I,
props=[{chmgr, MgrName}]},
DirBase ++ "/data.eqc." ++ FLUNameStr}
end || I <- lists:seq(1, Num)].
sublist(L) ->
?LET(K, nat(),
?LET(L2, eqc_gen:vector(K, eqc_gen:oneof(L)),
lists:usort(L2))).
%% Generator for possibly asymmetric partition information
partition(FLUNames) ->
frequency([{10, return([])},
{20, non_empty(sublist(flu_ordered_pairs(FLUNames)))}]).
%% Generator for symmetric partition information
partition_sym(FLUNames) ->
?LET(Pairs, non_empty(sublist(flu_pairs(FLUNames))),
lists:flatmap(fun({One, Another}) -> [{One, Another}, {Another, One}] end,
Pairs)).
flu_ordered_pairs(FLUNames) ->
[{From, To} || From <- FLUNames, To <- FLUNames, From =/= To].
flu_pairs(FLUNames) ->
[{One, Another} || One <- FLUNames, Another <- FLUNames, One > Another].
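%% Illustrative helper (an assumption, not in the original suite): a
%% partition list produced by partition_sym/1 should contain the mirror
%% of every one-way drop, which can be checked like this:
is_symmetric(Partition) ->
    lists:all(fun({From, To}) -> lists:member({To, From}, Partition) end,
              Partition).
%% e.g. is_symmetric([{a,b},{b,a}]) -> true; is_symmetric([{a,b}]) -> false.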
chunk() ->
non_empty(binary(10)).
%% Properties
prop_repair(Verbose) ->
error_logger:tty(false),
application:load(sasl),
application:set_env(sasl, sasl_error_logger, false),
Seed = {1445,935441,287549},
?FORALL(Num, num(),
?FORALL(Cmds, commands(?MODULE, initial_state(Num, Verbose)),
begin
Target = setup_target(Num, Seed, Verbose),
{H, S1, Res0} = run_commands(?MODULE, Cmds),
%% ?V("S1=~w~n", [S1]),
?V("==== Start post operations, stabilize and confirm results~n", []),
_ = stabilize(commands_len(Cmds), Target),
{Dataloss, Critical} = confirm_result(Target),
_ = cleanup(Target),
pretty_commands(
?MODULE, Cmds, {H, S1, Res0},
aggregate(with_title(cmds), command_names(Cmds),
collect(with_title(length5), (length(Cmds) div 5) * 5,
{Dataloss, Critical} =:= {0, 0})))
end)).
prop_repair_par(Verbose) ->
error_logger:tty(false),
application:load(sasl),
application:set_env(sasl, sasl_error_logger, false),
Seed = {1445,935441,287549},
?FORALL(Num, num(),
?FORALL(Cmds,
%% Still trial-and-error: how best to control command length and concurrency?
?SUCHTHAT(Cmds0, ?SIZED(Size, resize(Size,
parallel_commands(?MODULE, initial_state(Num, Verbose)))),
commands_len(Cmds0) > 20
andalso
concurrency(Cmds0) > 2),
begin
CmdsLen = commands_len(Cmds),
Target = setup_target(Num, Seed, Verbose),
{Seq, Par, Res0} = run_parallel_commands(?MODULE, Cmds),
%% ?V("Seq=~w~n", [Seq]),
%% ?V("Par=~w~n", [Par]),
?V("==== Start post operations, stabilize and confirm results~n", []),
{FinalRes, {Dataloss, Critical}} =
case Res0 of
ok ->
Res1 = stabilize(CmdsLen, Target),
{Res1, confirm_result(Target)};
_ ->
?V("Res0=~w~n", [Res0]),
{Res0, {undefined, undefined}}
end,
_ = cleanup(Target),
%% Processes may be leaking? This log line can be removed once that is fixed.
[?V("process_count=~w~n", [erlang:system_info(process_count)]) || Verbose],
pretty_commands(
?MODULE, Cmds, {Seq, Par, Res0},
aggregate(with_title(cmds), command_names(Cmds),
collect(with_title(length5), (CmdsLen div 5) * 5,
collect(with_title(conc), concurrency(Cmds),
{FinalRes, {Dataloss, Critical}} =:= {ok, {0, 0}})))
)
end)).
%% Initialization / setup
%% Fake initialization function, for debugging in the shell like:
%% > eqc_gen:sample(eqc_statem:commands(machi_ap_repair_eqc)).
%% but it is not very helpful.
initial_state() ->
#state{cr_count=3}.
initial_state(Num, Verbose) ->
AllListE = all_list_extra(Num),
FLUNames = [P#p_srvr.name || {P, _Dir} <- AllListE],
MgrNames = [{Name, machi_flu_psup:make_mgr_supname(Name)} || Name <- FLUNames],
#state{num=Num, verbose=Verbose,
flu_names=FLUNames, mgr_names=MgrNames,
cr_count=cr_count(Num)}.
setup_target(Num, Seed, Verbose) ->
%% ?V("setup_target(Num=~w, Seed=~w~nn", [Num, Seed]),
AllListE = all_list_extra(Num),
FLUNames = [P#p_srvr.name || {P, _Dir} <- AllListE],
MgrNames = [{Name, machi_flu_psup:make_mgr_supname(Name)} || Name <- FLUNames],
Dict = orddict:from_list([{P#p_srvr.name, P} || {P, _Dir} <- AllListE]),
setup_chain(Seed, AllListE, FLUNames, MgrNames, Dict),
_ = setup_cpool(AllListE, FLUNames, Dict),
Target = #target{flu_names=FLUNames, mgr_names=MgrNames,
verbose=Verbose},
%% Don't wait for a complete chain. Even partially completed, the chain
%% should work fine. Right?
wait_until_stable(chain_state_all_ok(FLUNames), FLUNames, MgrNames,
20, Verbose),
Target.
setup_chain(Seed, AllListE, FLUNames, MgrNames, Dict) ->
ok = shutdown_hard(),
[begin
machi_test_util:clean_up_dir(Dir),
filelib:ensure_dir(Dir ++ "/not-used")
end || {_P, Dir} <- AllListE],
[catch ets:delete(T) || T <- tabs()],
[ets:new(T, [set, public, named_table,
{write_concurrency, true}, {read_concurrency, true}]) ||
T <- tabs()],
{ok, _} = application:ensure_all_started(machi),
SimSpec = {part_sim,
{machi_partition_simulator, start_link, [{0,0,0}, 0, 100]},
permanent, 500, worker, []},
{ok, _PSimPid} = supervisor:start_child(machi_sup, SimSpec),
ok = machi_partition_simulator:set_seed(Seed),
_Partitions = machi_partition_simulator:get(FLUNames),
%% Start FLUs and setup the chain
FLUOpts = [{use_partition_simulator, true},
%% {private_write_verbose, true},
{active_mode, false},
{simulate_repair, false}],
[{ok, _} = machi_flu_psup:start_flu_package(Name, Port, Dir, FLUOpts) ||
{#p_srvr{name=Name, port=Port}, Dir} <- AllListE],
[machi_chain_manager1:set_chain_members(MgrName, Dict) || {_, MgrName} <- MgrNames],
ok.
setup_cpool(AllListE, FLUNames, Dict) ->
Num = length(AllListE),
FCList = [begin
{ok, PCPid} = machi_proxy_flu1_client:start_link(P),
{Name, PCPid}
end || {_, #p_srvr{name=Name}=P} <- Dict],
%% CR clients are pooled; each has a "name" which is interpreted as the
%% "From" side of a simulated partition.
SimSelfNames = lists:append(lists:duplicate(cr_count(Num), FLUNames)),
CRList = [begin
{ok, C} = machi_cr_client:start_link(
[P || {_, P} <- Dict],
[{use_partition_simulator, true},
{simulator_self_name, SimSelfName},
{simulator_members, FLUNames}]),
{SimSelfName, C}
end || SimSelfName <- SimSelfNames],
catch ets:delete(cpool),
ets:new(cpool, [set, protected, named_table, {read_concurrency, true}]),
ets:insert(cpool, {fc_list, FCList}),
ets:insert(cpool, {cr_list, CRList}),
{CRList, FCList}.
fc_list() ->
[{fc_list, FCList}] = ets:lookup(cpool, fc_list),
FCList.
cr_list() ->
[{cr_list, CRList}] = ets:lookup(cpool, cr_list),
CRList.
%% Post run_commands
stabilize(0, _T) ->
ok;
stabilize(_CmdsLen, #target{flu_names=FLUNames, mgr_names=MgrNames,
verbose=Verbose}) ->
machi_partition_simulator:no_partitions(),
true = wait_until_stable(chain_state_all_ok(FLUNames), FLUNames, MgrNames,
100, Verbose),
ok.
chain_state_all_ok(FLUNames) ->
[{FLUName, {FLUNames, [], []}} || FLUName <- FLUNames].
confirm_result(_T) ->
[{_, C} | _] = cr_list(),
[{written, _Written}, {accpt, Accpt},
{failed, Failed}, {critical, Critical}] = tab_counts(),
{OK, Dataloss} = confirm_written(C),
?V(" Written=~w, DATALOSS=~w, Acceptable=~w~n", [OK, Dataloss, Accpt]),
?V(" Failed=~w, Critical=~w~n~n", [Failed, Critical]),
DirBase = get_dir_base(),
Suffix = dump_file_suffix(),
case Failed of
0 -> ok;
_ ->
DumpFailed = filename:join(DirBase, "dump-failed-" ++ Suffix),
?V("Dump failed ETS tab to: ~s~n", [DumpFailed]),
ets:tab2file(?FAILED_TAB, DumpFailed)
end,
case Critical of
0 -> ok;
_ ->
DumpCritical = filename:join(DirBase, "dump-critical-" ++ Suffix),
?V("Dump critical ETS tab to: ~w~n", [DumpCritical]),
ets:tab2file(?CRITICAL_TAB, DumpCritical)
end,
{Dataloss, Critical}.
confirm_written(C) ->
ets:foldl(
fun({Key, Bin}, {OK, NG}) ->
case assert_chunk(C, Key, Bin) of
ok -> {OK+1, NG};
{error, _} -> {OK, NG+1}
end
end, {0, 0}, ?WRITTEN_TAB).
assert_chunk(C, {Off, Len, FileName}=Key, Bin) ->
%% TODO: This is probably a bug; read_chunk responds with a filename of `string()' type
%% TODO: Use CSum instead of binary (after the discussion about CSum has calmed down?)
NSInfo = undefined,
case (catch machi_cr_client:read_chunk(C, NSInfo, FileName, Off, Len, undefined, sec(3))) of
{ok, {[{FileName, Off, Bin, _}], []}} ->
ok;
{ok, Got} ->
?V("read_chunk got different binary for Key=~p~n", [Key]),
?V(" Expected: ~p~n", [{[{FileName, Off, Bin, <<"CSum-NYI">>}], []}]),
?V(" Got: ~p~n", [Got]),
{error, different_binary};
{error, Reason} ->
?V("read_chunk error for Key=~p: ~p~n", [Key, Reason]),
{error, Reason};
Other ->
?V("read_chunk other error for Key=~p: ~p~n", [Key, Other]),
{error, Other}
end.
cleanup(_Target) ->
[begin unlink(FC), catch exit(FC, kill) end || {_, FC} <- fc_list()],
[begin unlink(CR), catch exit(CR, kill) end || {_, CR} <- cr_list()],
_ = shutdown_hard().
%% Internal misc utilities
eqc_verbose() ->
os:getenv("EQC_VERBOSE") =:= "true".
eqc_timeout(Default) ->
PropTimeout = case os:getenv("EQC_TIME") of
false -> Default;
V -> list_to_integer(V)
end,
{PropTimeout, PropTimeout * 300}.
get_port_dir_base() ->
I = case os:getenv("EQC_BASE_PORT") of
false -> 0;
V -> list_to_integer(V)
end,
D = get_dir_base(),
{7400 + (I * 100), D ++ "/" ++ integer_to_list(I)}.
get_dir_base() ->
case os:getenv("EQC_BASE_DIR") of
false -> "./eqc";
DD -> DD
end.
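%% Usage note (illustrative): the environment variables read above tune
%% a run without code changes. For example, under these assumptions:
%%   $ EQC_TIME=120 EQC_VERBOSE=true EQC_BASE_PORT=3 EQC_BASE_DIR=/tmp/eqc <test runner>
%% eqc_timeout(60) yields {120, 36000} (the EUnit timeout is 300x the
%% property timeout), FLU ports start at 7400 + 3*100 = 7700, and data
%% directories land under "/tmp/eqc/3".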
shutdown_hard() ->
_STOP = application:stop(machi),
timer:sleep(100).
tick(#state{flu_names=FLUNames, mgr_names=MgrNames,
verbose=Verbose}) ->
tick(FLUNames, MgrNames, Verbose).
tick(FLUNames, MgrNames, Verbose) ->
tick(FLUNames, MgrNames, 2, 100, Verbose).
tick(FLUNames, MgrNames, Iter, SleepMax, Verbose) ->
TickFun = tick_fun(FLUNames, MgrNames, self()),
TickFun(Iter, 0, SleepMax),
FCList = fc_list(),
[?V("## Chain state after tick()=~w~n", [chain_state(FCList)]) || Verbose].
tick_fun(FLUNames, MgrNames, Parent) ->
fun(Iters, SleepMin, SleepMax) ->
%% ?V("^", []),
Trigger =
fun(FLUName, MgrName) ->
random:seed(now()),
[begin
erlang:yield(),
SleepMaxRand = random:uniform(SleepMax + 1),
%% io:format(user, "{t}", []),
Elapsed = machi_chain_manager1:sleep_ranked_order(
SleepMin, SleepMaxRand,
FLUName, FLUNames),
MgrName ! tick_check_environment,
%% Be more unfair by not sleeping here.
timer:sleep(max(SleepMax - Elapsed, 1)),
ok
end || _ <- lists:seq(1, Iters)],
Parent ! {done, self()}
end,
Pids = [{spawn(fun() -> Trigger(FLUName, MgrName) end), FLUName} ||
{FLUName, MgrName} <- MgrNames ],
[receive
{done, ThePid} ->
ok
after 120*1000 ->
exit({icky_timeout, M_name})
end || {ThePid, M_name} <- Pids]
end.
wait_until_stable(ExpectedChainState, FLUNames, MgrNames, Verbose) ->
wait_until_stable(ExpectedChainState, FLUNames, MgrNames, 20, Verbose).
wait_until_stable(ExpectedChainState, FLUNames, MgrNames, Retries, Verbose) ->
TickFun = tick_fun(FLUNames, MgrNames, self()),
FCList = fc_list(),
wait_until_stable1(ExpectedChainState, TickFun, FCList, Retries, Verbose).
wait_until_stable1(ExpectedChainState, _TickFun, FCList, 0, _Verbose) ->
    ?V(" [ERROR] ExpectedChainState ~p\n", [ExpectedChainState]),
    ?V(" [ERROR] wait_until_stable failed.... : ~p~n", [chain_state(FCList)]),
    ?V(" [ERROR] norm.... : ~p~n", [normalize_chain_state(chain_state(FCList))]),
    false;
wait_until_stable1(ExpectedChainState, TickFun, FCList, Retries, Verbose) ->
[TickFun(3, 0, 100) || _ <- lists:seq(1, 3)],
Normalized = normalize_chain_state(chain_state(FCList)),
case Normalized of
ExpectedChainState ->
[?V(" Got stable chain: ~w~n", [chain_state(FCList)]) || Verbose],
true;
_ ->
[?V(" NOT YET stable chain: ~w~n", [chain_state(FCList)]) || Verbose],
wait_until_stable1(ExpectedChainState, TickFun, FCList, Retries-1, Verbose)
end.
normalize_chain_state(ChainState) ->
lists:usort([{FLUName,
{lists:usort(UPI), lists:usort(Repairing), lists:usort(Down)}} ||
{FLUName, {_EpochNo, UPI, Repairing, Down}} <- ChainState]).
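%% Example (illustrative): [{a,{5,[b,a],[],[c]}}, {b,{4,[a,b],[],[c]}}]
%% normalizes to [{a,{[a,b],[],[c]}}, {b,{[a,b],[],[c]}}]: epoch numbers
%% are dropped and every member list is sorted, so two chains compare
%% equal when they agree on membership regardless of epoch.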
chain_state(FCList) ->
lists:usort(
[case (catch machi_proxy_flu1_client:read_latest_projection(C, private, sec(5))) of
{ok, #projection_v1{epoch_number=EpochNo, upi=UPI,
repairing=Repairing, down=Down}} ->
{FLUName, {EpochNo, UPI, Repairing, Down}};
Other ->
{FLUName, Other}
end || {FLUName, C} <- FCList]).
tabs() -> [?WRITTEN_TAB, ?ACCPT_TAB, ?FAILED_TAB, ?CRITICAL_TAB].
tab_counts() ->
[{T, ets:info(T, size)} || T <- tabs()].
sec(Sec) ->
timer:seconds(Sec).
commands_len({SeqCmds, ParCmdsList} = _Cmds) ->
lists:sum([length(SeqCmds) | [length(P) || P <- ParCmdsList]]);
commands_len(Cmds) ->
length(Cmds).
concurrency({_SeqCmds, ParCmdsList} = _Cmds) -> length(ParCmdsList);
concurrency(_) -> 1.
dump_file_suffix() ->
{{Year, Month, Day}, {Hour, Min, Sec}} = calendar:local_time(),
lists:flatten(
io_lib:format("~4.10.0B-~2.10.0B-~2.10.0BT~2.10.0B:~2.10.0B:~2.10.0B.000Z",
[Year, Month, Day, Hour, Min, Sec])).
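%% Example output (illustrative): {{2015,10,27},{13,45,9}} formats as
%% "2015-10-27T13:45:09.000Z". Note the "Z" suffix is literal even
%% though calendar:local_time/0 is used, so the stamp is local time.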
-endif. % EQC
-endif. % TEST

View file

@ -45,18 +45,14 @@
-include_lib("eunit/include/eunit.hrl"). -include_lib("eunit/include/eunit.hrl").
help() ->
io:format("~s\n", [short_doc()]).
short_doc() -> short_doc() ->
" "
A visualization of the convergence behavior of the chain self-management A visualization of the convergence behavior of the chain self-management
algorithm for Machi. algorithm for Machi.
1. Set up 4 FLUs and chain manager pairs.
1. Set up some server and chain manager pairs.
2. Create a number of different network partition scenarios, where 2. Create a number of different network partition scenarios, where
(simulated) partitions may be symmetric or asymmetric. Then stop changing (simulated) partitions may be symmetric or asymmetric. Then halt changing
the partitions and keep the simulated network stable (and perhaps broken). the partitions and keep the simulated network stable and broken.
3. Run a number of iterations of the algorithm in parallel by poking each 3. Run a number of iterations of the algorithm in parallel by poking each
of the manager processes on a random'ish basis. of the manager processes on a random'ish basis.
4. Afterward, fetch the chain transition changes made by each FLU and 4. Afterward, fetch the chain transition changes made by each FLU and
@ -65,65 +61,73 @@ algorithm for Machi.
During the iteration periods, the following is a cheatsheet for the output. During the iteration periods, the following is a cheatsheet for the output.
See the internal source for interpreting the rest of the output. See the internal source for interpreting the rest of the output.
'SET partitions = ' 'Let loose the dogs of war!' Network instability
'SET partitions = ' Network stability (but broken)
'x uses:' The FLU x has made an internal state transition. The rest of
the line is a dump of internal state.
'{t}' This is a tick event which triggers one of the manager processes
to evaluate its environment and perhaps make a state transition.
A pair-wise list of actors which cannot send messages. The A long chain of '{t}{t}{t}{t}' means that the chain state has settled
list is uni-directional. If there are three servers (a,b,c), to a stable configuration, which is the goal of the algorithm.
and if the partitions list is '[{a,b},{b,c}]' then all Press control-c to interrupt....".
messages from a->b and b->c will be dropped, but any other
sender->recipient messages will be delivered successfully.
'x uses:' long_doc() ->
"
'Let loose the dogs of war!'
The FLU x has made an internal state transition and is using The simulated network is very unstable for a few seconds.
this epoch's projection as operating chain configuration. The
rest of the line is a summary of the projection.
'CONFIRM epoch {N}' 'x uses'
This message confirms that all of the servers listed in the After a single iteration, server x has determined that the chain
UPI and repairing lists of the projection at epoch {N} have should be defined by the upi, repair, and down list in this record.
agreed to use this projection because they all have written If all participants reach the same conclusion at the same epoch
this projection to their respective private projection stores. number (and checksum, see next item below), then the chain is
The chain is now usable by/available to all clients. stable, fully configured, and can provide full service.
'Sweet, private projections are stable' 'epoch,E'
This report announces that this iteration of the test cycle The epoch number for this decision is E. The checksum of the full
has passed successfully. The report that follows briefly record is not shown. For purposes of the protocol, a server will
summarizes the latest private projection used by each 'wedge' itself and refuse service (until a new config is chosen)
participating server. For example, when in strong consistency whenever: a). it sees a bigger epoch number mentioned somewhere, or
mode with 'a' as a witness and 'b' and 'c' as real servers: b). it sees the same epoch number but a different checksum. In case
of b), there was a network partition that has healed, and both sides
had chosen to operate with an identical epoch number but different
chain configs.
%% Legend: 'upi', 'repair', and 'down'
%% server name, epoch ID, UPI list, repairing list, down list, ...
%% ... witness list, 'false' (a constant value)
[{a,{{1116,<<23,143,246,55>>},[a,b],[],[c],[a],false}}, Members in the chain that are fully in sync and thus preserving the
{b,{{1116,<<23,143,246,55>>},[a,b],[],[c],[a],false}}] Update Propagation Invariant, up but under repair (simulated), and
down, respectively.
Both servers 'a' and 'b' agree on epoch 1116 with epoch ID 'ps,[some list]'
{1116,<<23,143,246,55>>} where UPI=[a,b], repairing=[],
down=[c], and witnesses=[a].
Server 'c' is not shown because 'c' has wedged itself OOS (out The list of asymmetric network partitions. {a,b} means that a
of service) by configuring a chain length of zero. cannot send to b, but b can send to a.
If no servers are listed in the report (i.e. only '[]' is This partition list is recorded for debugging purposes but is *not*
displayed), then all servers have wedged themselves OOS, and used by the algorithm. The algorithm only 'feels' its effects via
the chain is unavailable. simulated timeout whenever there's a partition in one of the
messaging directions.
'DoIt,' 'nodes_up,[list]'
This marks a group of tick events which trigger the manager The best guess right now of which nodes are up, relative to the
processes to evaluate their environment and perhaps make a author node, specified by '{author,X}'
state transition.
A long chain of 'DoIt,DoIt,DoIt,' means that the chain state has 'SET partitions = [some list]'
(probably) settled to a stable configuration, which is the goal of the
algorithm.
Press control-c to interrupt the test....". All subsequent iterations should have a stable list of partitions,
i.e. the 'ps' list described should be stable.
'{FLAP: x flaps n}!'
Server x has detected that it's flapping/oscillating after iteration
n of a naive/1st draft detection algorithm.
".
%% ' silly Emacs syntax highlighting.... %% ' silly Emacs syntax highlighting....
@ -134,7 +138,6 @@ Press control-c to interrupt the test....".
%% convergence_demo_testfun(3). %% convergence_demo_testfun(3).
-define(DEFAULT_MGR_OPTS, [{private_write_verbose, false}, -define(DEFAULT_MGR_OPTS, [{private_write_verbose, false},
{private_write_verbose_confirm, true},
{active_mode,false}, {active_mode,false},
{use_partition_simulator, true}]). {use_partition_simulator, true}]).
@ -151,8 +154,7 @@ convergence_demo_testfun(NumFLUs, MgrOpts0) ->
%% Faster test startup, commented: io:format(user, short_doc(), []), %% Faster test startup, commented: io:format(user, short_doc(), []),
%% Faster test startup, commented: timer:sleep(3000), %% Faster test startup, commented: timer:sleep(3000),
Apps = [sasl, ranch], application:start(sasl),
[application:start(App) || App <- Apps],
MgrOpts = MgrOpts0 ++ ?DEFAULT_MGR_OPTS, MgrOpts = MgrOpts0 ++ ?DEFAULT_MGR_OPTS,
TcpPort = proplists:get_value(port_base, MgrOpts, 62877), TcpPort = proplists:get_value(port_base, MgrOpts, 62877),
@ -189,18 +191,15 @@ convergence_demo_testfun(NumFLUs, MgrOpts0) ->
end || #p_srvr{name=Name}=P <- Ps], end || #p_srvr{name=Name}=P <- Ps],
MembersDict = machi_projection:make_members_dict(Ps), MembersDict = machi_projection:make_members_dict(Ps),
Witnesses = proplists:get_value(witnesses, MgrOpts, []), Witnesses = proplists:get_value(witnesses, MgrOpts, []),
CMode = case {Witnesses, proplists:get_value(consistency_mode, MgrOpts,
ap_mode)} of
{[_|_], _} -> cp_mode;
{_, cp_mode} -> cp_mode;
{_, ap_mode} -> ap_mode
end,
MgrNamez = [begin MgrNamez = [begin
MgrName = machi_flu_psup:make_mgr_supname(Name), MgrName = machi_flu_psup:make_mgr_supname(Name),
ok = ?MGR:set_chain_members(MgrName, ch_demo, 0, CMode, ok = ?MGR:set_chain_members(MgrName,MembersDict,Witnesses),
MembersDict,Witnesses),
{Name, MgrName} {Name, MgrName}
end || #p_srvr{name=Name} <- Ps], end || #p_srvr{name=Name} <- Ps],
CpApMode = case Witnesses /= [] of
true -> cp_mode;
false -> ap_mode
end,
try try
[{_, Ma}|_] = MgrNamez, [{_, Ma}|_] = MgrNamez,
@ -296,7 +295,7 @@ convergence_demo_testfun(NumFLUs, MgrOpts0) ->
private_projections_are_stable(Namez, DoIt) private_projections_are_stable(Namez, DoIt)
end, false, lists:seq(0, MaxIters)), end, false, lists:seq(0, MaxIters)),
io:format(user, "\n~s Sweet, private projections are stable\n", [machi_util:pretty_time()]), io:format(user, "\n~s Sweet, private projections are stable\n", [machi_util:pretty_time()]),
io:format(user, "\t~P\n", [get(stable), 24]), io:format(user, "\t~P\n", [get(stable), 14]),
io:format(user, "Rolling sanity check ... ", []), io:format(user, "Rolling sanity check ... ", []),
PrivProjs = [{Name, begin PrivProjs = [{Name, begin
{ok, Ps8} = ?FLU_PC:get_all_projections( {ok, Ps8} = ?FLU_PC:get_all_projections(
@ -308,9 +307,9 @@ convergence_demo_testfun(NumFLUs, MgrOpts0) ->
[{FLU, true} = {FLU, ?MGR:projection_transitions_are_sane_retrospective(Psx, FLU)} || [{FLU, true} = {FLU, ?MGR:projection_transitions_are_sane_retrospective(Psx, FLU)} ||
{FLU, Psx} <- PrivProjs] {FLU, Psx} <- PrivProjs]
catch catch
_Err:_What when CMode == cp_mode -> _Err:_What when CpApMode == cp_mode ->
io:format(user, "none proj skip detected, TODO? ", []); io:format(user, "none proj skip detected, TODO? ", []);
_Err:_What when CMode == ap_mode -> _Err:_What when CpApMode == ap_mode ->
io:format(user, "PrivProjs ~p\n", [PrivProjs]), io:format(user, "PrivProjs ~p\n", [PrivProjs]),
exit({line, ?LINE, _Err, _What}) exit({line, ?LINE, _Err, _What})
end, end,
@ -376,9 +375,9 @@ timer:sleep(1234),
{FLU, Psx} <- PrivProjs], {FLU, Psx} <- PrivProjs],
io:format(user, "\nAll sanity checks pass, hooray!\n", []) io:format(user, "\nAll sanity checks pass, hooray!\n", [])
catch catch
_Err:_What when CMode == cp_mode -> _Err:_What when CpApMode == cp_mode ->
io:format(user, "none proj skip detected, TODO? ", []); io:format(user, "none proj skip detected, TODO? ", []);
_Err:_What when CMode == ap_mode -> _Err:_What when CpApMode == ap_mode ->
io:format(user, "Report ~p\n", [Report]), io:format(user, "Report ~p\n", [Report]),
io:format(user, "PrivProjs ~p\n", [PrivProjs]), io:format(user, "PrivProjs ~p\n", [PrivProjs]),
exit({line, ?LINE, _Err, _What}) exit({line, ?LINE, _Err, _What})
@ -395,8 +394,7 @@ timer:sleep(1234),
exit(SupPid, normal), exit(SupPid, normal),
ok = machi_partition_simulator:stop(), ok = machi_partition_simulator:stop(),
[ok = ?FLU_PC:quit(PPid) || {_, PPid} <- Namez], [ok = ?FLU_PC:quit(PPid) || {_, PPid} <- Namez],
machi_util:wait_for_death(SupPid, 100), machi_util:wait_for_death(SupPid, 100)
[application:start(App) || App <- lists:reverse(Apps)]
end. end.
%% Many of the static partition lists below have been problematic at one %% Many of the static partition lists below have been problematic at one
@ -721,7 +719,7 @@ private_projections_are_stable(Namez, PollFunc) ->
true true
end, end,
%% io:format(user, "\nPriv1 ~p\nPriv2 ~p\n1==2 ~w ap_disjoint ~w u_all_peers ~w cp_mode_agree ~w\n", [lists:sort(Private1), lists:sort(Private2), Private1 == Private2, AP_mode_disjoint_test_p, Unanimous_with_all_peers_p, CP_mode_agree_test_p]), io:format(user, "\nPriv1 ~p\nPriv2 ~p\n1==2 ~w ap_disjoint ~w u_all_peers ~w cp_mode_agree ~w\n", [lists:sort(Private1), lists:sort(Private2), Private1 == Private2, AP_mode_disjoint_test_p, Unanimous_with_all_peers_p, CP_mode_agree_test_p]),
Private1 == Private2 andalso Private1 == Private2 andalso
AP_mode_disjoint_test_p andalso AP_mode_disjoint_test_p andalso
( (

View file

@ -273,80 +273,78 @@ make_prop_ets() ->
-endif. % EQC -endif. % EQC
make_advance_fun(FitList, FLUList, MgrList, Num) ->
fun() ->
[begin
[catch machi_fitness:trigger_early_adjustment(Fit, Tgt) ||
Fit <- FitList,
Tgt <- FLUList ],
[catch ?MGR:trigger_react_to_env(Mgr) || Mgr <- MgrList],
ok
end || _ <- lists:seq(1, Num)]
end.
smoke0_test() -> smoke0_test() ->
{ok, _} = machi_partition_simulator:start_link({1,2,3}, 50, 50),
Host = "localhost",
TcpPort = 6623, TcpPort = 6623,
{[Pa], [M0], _Dirs} = machi_test_util:start_flu_packages( {ok, FLUa} = machi_flu1:start_link([{a,TcpPort,"./data.a"}]),
1, TcpPort, "./data.", []), Pa = #p_srvr{name=a, address=Host, port=TcpPort},
Members_Dict = machi_projection:make_members_dict([Pa]),
%% Egadz, more racing on startup, yay. TODO fix.
timer:sleep(1),
{ok, FLUaP} = ?FLU_PC:start_link(Pa), {ok, FLUaP} = ?FLU_PC:start_link(Pa),
{ok, M0} = ?MGR:start_link(a, Members_Dict, [{active_mode, false}]),
_SockA = machi_util:connect(Host, TcpPort),
try try
pong = ?MGR:ping(M0) pong = ?MGR:ping(M0)
after after
ok = ?MGR:stop(M0),
ok = machi_flu1:stop(FLUa),
ok = ?FLU_PC:quit(FLUaP), ok = ?FLU_PC:quit(FLUaP),
machi_test_util:stop_flu_packages() ok = machi_partition_simulator:stop()
end. end.
smoke1_test_() -> smoke1_test() ->
{timeout, 1*60, fun() -> smoke1_test2() end}. machi_partition_simulator:start_link({1,2,3}, 100, 0),
smoke1_test2() ->
TcpPort = 62777, TcpPort = 62777,
MgrOpts = [{active_mode,false}], FluInfo = [{a,TcpPort+0,"./data.a"}, {b,TcpPort+1,"./data.b"}, {c,TcpPort+2,"./data.c"}],
try P_s = [#p_srvr{name=Name, address="localhost", port=Port} ||
{Ps, MgrNames, _Dirs} = machi_test_util:start_flu_packages( {Name,Port,_Dir} <- FluInfo],
3, TcpPort, "./data.", MgrOpts),
MembersDict = machi_projection:make_members_dict(Ps),
[machi_chain_manager1:set_chain_members(M, MembersDict) || M <- MgrNames],
Ma = hd(MgrNames),
{ok, P1} = ?MGR:test_calc_projection(Ma, false), [machi_flu1_test:clean_up_data_dir(Dir) || {_,_,Dir} <- FluInfo],
FLUs = [element(2, machi_flu1:start_link([{Name,Port,Dir}])) ||
{Name,Port,Dir} <- FluInfo],
MembersDict = machi_projection:make_members_dict(P_s),
{ok, M0} = ?MGR:start_link(a, MembersDict, [{active_mode,false}]),
try
{ok, P1} = ?MGR:test_calc_projection(M0, false),
% DERP! Check for race with manager's proxy vs. proj listener % DERP! Check for race with manager's proxy vs. proj listener
ok = lists:foldl( case ?MGR:test_read_latest_public_projection(M0, false) of
fun(_, {_,{true,[{c,ok},{b,ok},{a,ok}]}}) -> {error, partition} -> timer:sleep(500);
ok; % Short-circuit remaining attempts _ -> ok
(_, ok) -> end,
ok; % Skip remaining! {remote_write_results,{true,[{c,ok},{b,ok},{a,ok}]}} =
(_, _Else) -> ?MGR:test_write_public_projection(M0, P1),
timer:sleep(10), {unanimous, P1, Extra1} = ?MGR:test_read_latest_public_projection(M0, false),
?MGR:test_write_public_projection(Ma, P1)
end, not_ok, lists:seq(1, 1000)),
%% Writing the exact same projection multiple times returns ok:
%% no change!
{_,{true,[{c,ok},{b,ok},{a,ok}]}} = ?MGR:test_write_public_projection(Ma, P1),
{unanimous, P1, Extra1} = ?MGR:test_read_latest_public_projection(Ma, false),
ok ok
after after
machi_test_util:stop_flu_packages() ok = ?MGR:stop(M0),
[ok = machi_flu1:stop(X) || X <- FLUs],
ok = machi_partition_simulator:stop()
end. end.
nonunanimous_setup_and_fix_test_() -> nonunanimous_setup_and_fix_test() ->
os:cmd("rm -f /tmp/moomoo.*"), machi_partition_simulator:start_link({1,2,3}, 100, 0),
{timeout, 1*60, fun() -> nonunanimous_setup_and_fix_test2() end}.
nonunanimous_setup_and_fix_test2() ->
TcpPort = 62877, TcpPort = 62877,
MgrOpts = [{active_mode,false}], FluInfo = [{a,TcpPort+0,"./data.a"}, {b,TcpPort+1,"./data.b"}],
{Ps, [Ma,Mb,Mc], Dirs} = machi_test_util:start_flu_packages( P_s = [#p_srvr{name=Name, address="localhost", port=Port} ||
3, TcpPort, "./data.", MgrOpts), {Name,Port,_Dir} <- FluInfo],
MembersDict = machi_projection:make_members_dict(Ps),
ChainName = my_little_chain,
[machi_chain_manager1:set_chain_members(M, ChainName, 0, ap_mode,
MembersDict, []) || M <- [Ma, Mb]],
[Proxy_a, Proxy_b, Proxy_c] = Proxies =
[element(2, ?FLU_PC:start_link(P)) || P <- Ps],
[machi_flu1_test:clean_up_data_dir(Dir) || {_,_,Dir} <- FluInfo],
{ok, SupPid} = machi_flu_sup:start_link(),
Opts = [{active_mode, false}],
%% {ok, Mb} = ?MGR:start_link(b, MembersDict, [{active_mode, false}]++XX),
[{ok,_}=machi_flu_psup:start_flu_package(Name, Port, Dir, Opts) ||
{Name,Port,Dir} <- FluInfo],
FLUs = [machi_flu_psup:make_flu_regname(Name) ||
{Name,_Port,_Dir} <- FluInfo],
[Proxy_a, Proxy_b] = Proxies =
[element(2,?FLU_PC:start_link(P)) || P <- P_s],
MembersDict = machi_projection:make_members_dict(P_s),
[Ma,Mb] = [a_chmgr, b_chmgr],
ok = machi_chain_manager1:set_chain_members(Ma, MembersDict, []),
ok = machi_chain_manager1:set_chain_members(Mb, MembersDict, []),
try try
{ok, P1} = ?MGR:test_calc_projection(Ma, false), {ok, P1} = ?MGR:test_calc_projection(Ma, false),
@ -383,114 +381,16 @@ nonunanimous_setup_and_fix_test2() ->
{ok, P2pb} = ?FLU_PC:read_latest_projection(Proxy_b, private), {ok, P2pb} = ?FLU_PC:read_latest_projection(Proxy_b, private),
P2 = P2pb#projection_v1{dbg2=[]}, P2 = P2pb#projection_v1{dbg2=[]},
Mgrs = [a_chmgr, b_chmgr, c_chmgr], %% Pspam = machi_projection:update_checksum(
Advance = make_advance_fun([a_fitness,b_fitness,c_fitness], %% P1b#projection_v1{epoch_number=?SPAM_PROJ_EPOCH,
[a,b,c], %% dbg=[hello_spam]}),
Mgrs, %% ok = ?FLU_PC:write_projection(Proxy_b, public, Pspam),
3),
Advance(),
{_, _, TheEpoch_3} = ?MGR:trigger_react_to_env(Ma),
{_, _, TheEpoch_3} = ?MGR:trigger_react_to_env(Mb),
{_, _, TheEpoch_3} = ?MGR:trigger_react_to_env(Mc),
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Remove 'a' from the chain.\n", []),
MembersDict4 = machi_projection:make_members_dict(tl(Ps)),
ok = machi_chain_manager1:set_chain_members(
Mb, ChainName, TheEpoch_3, ap_mode, MembersDict4, []),
Advance(),
{ok, {true, _,_,_}} = ?FLU_PC:wedge_status(Proxy_a),
{_, _, TheEpoch_4} = ?MGR:trigger_react_to_env(Mb),
{_, _, TheEpoch_4} = ?MGR:trigger_react_to_env(Mc),
[{ok, #projection_v1{upi=[b,c], repairing=[]}} =
?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- tl(Proxies)],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Add a to the chain again (a is running).\n", []),
MembersDict5 = machi_projection:make_members_dict(Ps),
ok = machi_chain_manager1:set_chain_members(
Mb, ChainName, TheEpoch_4, ap_mode, MembersDict5, []),
Advance(),
{_, _, TheEpoch_5} = ?MGR:trigger_react_to_env(Ma),
{_, _, TheEpoch_5} = ?MGR:trigger_react_to_env(Mb),
{_, _, TheEpoch_5} = ?MGR:trigger_react_to_env(Mc),
[{ok, #projection_v1{upi=[b,c], repairing=[a]}} =
?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- Proxies],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Stop a while a chain member, advance b&c.\n", []),
ok = machi_flu_psup:stop_flu_package(a),
Advance(),
{_, _, TheEpoch_6} = ?MGR:trigger_react_to_env(Mb),
{_, _, TheEpoch_6} = ?MGR:trigger_react_to_env(Mc),
[{ok, #projection_v1{upi=[b,c], repairing=[]}} =
?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- tl(Proxies)],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Remove 'a' from the chain.\n", []),
MembersDict7 = machi_projection:make_members_dict(tl(Ps)),
ok = machi_chain_manager1:set_chain_members(
Mb, ChainName, TheEpoch_6, ap_mode, MembersDict7, []),
Advance(),
{_, _, TheEpoch_7} = ?MGR:trigger_react_to_env(Mb),
{_, _, TheEpoch_7} = ?MGR:trigger_react_to_env(Mc),
[{ok, #projection_v1{upi=[b,c], repairing=[]}} =
?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- tl(Proxies)],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Start a, advance.\n", []),
Opts = [{active_mode, false}, {initial_wedged, true}],
#p_srvr{name=NameA} = hd(Ps),
{ok,_}=machi_flu_psup:start_flu_package(NameA, TcpPort+1, hd(Dirs), Opts),
Advance(),
{ok, {true, _,_,_}} = ?FLU_PC:wedge_status(Proxy_a),
{ok, {false, EpochID_8,_,_}} = ?FLU_PC:wedge_status(Proxy_b),
{ok, {false, EpochID_8,_,_}} = ?FLU_PC:wedge_status(Proxy_c),
[{ok, #projection_v1{upi=[b,c], repairing=[]}} =
?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- tl(Proxies)],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Stop a, delete a's data, leave it stopped\n", []),
ok = machi_flu_psup:stop_flu_package(a),
Advance(),
machi_flu1_test:clean_up_data_dir(hd(Dirs)),
{ok, {false, _,_,_}} = ?FLU_PC:wedge_status(Proxy_b),
{ok, {false, _,_,_}} = ?FLU_PC:wedge_status(Proxy_c),
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Add a to the chain again (a is stopped).\n", []),
MembersDict9 = machi_projection:make_members_dict(Ps),
{_, _, TheEpoch_9} = ?MGR:trigger_react_to_env(Mb),
ok = machi_chain_manager1:set_chain_members(
Mb, ChainName, TheEpoch_9, ap_mode, MembersDict9, []),
Advance(),
{_, _, TheEpoch_9b} = ?MGR:trigger_react_to_env(Mb),
true = (TheEpoch_9b > TheEpoch_9),
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("STEP: Start a, and it joins like it ought to\n", []),
{ok,_}=machi_flu_psup:start_flu_package(NameA, TcpPort+1, hd(Dirs), Opts),
Advance(),
{ok, {false, {TheEpoch10,_},_,_}} = ?FLU_PC:wedge_status(Proxy_a),
{ok, {false, {TheEpoch10,_},_,_}} = ?FLU_PC:wedge_status(Proxy_b),
{ok, {false, {TheEpoch10,_},_,_}} = ?FLU_PC:wedge_status(Proxy_c),
[{ok, #projection_v1{upi=[b,c], repairing=[a]}} =
?FLU_PC:read_latest_projection(Pxy, private) || Pxy <- Proxies],
ok ok
after after
exit(SupPid, normal),
[ok = ?FLU_PC:quit(X) || X <- Proxies], [ok = ?FLU_PC:quit(X) || X <- Proxies],
machi_test_util:stop_flu_packages() ok = machi_partition_simulator:stop()
end. end.
unanimous_report_test() -> unanimous_report_test() ->

View file

@ -1,68 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
-module(machi_cinfo_test).
-ifdef(TEST).
-ifndef(PULSE).
-include_lib("eunit/include/eunit.hrl").
-include("machi_projection.hrl").
%% smoke_test() will just try to dump cluster_info and call each function
smoke_test_() ->
{setup,
fun setup/0,
fun cleanup/1,
[
fun() -> machi_cinfo:public_projection(a) end,
fun() -> machi_cinfo:private_projection(a) end,
fun() -> machi_cinfo:fitness(a) end,
fun() -> machi_cinfo:chain_manager(a) end,
fun() -> machi_cinfo:flu1(a) end,
fun() -> machi_cinfo:dump() end
]}.
setup() ->
machi_cinfo:register(),
Ps = [{a,#p_srvr{name=a, address="localhost", port=5555, props="./data.a"}},
{b,#p_srvr{name=b, address="localhost", port=5556, props="./data.b"}},
{c,#p_srvr{name=c, address="localhost", port=5557, props="./data.c"}}
],
[os:cmd("rm -rf " ++ P#p_srvr.props) || {_,P} <- Ps],
{ok, SupPid} = machi_sup:start_link(),
%% Only run a, don't run b & c so we have 100% failures talking to them
[begin
#p_srvr{name=Name, port=Port, props=Dir} = P,
{ok, _} = machi_flu_psup:start_flu_package(Name, Port, Dir, [])
end || {_,P} <- [hd(Ps)]],
machi_chain_manager1:set_chain_members(a_chmgr, orddict:from_list(Ps)),
{SupPid, Ps}.
cleanup({SupPid, Ps}) ->
exit(SupPid, normal),
[os:cmd("rm -rf " ++ P#p_srvr.props) || {_,P} <- Ps],
machi_util:wait_for_death(SupPid, 100),
ok.
-endif. % !PULSE
-endif. % TEST

View file

@ -32,7 +32,6 @@ smoke_test_() -> {timeout, 1*60, fun() -> smoke_test2() end}.
setup_smoke_test(Host, PortBase, Os, Witness_list) -> setup_smoke_test(Host, PortBase, Os, Witness_list) ->
os:cmd("rm -rf ./data.a ./data.b ./data.c"), os:cmd("rm -rf ./data.a ./data.b ./data.c"),
{ok, _} = machi_util:wait_for_life(machi_flu_sup, 100),
F = fun(X) -> case lists:member(X, Witness_list) of F = fun(X) -> case lists:member(X, Witness_list) of
true -> true ->
@ -58,15 +57,9 @@ setup_smoke_test(Host, PortBase, Os, Witness_list) ->
%% 4. Wait until all others are using epoch id from #3. %% 4. Wait until all others are using epoch id from #3.
%% %%
%% Damn, this is a pain to make 100% deterministic, bleh. %% Damn, this is a pain to make 100% deterministic, bleh.
CMode = if Witness_list == [] -> ap_mode; ok = machi_chain_manager1:set_chain_members(a_chmgr, D, Witness_list),
Witness_list /= [] -> cp_mode ok = machi_chain_manager1:set_chain_members(b_chmgr, D, Witness_list),
end, ok = machi_chain_manager1:set_chain_members(c_chmgr, D, Witness_list),
ok = machi_chain_manager1:set_chain_members(a_chmgr, ch0, 0, CMode,
D, Witness_list),
ok = machi_chain_manager1:set_chain_members(b_chmgr, ch0, 0, CMode,
D, Witness_list),
ok = machi_chain_manager1:set_chain_members(c_chmgr, ch0, 0, CMode,
D, Witness_list),
run_ticks([a_chmgr,b_chmgr,c_chmgr]), run_ticks([a_chmgr,b_chmgr,c_chmgr]),
%% Everyone is settled on the same damn epoch id. %% Everyone is settled on the same damn epoch id.
{ok, EpochID} = machi_flu1_client:get_latest_epochid(Host, PortBase+0, {ok, EpochID} = machi_flu1_client:get_latest_epochid(Host, PortBase+0,
@ -102,13 +95,11 @@ run_ticks(MgrList) ->
ok. ok.
smoke_test2() -> smoke_test2() ->
{ok, SupPid} = machi_sup:start_link(), {ok, SupPid} = machi_flu_sup:start_link(),
error_logger:tty(false), error_logger:tty(false),
try try
Prefix = <<"pre">>, Prefix = <<"pre">>,
Chunk1 = <<"yochunk">>, Chunk1 = <<"yochunk">>,
NSInfo = undefined,
NoCSum = <<>>,
Host = "localhost", Host = "localhost",
PortBase = 64454, PortBase = 64454,
Os = [{ignore_stability_time, true}, {active_mode, false}], Os = [{ignore_stability_time, true}, {active_mode, false}],
@ -116,119 +107,99 @@ smoke_test2() ->
%% Whew ... ok, now start some damn tests. %% Whew ... ok, now start some damn tests.
{ok, C1} = machi_cr_client:start_link([P || {_,P}<-orddict:to_list(D)]), {ok, C1} = machi_cr_client:start_link([P || {_,P}<-orddict:to_list(D)]),
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), machi_cr_client:append_chunk(C1, Prefix, Chunk1),
{ok, {Off1,Size1,File1}} = {ok, {Off1,Size1,File1}} =
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), machi_cr_client:append_chunk(C1, Prefix, Chunk1),
BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, "foo")}, Chunk1_badcs = {<<?CSUM_TAG_CLIENT_SHA:8, 0:(8*20)>>, Chunk1},
{error, bad_checksum} = {error, bad_checksum} =
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), machi_cr_client:append_chunk(C1, Prefix, Chunk1_badcs),
{ok, {[{_, Off1, Chunk1, _}], []}} = {ok, Chunk1} = machi_cr_client:read_chunk(C1, File1, Off1, Size1),
machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, undefined),
{ok, PPP} = machi_flu1_client:read_latest_projection(Host, PortBase+0, {ok, PPP} = machi_flu1_client:read_latest_projection(Host, PortBase+0,
private), private),
%% Verify that the client's CR wrote to all of them. %% Verify that the client's CR wrote to all of them.
[{ok, {[{_, Off1, Chunk1, _}], []}} = [{ok, Chunk1} = machi_flu1_client:read_chunk(
machi_flu1_client:read_chunk( Host, PortBase+X, EpochID, File1, Off1, Size1) ||
Host, PortBase+X, NSInfo, EpochID, File1, Off1, Size1, undefined) ||
X <- [0,1,2] ], X <- [0,1,2] ],
%% Test read repair: Manually write to head, then verify that %% Test read repair: Manually write to head, then verify that
%% read-repair fixes all. %% read-repair fixes all.
FooOff1 = Off1 + (1024*1024), FooOff1 = Off1 + (1024*1024),
[{error, not_written} = machi_flu1_client:read_chunk( [{error, not_written} = machi_flu1_client:read_chunk(
Host, PortBase+X, NSInfo, EpochID, Host, PortBase+X, EpochID,
File1, FooOff1, Size1, undefined) || X <- [0,1,2] ], File1, FooOff1, Size1) || X <- [0,1,2] ],
ok = machi_flu1_client:write_chunk(Host, PortBase+0, NSInfo, EpochID, ok = machi_flu1_client:write_chunk(Host, PortBase+0, EpochID,
File1, FooOff1, Chunk1, NoCSum), File1, FooOff1, Chunk1),
{ok, {[{File1, FooOff1, Chunk1, _}=_YY], []}} = {ok, Chunk1} = machi_cr_client:read_chunk(C1, File1, FooOff1, Size1),
machi_flu1_client:read_chunk(Host, PortBase+0, NSInfo, EpochID, [{X,{ok, Chunk1}} = {X,machi_flu1_client:read_chunk(
File1, FooOff1, Size1, undefined), Host, PortBase+X, EpochID,
{ok, {[{File1, FooOff1, Chunk1, _}], []}} = File1, FooOff1, Size1)} || X <- [0,1,2] ],
machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff1, Size1, undefined),
[?assertMatch({X,{ok, {[{_, FooOff1, Chunk1, _}], []}}},
{X,machi_flu1_client:read_chunk(
Host, PortBase+X, NSInfo, EpochID,
File1, FooOff1, Size1, undefined)})
|| X <- [0,1,2] ],
%% Test read repair: Manually write to middle, then same checking. %% Test read repair: Manually write to middle, then same checking.
FooOff2 = Off1 + (2*1024*1024), FooOff2 = Off1 + (2*1024*1024),
Chunk2 = <<"Middle repair chunk">>, Chunk2 = <<"Middle repair chunk">>,
Size2 = size(Chunk2), Size2 = size(Chunk2),
ok = machi_flu1_client:write_chunk(Host, PortBase+1, NSInfo, EpochID, ok = machi_flu1_client:write_chunk(Host, PortBase+1, EpochID,
File1, FooOff2, Chunk2, NoCSum), File1, FooOff2, Chunk2),
{ok, {[{File1, FooOff2, Chunk2, _}], []}} = {ok, Chunk2} = machi_cr_client:read_chunk(C1, File1, FooOff2, Size2),
machi_cr_client:read_chunk(C1, NSInfo, File1, FooOff2, Size2, undefined), [{X,{ok, Chunk2}} = {X,machi_flu1_client:read_chunk(
[{X,{ok, {[{File1, FooOff2, Chunk2, _}], []}}} = Host, PortBase+X, EpochID,
{X,machi_flu1_client:read_chunk( File1, FooOff2, Size2)} || X <- [0,1,2] ],
Host, PortBase+X, NSInfo, EpochID,
File1, FooOff2, Size2, undefined)} || X <- [0,1,2] ],
%% Misc API smoke & minor regression checks %% Misc API smoke & minor regression checks
{error, bad_arg} = machi_cr_client:read_chunk(C1, NSInfo, <<"no">>, {error, bad_arg} = machi_cr_client:read_chunk(C1, <<"no">>,
999999999, 1, undefined), 999999999, 1),
{ok, {[{File1,Off1,Chunk1,_}, {File1,FooOff1,Chunk1,_}, {File1,FooOff2,Chunk2,_}], {error, not_written} = machi_cr_client:read_chunk(C1, File1,
[]}} = Off1, 88888888),
machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, 88888888, undefined),
%% Checksum list return value is a primitive binary(). %% Checksum list return value is a primitive binary().
{ok, KludgeBin} = machi_cr_client:checksum_list(C1, File1), {ok, KludgeBin} = machi_cr_client:checksum_list(C1, File1),
true = is_binary(KludgeBin), true = is_binary(KludgeBin),
{error, bad_arg} = machi_cr_client:checksum_list(C1, <<"!!!!">>), {error, bad_arg} = machi_cr_client:checksum_list(C1, <<"!!!!">>),
io:format(user, "\nFiles = ~p\n", [machi_cr_client:list_files(C1)]), %% Exactly one file right now
%% Exactly one file right now, e.g.,
%% {ok,[{2098202,<<"pre^b144ef13-db4d-4c9f-96e7-caff02dc754f^1">>}]}
{ok, [_]} = machi_cr_client:list_files(C1), {ok, [_]} = machi_cr_client:list_files(C1),
%% Go back and test append_chunk() + extra and write_chunk() %% Go back and test append_chunk_extra() and write_chunk()
Chunk10 = <<"It's a different chunk!">>, Chunk10 = <<"It's a different chunk!">>,
Size10 = byte_size(Chunk10), Size10 = byte_size(Chunk10),
Extra10 = 5, Extra10 = 5,
Opts1 = #append_opts{chunk_extra=Extra10*Size10},
{ok, {Off10,Size10,File10}} = {ok, {Off10,Size10,File10}} =
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk10, machi_cr_client:append_chunk_extra(C1, Prefix, Chunk10,
NoCSum, Opts1), Extra10 * Size10),
{ok, {[{_, Off10, Chunk10, _}], []}} = {ok, Chunk10} = machi_cr_client:read_chunk(C1, File10, Off10, Size10),
machi_cr_client:read_chunk(C1, NSInfo, File10, Off10, Size10, undefined),
[begin [begin
Offx = Off10 + (Seq * Size10), Offx = Off10 + (Seq * Size10),
%% TODO: uncomment written/not_written enforcement is available. %% TODO: uncomment written/not_written enforcement is available.
%% {error,not_written} = machi_cr_client:read_chunk(C1, NSInfo, File10, %% {error,not_written} = machi_cr_client:read_chunk(C1, File10,
%% Offx, Size10), %% Offx, Size10),
{ok, {Offx,Size10,File10}} = {ok, {Offx,Size10,File10}} =
machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10, NoCSum), machi_cr_client:write_chunk(C1, File10, Offx, Chunk10),
{ok, {[{_, Offx, Chunk10, _}], []}} = {ok, Chunk10} = machi_cr_client:read_chunk(C1, File10, Offx,
machi_cr_client:read_chunk(C1, NSInfo, File10, Offx, Size10, undefined) Size10)
end || Seq <- lists:seq(1, Extra10)], end || Seq <- lists:seq(1, Extra10)],
{ok, {Off11,Size11,File11}} = {ok, {Off11,Size11,File11}} =
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk10, NoCSum), machi_cr_client:append_chunk(C1, Prefix, Chunk10),
%% %% Double-check that our reserved extra bytes were really honored! %% Double-check that our reserved extra bytes were really honored!
%% true = (Off11 > (Off10 + (Extra10 * Size10))), true = (Off11 > (Off10 + (Extra10 * Size10))),
io:format(user, "\nFiles = ~p\n", [machi_cr_client:list_files(C1)]),
ok ok
after after
exit(SupPid, normal),
machi_util:wait_for_death(SupPid, 100),
error_logger:tty(true), error_logger:tty(true),
catch application:stop(machi) catch application:stop(machi),
exit(SupPid, normal)
end. end.
witness_smoke_test_() -> {timeout, 1*60, fun() -> witness_smoke_test2() end}. witness_smoke_test_() -> {timeout, 1*60, fun() -> witness_smoke_test2() end}.
witness_smoke_test2() -> witness_smoke_test2() ->
SupPid = case machi_sup:start_link() of SupPid = case machi_flu_sup:start_link() of
{ok, P} -> P; {ok, P} -> P;
{error, {already_started, P1}} -> P1; {error, {already_started, P1}} -> P1;
Other -> error(Other) Other -> error(Other)
end, end,
%% TODO: I wonder why commenting this out makes this test pass error_logger:tty(false),
%% error_logger:tty(true),
try try
Prefix = <<"pre">>, Prefix = <<"pre">>,
Chunk1 = <<"yochunk">>, Chunk1 = <<"yochunk">>,
NSInfo = undefined,
NoCSum = <<>>,
Host = "localhost", Host = "localhost",
PortBase = 64444, PortBase = 64444,
Os = [{ignore_stability_time, true}, {active_mode, false}, Os = [{ignore_stability_time, true}, {active_mode, false},
@ -238,15 +209,13 @@ witness_smoke_test2() ->
%% Whew ... ok, now start some damn tests. %% Whew ... ok, now start some damn tests.
{ok, C1} = machi_cr_client:start_link([P || {_,P}<-orddict:to_list(D)]), {ok, C1} = machi_cr_client:start_link([P || {_,P}<-orddict:to_list(D)]),
{ok, _} = machi_cr_client:append_chunk(C1, NSInfo, Prefix, {ok, _} = machi_cr_client:append_chunk(C1, Prefix, Chunk1),
Chunk1, NoCSum),
{ok, {Off1,Size1,File1}} = {ok, {Off1,Size1,File1}} =
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum), machi_cr_client:append_chunk(C1, Prefix, Chunk1),
BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, "foo")}, Chunk1_badcs = {<<?CSUM_TAG_CLIENT_SHA:8, 0:(8*20)>>, Chunk1},
{error, bad_checksum} = {error, bad_checksum} =
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, BadCSum), machi_cr_client:append_chunk(C1, Prefix, Chunk1_badcs),
{ok, {[{_, Off1, Chunk1, _}], []}} = {ok, Chunk1} = machi_cr_client:read_chunk(C1, File1, Off1, Size1),
machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, undefined),
%% Stop 'b' and let the chain reset. %% Stop 'b' and let the chain reset.
ok = machi_flu_psup:stop_flu_package(b), ok = machi_flu_psup:stop_flu_package(b),
@ -259,25 +228,23 @@ witness_smoke_test2() ->
%% Let's wedge OurWitness and see what happens: timeout/partition. %% Let's wedge OurWitness and see what happens: timeout/partition.
#p_srvr{name=WitName, address=WitA, port=WitP} = #p_srvr{name=WitName, address=WitA, port=WitP} =
orddict:fetch(OurWitness, D), orddict:fetch(OurWitness, D),
{ok, {false, EpochID2,_,_}} = machi_flu1_client:wedge_status(WitA, WitP), {ok, {false, EpochID2}} = machi_flu1_client:wedge_status(WitA, WitP),
machi_flu1:wedge_myself(WitName, EpochID2), machi_flu1:wedge_myself(WitName, EpochID2),
case machi_flu1_client:wedge_status(WitA, WitP) of case machi_flu1_client:wedge_status(WitA, WitP) of
{ok, {true, EpochID2,_,_}} -> {ok, {true, EpochID2}} ->
ok; ok;
{ok, {false, EpochID2,_,_}} -> {ok, {false, EpochID2}} ->
%% This is racy. Work around it by sleeping a while. %% This is racy. Work around it by sleeping a while.
timer:sleep(6*1000), timer:sleep(6*1000),
{ok, {true, EpochID2,_,_}} = {ok, {true, EpochID2}} =
machi_flu1_client:wedge_status(WitA, WitP) machi_flu1_client:wedge_status(WitA, WitP)
end, end,
%% Chunk1 is still readable: not affected by wedged witness head. %% Chunk1 is still readable: not affected by wedged witness head.
{ok, {[{_, Off1, Chunk1, _}], []}} = {ok, Chunk1} = machi_cr_client:read_chunk(C1, File1, Off1, Size1),
machi_cr_client:read_chunk(C1, NSInfo, File1, Off1, Size1, undefined),
%% But because the head is wedged, an append will fail. %% But because the head is wedged, an append will fail.
{error, partition} = {error, partition} =
machi_cr_client:append_chunk(C1, NSInfo, Prefix, Chunk1, NoCSum, machi_cr_client:append_chunk(C1, Prefix, Chunk1, 1*1000),
#append_opts{}, 1*1000),
%% The witness's wedge status should cause timeout/partition %% The witness's wedge status should cause timeout/partition
%% for write_chunk also. %% for write_chunk also.
@ -286,7 +253,7 @@ witness_smoke_test2() ->
File10 = File1, File10 = File1,
Offx = Off1 + (1 * Size10), Offx = Off1 + (1 * Size10),
{error, partition} = {error, partition} =
machi_cr_client:write_chunk(C1, NSInfo, File10, Offx, Chunk10, NoCSum, 1*1000), machi_cr_client:write_chunk(C1, File10, Offx, Chunk10, 1*1000),
ok ok
after after

View file

@ -1,131 +1,31 @@
-module(machi_csum_table_test). -module(machi_csum_table_test).
-compile(export_all). -compile(export_all).
-include_lib("eunit/include/eunit.hrl").
-define(HDR, {0, 1024, none}).
cleanup(Dir) ->
os:cmd("rm -rf " ++ Dir).
smoke_test() -> smoke_test() ->
Filename = "./temp-checksum-dumb-file", Filename = "./temp-checksum-dumb-file",
_ = cleanup(Filename), _ = file:delete(Filename),
{ok, MC} = machi_csum_table:open(Filename, []), {ok, MC} = machi_csum_table:open(Filename, []),
?assertEqual([{1024, infinity}], {Offset, Size, Checksum} = {64, 34, <<"deadbeef">>},
machi_csum_table:calc_unwritten_bytes(MC)), {error, unknown_chunk} = machi_csum_table:find(MC, Offset, Size),
Entry = {Offset, Size, Checksum} = {1064, 34, <<"deadbeef">>},
[] = machi_csum_table:find(MC, Offset, Size),
ok = machi_csum_table:write(MC, Offset, Size, Checksum), ok = machi_csum_table:write(MC, Offset, Size, Checksum),
[{1024, 40}, {1098, infinity}] = machi_csum_table:calc_unwritten_bytes(MC), {ok, Checksum} = machi_csum_table:find(MC, Offset, Size),
?assertEqual([Entry], machi_csum_table:find(MC, Offset, Size)), ok = machi_csum_table:trim(MC, Offset, Size),
ok = machi_csum_table:trim(MC, Offset, Size, undefined, undefined), {error, trimmed} = machi_csum_table:find(MC, Offset, Size),
?assertEqual([{Offset, Size, trimmed}],
machi_csum_table:find(MC, Offset, Size)),
ok = machi_csum_table:close(MC), ok = machi_csum_table:close(MC),
ok = machi_csum_table:delete(MC). ok = machi_csum_table:delete(MC).
close_test() -> close_test() ->
Filename = "./temp-checksum-dumb-file-2", Filename = "./temp-checksum-dumb-file-2",
_ = cleanup(Filename), _ = file:delete(Filename),
{ok, MC} = machi_csum_table:open(Filename, []), {ok, MC} = machi_csum_table:open(Filename, []),
Entry = {Offset, Size, Checksum} = {1064, 34, <<"deadbeef">>}, {Offset, Size, Checksum} = {64, 34, <<"deadbeef">>},
[] = machi_csum_table:find(MC, Offset, Size), {error, unknown_chunk} = machi_csum_table:find(MC, Offset, Size),
ok = machi_csum_table:write(MC, Offset, Size, Checksum), ok = machi_csum_table:write(MC, Offset, Size, Checksum),
[Entry] = machi_csum_table:find(MC, Offset, Size), {ok, Checksum} = machi_csum_table:find(MC, Offset, Size),
ok = machi_csum_table:close(MC), ok = machi_csum_table:close(MC),
{ok, MC2} = machi_csum_table:open(Filename, []), {ok, MC2} = machi_csum_table:open(Filename, []),
[Entry] = machi_csum_table:find(MC2, Offset, Size), {ok, Checksum} = machi_csum_table:find(MC2, Offset, Size),
ok = machi_csum_table:trim(MC2, Offset, Size, undefined, undefined), ok = machi_csum_table:trim(MC2, Offset, Size),
[{Offset, Size, trimmed}] = machi_csum_table:find(MC2, Offset, Size), {error, trimmed} = machi_csum_table:find(MC2, Offset, Size),
ok = machi_csum_table:delete(MC2). ok = machi_csum_table:delete(MC2).
smoke2_test() ->
Filename = "./temp-checksum-dumb-file-3",
_ = cleanup(Filename),
{ok, MC} = machi_csum_table:open(Filename, []),
Entry = {Offset, Size, Checksum} = {1025, 10, <<"deadbeef">>},
ok = machi_csum_table:write(MC, Offset, Size, Checksum),
?assertEqual([], machi_csum_table:find(MC, 0, 0)),
?assertEqual([?HDR], machi_csum_table:find(MC, 0, 1)),
[Entry] = machi_csum_table:find(MC, Offset, Size),
[?HDR] = machi_csum_table:find(MC, 1, 1024),
?assertEqual([?HDR, Entry],
machi_csum_table:find(MC, 1023, 1024)),
[Entry] = machi_csum_table:find(MC, 1024, 1024),
[Entry] = machi_csum_table:find(MC, 1025, 1024),
ok = machi_csum_table:trim(MC, Offset, Size, undefined, undefined),
[{Offset, Size, trimmed}] = machi_csum_table:find(MC, Offset, Size),
ok = machi_csum_table:close(MC),
ok = machi_csum_table:delete(MC).
smoke3_test() ->
Filename = "./temp-checksum-dumb-file-4",
_ = cleanup(Filename),
{ok, MC} = machi_csum_table:open(Filename, []),
Scenario =
[%% Command, {Offset, Size, Csum}, LeftNeighbor, RightNeibor
{?LINE, write, {2000, 10, <<"heh">>}, undefined, undefined},
{?LINE, write, {3000, 10, <<"heh">>}, undefined, undefined},
{?LINE, write, {4000, 10, <<"heh2">>}, undefined, undefined},
{?LINE, write, {4000, 10, <<"heh2">>}, undefined, undefined},
{?LINE, write, {4005, 10, <<"heh3">>}, {4000, 5, <<"heh2">>}, undefined},
{?LINE, write, {4005, 10, <<"heh3">>}, undefined, undefined},
{?LINE, trim, {3005, 10, <<>>}, {3000, 5, <<"heh">>}, undefined},
{?LINE, trim, {2000, 10, <<>>}, undefined, undefined},
{?LINE, trim, {2005, 5, <<>>}, {2000, 5, trimmed}, undefined},
{?LINE, trim, {3000, 5, <<>>}, undefined, undefined},
{?LINE, trim, {4000, 10, <<>>}, undefined, {4010, 5, <<"heh3">>}},
{?LINE, trim, {4010, 5, <<>>}, undefined, undefined},
{?LINE, trim, {0, 1024, <<>>}, undefined, undefined}
],
[ begin
%% ?debugVal({_Line, Chunk}),
{Offset, Size, Csum} = Chunk,
?assertEqual(LeftN0,
machi_csum_table:find_leftneighbor(MC, Offset)),
?assertEqual(RightN0,
machi_csum_table:find_rightneighbor(MC, Offset+Size)),
LeftN = case LeftN0 of
{OffsL, SizeL, trimmed} -> {OffsL, SizeL, trimmed};
{OffsL, SizeL, _} -> {OffsL, SizeL, <<"boom">>};
OtherL -> OtherL
end,
RightN = case RightN0 of
{OffsR, SizeR, _} -> {OffsR, SizeR, <<"boot">>};
OtherR -> OtherR
end,
case Cmd of
write ->
ok = machi_csum_table:write(MC, Offset, Size, Csum,
LeftN, RightN);
trim ->
ok = machi_csum_table:trim(MC, Offset, Size,
LeftN, RightN)
end
end || {_Line, Cmd, Chunk, LeftN0, RightN0} <- Scenario ],
?assert(not machi_csum_table:all_trimmed(MC, 10000)),
machi_csum_table:trim(MC, 0, 10000, undefined, undefined),
?assert(machi_csum_table:all_trimmed(MC, 10000)),
ok = machi_csum_table:close(MC),
ok = machi_csum_table:delete(MC).
%% TODO: add quickcheck test here
%% Previous implementation
-spec all_trimmed2(machi_csum_table:table(),
non_neg_integer(), non_neg_integer()) -> boolean().
all_trimmed2(CsumT, Left, Right) ->
Chunks = machi_csum_table:find(CsumT, Left, Right),
runthru(Chunks, Left, Right).
%% @doc make sure all trimmed chunks are continuously chained
%% TODO: test with EQC
runthru([], Pos, Pos) -> true;
runthru([], Pos0, Pos) when Pos0 < Pos -> false;
runthru([{Offset0, Size0, trimmed}|T], Offset, Pos) when Offset0 =< Offset ->
runthru(T, Offset0+Size0, Pos);
runthru(_L, _O, _P) ->
false.
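%% Example (illustrative): runthru([{0,5,trimmed},{5,5,trimmed}], 0, 10)
%% returns true because the trimmed chunks tile [0,10) without a gap,
%% while runthru([{0,5,trimmed}], 0, 10) returns false since bytes
%% [5,10) are not covered by any trimmed chunk.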

View file

@ -31,18 +31,13 @@
-define(QC_OUT(P), -define(QC_OUT(P),
eqc:on_output(fun(Str, Args) -> io:format(user, Str, Args) end, P)). eqc:on_output(fun(Str, Args) -> io:format(user, Str, Args) end, P)).
-define(TESTDIR, "./eqc").
%% EUNIT TEST DEFINITION %% EUNIT TEST DEFINITION
eqc_test_() -> eqc_test_() ->
PropTimeout = case os:getenv("EQC_TIME") of {timeout, 60,
false -> 30;
V -> list_to_integer(V)
end,
{timeout, PropTimeout*2 + 30,
{spawn, {spawn,
[ [
?_assertEqual(true, eqc:quickcheck(eqc:testing_time(PropTimeout, ?QC_OUT(prop_ok())))) {timeout, 30, ?_assertEqual(true, eqc:quickcheck(eqc:testing_time(15, ?QC_OUT(prop_ok()))))}
] ]
}}. }}.
@ -93,14 +88,12 @@ data_with_csum(Limit) ->
intervals([]) -> intervals([]) ->
[]; [];
intervals([N]) -> intervals([N]) ->
[{N, choose(1,1)}]; [{N, choose(1,150)}];
intervals([A,B|T]) -> intervals([A,B|T]) ->
[{A, oneof([choose(1, B-A), B-A])}|intervals([B|T])]. [{A, choose(1, B-A)}|intervals([B|T])].
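%% Note: in either version, each generated length is bounded by B-A,
%% the gap to the next sorted start point, so the planned write
%% intervals built from these never overlap one another.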
interval_list() -> interval_list() ->
?LET(L, ?LET(L, list(choose(1024, 4096)), intervals(lists:usort(L))).
oneof([list(choose(1025, 1033)), list(choose(1024, 4096))]),
intervals(lists:usort(L))).
shuffle_interval() -> shuffle_interval() ->
?LET(L, interval_list(), shuffle(L)). ?LET(L, interval_list(), shuffle(L)).
@ -110,100 +103,40 @@ get_written_interval(L) ->
%% INITIALIZATION %% INITIALIZATION
-record(state, {pid, prev_extra = 0, -record(state, {pid, prev_extra = 0, planned_writes=[], written=[]}).
filename = undefined,
planned_writes=[],
planned_trims=[],
written=[],
trimmed=[]}).
initial_state() -> initial_state() -> #state{written=[{0,1024}]}.
{_, _, MS} = os:timestamp(), initial_state(I) -> #state{written=[{0,1024}], planned_writes=I}.
Filename = test_server:temp_name("eqc_data") ++ "." ++ integer_to_list(MS),
#state{filename=Filename, written=[{0,1024}]}.
initial_state(I, T) ->
S=initial_state(),
S#state{written=[{0,1024}],
planned_writes=I,
planned_trims=T}.
weight(_S, rewrite) -> 1; weight(_S, rewrite) -> 1;
weight(_S, _) -> 2. weight(_S, _) -> 2.
%% HELPERS %% HELPERS
get_overlaps(_Offset, _Len, [], Acc) -> lists:reverse(Acc); %% check if an operation is permitted based on whether a write has
get_overlaps(Offset, Len, [{Pos, Sz} = Ck|T], Acc0) %% occurred
%% Overlap judgement differs from the one in machi_csum_table check_writes(_Op, [], _Off, _L) ->
%% [a=Offset, b), [x=Pos, y) ... false;
when check_writes(_Op, [{Pos, Sz}|_T], Off, L) when Pos == Off
%% a =< x && x < b && b =< y andalso Sz == L ->
(Offset =< Pos andalso Pos < Offset + Len andalso Offset + Len =< Pos + Sz) orelse mostly_true;
%% a =< x && y < b check_writes(read, [{Pos, Sz}|_T], Off, L) when Off >= Pos
(Offset =< Pos andalso Pos + Sz < Offset + Len) orelse andalso Off < (Pos + Sz)
%% x < a && a < y && y =< b andalso Sz >= ( L - ( Off - Pos ) ) ->
(Pos < Offset andalso Offset < Pos + Sz andalso Pos + Sz =< Offset + Len) orelse true;
%% x < a && b < y check_writes(write, [{Pos, Sz}|_T], Off, L) when ( Off + L ) > Pos
(Pos < Offset + Len andalso Offset + Len < Pos + Sz) -> andalso Off < (Pos + Sz) ->
get_overlaps(Offset, Len, T, [Ck|Acc0]); true;
get_overlaps(Offset, Len, [_Ck|T], Acc0) -> check_writes(Op, [_H|T], Off, L) ->
get_overlaps(Offset, Len, T, Acc0). check_writes(Op, T, Off, L).
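%% A small worked example (illustrative values) of the four overlap
%% cases enumerated in the guard above; see also chopper_test_() below.
overlap_cases_test() ->
    Cks = [{10, 10}],                % the interval [x=10, y=20)
    [?assertEqual([{10, 10}], get_overlaps(A, L, Cks, []))
     || {A, L} <- [{10, 10},         % a =< x && x < b && b =< y
                   {5, 20},          % a =< x && y < b
                   {15, 5},          % x < a && a < y && y =< b
                   {12, 5}]],        % x < a && b < y
    ?assertEqual([], get_overlaps(20, 5, Cks, [])).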
%% Inefficient but simple code that is easy to verify by eye - returns
%% all bytes that fit in (Offset, Len)
chop(Offset, Len, List) ->
ChopLeft = fun({Pos, Sz}) when Pos < Offset andalso Offset =< Pos + Sz ->
{Offset, Sz + Pos - Offset};
({Pos, Sz}) when Offset =< Pos andalso Pos + Sz < Offset + Len ->
{Pos, Sz};
({Pos, _Sz}) when Offset =< Pos ->
{Pos, Offset + Len - Pos}
end,
ChopRight = fun({Pos, Sz}) when Offset + Len < Pos + Sz ->
{Pos, Offset + Len - Pos};
({Pos, Sz}) ->
{Pos, Sz}
end,
Filter0 = fun({_, 0}) -> false;
(Other) -> {true, Other} end,
lists:filtermap(fun(E) -> Filter0(ChopRight(ChopLeft(E))) end,
List).
%% Returns all bytes that are on the left side of the Offset
chopped_left(_Offset, []) -> undefined;
chopped_left(Offset, [{Pos,_Sz}|_]) when Pos < Offset ->
{Pos, Offset - Pos};
chopped_left(_, _) ->
undefined.
chopped_right(_Offset, []) -> undefined;
chopped_right(Offset, List) ->
{Pos, Sz} = lists:last(List),
if Offset < Pos + Sz ->
{Offset, Pos + Sz - Offset};
true ->
undefined
end.
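%% Worked examples (illustrative): chopped_left(1, [{0,1024}]) =:= {0,1},
%% the byte left of offset 1; chopped_right(2, [{0,1024}]) =:= {2,1022},
%% the bytes from offset 2 up to the end of the last chunk.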
cleanup_chunk(Offset, Length, ChunkList) ->
Overlaps = get_overlaps(Offset, Length, ChunkList, []),
NewCL0 = lists:foldl(fun lists:delete/2,
ChunkList, Overlaps),
NewCL1 = case chopped_left(Offset, Overlaps) of
undefined -> NewCL0;
LeftRemain -> [LeftRemain|NewCL0]
end,
NewCL2 = case chopped_right(Offset+Length, Overlaps) of
undefined -> NewCL1;
RightRemain -> [RightRemain|NewCL1]
end,
lists:sort(NewCL2).
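%% Putting the helpers together (cf. chopper_test_() below): trimming
%% 1 byte at offset 1 out of [{0,1024}] deletes the overlapping chunk
%% and re-adds both remainders, so
%% cleanup_chunk(1, 1, [{0,1024}]) =:= [{0,1}, {2,1022}].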
is_error({error, _}) -> true; is_error({error, _}) -> true;
is_error({error, _, _}) -> true; is_error({error, _, _}) -> true;
is_error(Other) -> {expected_ERROR, Other}. is_error(Other) -> {expected_ERROR, Other}.
probably_error(ok) -> true;
probably_error(V) -> is_error(V).
is_ok({ok, _, _}) -> true; is_ok({ok, _, _}) -> true;
is_ok(ok) -> true; is_ok(ok) -> true;
is_ok(Other) -> {expected_OK, Other}. is_ok(Other) -> {expected_OK, Other}.
@ -211,10 +144,11 @@ is_ok(Other) -> {expected_OK, Other}.
get_offset({ok, _Filename, Offset}) -> Offset; get_offset({ok, _Filename, Offset}) -> Offset;
get_offset(_) -> error(badarg). get_offset(_) -> error(badarg).
last_byte([]) -> 0; offset_valid(Offset, Extra, L) ->
last_byte(L0) -> {Pos, Sz} = lists:last(L),
L1 = lists:map(fun({Pos, Sz}) -> Pos + Sz end, L0), Offset == Pos + Sz + Extra.
lists:last(lists:sort(L1)).
-define(TESTDIR, "./eqc").
cleanup() -> cleanup() ->
[begin [begin
@ -228,17 +162,19 @@ cleanup() ->
%% start %% start
start_pre(S) -> start_pre(S) ->
S#state.pid =:= undefined. S#state.pid == undefined.
start_command(S) -> start_command(S) ->
{call, ?MODULE, start, [S]}. {call, ?MODULE, start, [S]}.
start(#state{filename=File}) -> start(_S) ->
{ok, Pid} = machi_file_proxy:start_link(some_flu, File, ?TESTDIR), {_, _, MS} = os:timestamp(),
File = test_server:temp_name("eqc_data") ++ "." ++ integer_to_list(MS),
{ok, Pid} = machi_file_proxy:start_link(File, ?TESTDIR),
unlink(Pid), unlink(Pid),
Pid. Pid.
start_next(S, Pid, _) -> start_next(S, Pid, _Args) ->
S#state{pid = Pid}. S#state{pid = Pid}.
%% read %% read
@ -247,34 +183,25 @@ read_pre(S) ->
S#state.pid /= undefined. S#state.pid /= undefined.
read_args(S) -> read_args(S) ->
[S#state.pid, oneof([offset(), big_offset()]), len()]. [S#state.pid, offset(), len()].
read_ok(S, Off, L) ->
case S#state.written of
[{0, 1024}] -> false;
W -> check_writes(read, W, Off, L)
end.
read_post(S, [_Pid, Off, L], Res) -> read_post(S, [_Pid, Off, L], Res) ->
Written = get_overlaps(Off, L, S#state.written, []), case read_ok(S, Off, L) of
Chopped = chop(Off, L, Written), true -> is_ok(Res);
Trimmed = get_overlaps(Off, L, S#state.trimmed, []), mostly_true -> is_ok(Res);
Eof = lists:max([Pos+Sz||{Pos,Sz}<-S#state.written]), false -> is_error(Res)
case Res of
{ok, {Written0, Trimmed0}} ->
Written1 = lists:map(fun({_, Pos, Chunk, _}) ->
{Pos, iolist_size(Chunk)}
end, Written0),
Trimmed1 = lists:map(fun({_, Pos, Sz}) -> {Pos, Sz} end, Trimmed0),
Chopped =:= Written1
andalso Trimmed =:= Trimmed1;
%% TODO: such response are ugly, rethink the SPEC
{error, not_written} when Eof < Off + L ->
true;
{error, not_written} when Chopped =:= [] andalso Trimmed =:= [] ->
true;
_Other ->
is_error(Res)
end. end.
read_next(S, _Res, _Args) -> S. read_next(S, _Res, _Args) -> S.
read(Pid, Offset, Length) -> read(Pid, Offset, Length) ->
machi_file_proxy:read(Pid, Offset, Length, [{needs_trimmed, true}]). machi_file_proxy:read(Pid, Offset, Length).
%% write %% write
@ -283,7 +210,6 @@ write_pre(S) ->
%% do not allow writes with empty data %% do not allow writes with empty data
write_pre(_S, [_Pid, _Extra, {<<>>, _Tag, _Csum}]) -> write_pre(_S, [_Pid, _Extra, {<<>>, _Tag, _Csum}]) ->
?assert(false),
false; false;
write_pre(_S, _Args) -> write_pre(_S, _Args) ->
true. true.
@ -292,18 +218,28 @@ write_args(S) ->
{Off, Len} = hd(S#state.planned_writes), {Off, Len} = hd(S#state.planned_writes),
[S#state.pid, Off, data_with_csum(Len)]. [S#state.pid, Off, data_with_csum(Len)].
write_post(S, [_Pid, Off, {Bin, _Tag, _Csum}] = _Args, Res) -> write_ok(_S, [_Pid, Off, _Data]) when Off < 1024 -> false;
write_ok(S, [_Pid, Off, {Bin, _Tag, _Csum}]) ->
Size = iolist_size(Bin), Size = iolist_size(Bin),
case {get_overlaps(Off, Size, S#state.written, []), %% Check writes checks if a byte range is *written*
get_overlaps(Off, Size, S#state.trimmed, [])} of %% So writes are ok IFF the range is NOT written, i.e.
{[], []} -> %% we want check_writes/4 to return false.
%% No overlap with either the written ranges or the check_writes(write, S#state.written, Off, Size).
%% trimmed ranges; OK to write.
eq(Res, ok); write_post(S, Args, Res) ->
{_, _} -> case write_ok(S, Args) of
%% overlap found in either or both at written or at %% false means this range has NOT been written before, so
%% trimmed ranges; can't write. %% it should succeed
is_error(Res) false -> eq(Res, ok);
%% mostly true means we've written this range before BUT
%% as a special case if we get a call to write the EXACT
%% same data that's already on the disk, we return "ok"
%% instead of {error, written}.
mostly_true -> probably_error(Res);
%% If we get true, then we've already written this section
%% or a portion of this range to disk and should return an
%% error.
true -> is_error(Res)
end. end.
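%% A minimal sketch (hypothetical ranges) of the three check_writes/4
%% outcomes described above, against a model where only {0,1024} has
%% been written:
check_writes_cases_test() ->
    Written = [{0, 1024}],
    %% exact match on a written extent -> mostly_true (may still be ok
    %% if the data being written is byte-identical)
    ?assertEqual(mostly_true, check_writes(write, Written, 0, 1024)),
    %% partial overlap with a written extent -> true (write must fail)
    ?assertEqual(true, check_writes(write, Written, 512, 10)),
    %% disjoint range -> false (the write should succeed)
    ?assertEqual(false, check_writes(write, Written, 2048, 10)).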
write_next(S, Res, [_Pid, Offset, {Bin, _Tag, _Csum}]) -> write_next(S, Res, [_Pid, Offset, {Bin, _Tag, _Csum}]) ->
@ -344,40 +280,24 @@ append_next(S, Res, [_Pid, Extra, {Bin, _Tag, _Csum}]) ->
case is_ok(Res) of case is_ok(Res) of
true -> true ->
Offset = get_offset(Res), Offset = get_offset(Res),
S#state{prev_extra = Extra, true = offset_valid(Offset, S#state.prev_extra, S#state.written),
written = lists:sort(S#state.written ++ [{Offset, iolist_size(Bin)}])}; S#state{prev_extra = Extra, written = lists:sort(S#state.written ++ [{Offset, iolist_size(Bin)}])};
_Other -> _ ->
S S
end. end.
%% appends should always succeed unless the disk is full %% appends should always succeed unless the disk is full
%% or there's a hardware failure. %% or there's a hardware failure.
append_post(S, _Args, Res) -> append_post(_S, _Args, Res) ->
case is_ok(Res) of true == is_ok(Res).
true ->
Offset = get_offset(Res),
case erlang:max(last_byte(S#state.written),
last_byte(S#state.trimmed)) + S#state.prev_extra of
Offset ->
true;
UnexpectedByte ->
{wrong_offset_after_append,
{Offset, UnexpectedByte},
{S#state.written, S#state.prev_extra}}
end;
Error ->
Error
end.
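%% Worked example of the offset check above (illustrative): with
%% written = [{0,1024}], trimmed = [] and prev_extra = 42, the next
%% append is expected to land at offset 1024 + 42 = 1066; any other
%% offset is reported as {wrong_offset_after_append, ...}.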
%% rewrite %% rewrite
rewrite_pre(S) -> rewrite_pre(S) ->
S#state.pid /= undefined andalso S#state.pid /= undefined andalso S#state.written /= [].
(S#state.written ++ S#state.trimmed) /= [] .
rewrite_args(S) -> rewrite_args(S) ->
?LET({Off, Len}, ?LET({Off, Len}, get_written_interval(S#state.written),
get_written_interval(S#state.written ++ S#state.trimmed),
[S#state.pid, Off, data_with_csum(Len)]). [S#state.pid, Off, data_with_csum(Len)]).
rewrite(Pid, Offset, {Bin, Tag, Csum}) -> rewrite(Pid, Offset, {Bin, Tag, Csum}) ->
@ -391,88 +311,18 @@ rewrite_post(_S, _Args, Res) ->
rewrite_next(S, _Res, _Args) -> rewrite_next(S, _Res, _Args) ->
S#state{prev_extra = 0}. S#state{prev_extra = 0}.
%% trim
trim_pre(S) ->
S#state.pid /= undefined andalso S#state.planned_trims /= [].
trim_args(S) ->
{Offset, Length} = hd(S#state.planned_trims),
[S#state.pid, Offset, Length].
trim(Pid, Offset, Length) ->
machi_file_proxy:trim(Pid, Offset, Length, false).
trim_post(_S, [_Pid, _Offset, _Length], ok) ->
true;
trim_post(_S, [_Pid, _Offset, _Length], _Res) ->
false.
trim_next(S, Res, [_Pid, Offset, Length]) ->
S1 = case is_ok(Res) of
true ->
NewWritten = cleanup_chunk(Offset, Length, S#state.written),
Trimmed1 = cleanup_chunk(Offset, Length, S#state.trimmed),
NewTrimmed = lists:sort([{Offset, Length}|Trimmed1]),
S#state{trimmed=NewTrimmed,
written=NewWritten};
_Other ->
S
end,
S1#state{prev_extra=0,
planned_trims=tl(S#state.planned_trims)}.
stop_pre(S) ->
S#state.pid /= undefined.
stop_args(S) ->
[S#state.pid].
stop(Pid) ->
catch machi_file_proxy:stop(Pid).
stop_post(_, _, _) -> true.
stop_next(S, _, _) ->
S#state{pid=undefined, prev_extra=0}.
%% Property %% Property
prop_ok() -> prop_ok() ->
cleanup(), cleanup(),
?FORALL({I, T}, ?FORALL(I, shuffle_interval(),
{shuffle_interval(), shuffle_interval()}, ?FORALL(Cmds, parallel_commands(?MODULE, initial_state(I)),
?FORALL(Cmds, parallel_commands(?MODULE, initial_state(I, T)),
begin begin
{H, S, Res} = run_parallel_commands(?MODULE, Cmds), {H, S, Res} = run_parallel_commands(?MODULE, Cmds),
cleanup(),
pretty_commands(?MODULE, Cmds, {H, S, Res}, pretty_commands(?MODULE, Cmds, {H, S, Res},
aggregate(command_names(Cmds), Res == ok)) aggregate(command_names(Cmds), Res == ok))
end)). end)
).
%% Test for tester functions
chopper_test_() ->
[?_assertEqual([{0, 1024}],
get_overlaps(1, 1, [{0, 1024}], [])),
?_assertEqual([],
get_overlaps(10, 5, [{9, 1}, {15, 1}], [])),
?_assertEqual([{9,2},{14,1}],
get_overlaps(10, 5, [{9, 2}, {14, 1}], [])),
?_assertEqual([], chop(0, 0, [{0,2}])),
?_assertEqual([{0, 1}], chop(0, 1, [{0,2}])),
?_assertEqual([], chop(1, 0, [{0,2}])),
?_assertEqual([{1, 1}], chop(1, 1, [{0,2}])),
?_assertEqual([{1, 1}], chop(1, 2, [{0,2}])),
?_assertEqual([], chop(2, 1, [{0,2}])),
?_assertEqual([], chop(2, 2, [{0,2}])),
?_assertEqual([{1, 1}], chop(1, 3, [{0,2}])),
?_assertError(_, chop(3, 1, [{0,2}])),
?_assertEqual([], chop(2, 3, [{0,2}])),
?_assertEqual({0, 1}, chopped_left(1, [{0, 1024}])),
?_assertEqual([{0, 1}, {2, 1022}], cleanup_chunk(1, 1, [{0, 1024}])),
?_assertEqual([{2, 1022}], cleanup_chunk(0, 2, [{0, 1}, {2, 1022}])),
?_assert(true)
].
-endif. % EQC -endif. % EQC
-endif. % TEST -endif. % TEST


@ -38,7 +38,7 @@ clean_up_data_dir(DataDir) ->
-ifndef(PULSE). -ifndef(PULSE).
-define(TESTDIR, "./t"). -define(TESTDIR, "./t").
-define(HYOOGE, 75 * 1024 * 1024). % 75 MBytes -define(HYOOGE, 1 * 1024 * 1024 * 1024). % 1 long GB
random_binary_single() -> random_binary_single() ->
%% OK, I guess it's not that random... %% OK, I guess it's not that random...
@ -76,67 +76,28 @@ random_binary(Start, End) ->
binary:part(random_binary_single(), Start, End) binary:part(random_binary_single(), Start, End)
end. end.
setup() ->
{ok, Pid} = machi_file_proxy:start_link(fluname, "test", ?TESTDIR),
Pid.
teardown(Pid) ->
catch machi_file_proxy:stop(Pid).
machi_file_proxy_test_() -> machi_file_proxy_test_() ->
clean_up_data_dir(?TESTDIR), clean_up_data_dir(?TESTDIR),
{setup, {ok, Pid} = machi_file_proxy:start_link("test", ?TESTDIR),
fun setup/0,
fun teardown/1,
fun(Pid) ->
[ [
?_assertEqual({error, bad_arg}, machi_file_proxy:read(Pid, -1, -1)), ?_assertEqual({error, bad_arg}, machi_file_proxy:read(Pid, -1, -1)),
?_assertEqual({error, bad_arg}, machi_file_proxy:write(Pid, -1, <<"yo">>)), ?_assertEqual({error, bad_arg}, machi_file_proxy:write(Pid, -1, <<"yo">>)),
?_assertEqual({error, bad_arg}, machi_file_proxy:append(Pid, [], -1, <<"krep">>)), ?_assertEqual({error, bad_arg}, machi_file_proxy:append(Pid, [], -1, <<"krep">>)),
?_assertMatch({ok, {_, []}}, machi_file_proxy:read(Pid, 1, 1)), ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1, 1)),
?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1024, 1)), ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1024, 1)),
?_assertMatch({ok, {_, []}}, machi_file_proxy:read(Pid, 1, 1024)), ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1, 1024)),
?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1024, ?HYOOGE)), ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, 1024, ?HYOOGE)),
?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, ?HYOOGE, 1)), ?_assertEqual({error, not_written}, machi_file_proxy:read(Pid, ?HYOOGE, 1)),
{timeout, 10, ?_assertEqual({error, written}, machi_file_proxy:write(Pid, 1, random_binary(0, ?HYOOGE))),
?_assertEqual({error, written}, machi_file_proxy:write(Pid, 1, random_binary(0, ?HYOOGE)))},
?_assertMatch({ok, "test", _}, machi_file_proxy:append(Pid, random_binary(0, 1024))), ?_assertMatch({ok, "test", _}, machi_file_proxy:append(Pid, random_binary(0, 1024))),
?_assertEqual({error, written}, machi_file_proxy:write(Pid, 1024, <<"fail">>)), ?_assertEqual({error, written}, machi_file_proxy:write(Pid, 1024, <<"fail">>)),
?_assertEqual({error, written}, machi_file_proxy:write(Pid, 1, <<"fail">>)), ?_assertEqual({error, written}, machi_file_proxy:write(Pid, 1, <<"fail">>)),
?_assertMatch({ok, {[{_, _, _, _}], []}}, machi_file_proxy:read(Pid, 1025, 1000)), ?_assertMatch({ok, _, _}, machi_file_proxy:read(Pid, 1025, 1000)),
?_assertMatch({ok, "test", _}, machi_file_proxy:append(Pid, [], 1024, <<"mind the gap">>)), ?_assertMatch({ok, "test", _}, machi_file_proxy:append(Pid, [], 1024, <<"mind the gap">>)),
?_assertEqual(ok, machi_file_proxy:write(Pid, 2060, [], random_binary(0, 1024))) ?_assertEqual(ok, machi_file_proxy:write(Pid, 2060, [], random_binary(0, 1024))),
] ?_assertException(exit, {normal, _}, machi_file_proxy:stop(Pid))
end}. ].
multiple_chunks_read_test_() ->
clean_up_data_dir(?TESTDIR),
{setup,
fun setup/0,
fun teardown/1,
fun(Pid) ->
[
?_assertEqual(ok, machi_file_proxy:trim(Pid, 0, 1, false)),
?_assertMatch({ok, {[], [{"test", 0, 1}]}},
machi_file_proxy:read(Pid, 0, 1,
#read_opts{needs_trimmed=true})),
?_assertMatch({ok, "test", _}, machi_file_proxy:append(Pid, random_binary(0, 1024))),
?_assertEqual(ok, machi_file_proxy:write(Pid, 10000, <<"fail">>)),
?_assertEqual(ok, machi_file_proxy:write(Pid, 20000, <<"fail">>)),
?_assertEqual(ok, machi_file_proxy:write(Pid, 30000, <<"fail">>)),
%% Freeza
?_assertEqual(ok, machi_file_proxy:write(Pid, 530000, <<"fail">>)),
?_assertMatch({ok, {[{"test", 1024, _, _},
{"test", 10000, <<"fail">>, _},
{"test", 20000, <<"fail">>, _},
{"test", 30000, <<"fail">>, _},
{"test", 530000, <<"fail">>, _}], []}},
machi_file_proxy:read(Pid, 1024, 530000)),
?_assertMatch({ok, {[{"test", 1, _, _}], [{"test", 0, 1}]}},
machi_file_proxy:read(Pid, 0, 1024,
#read_opts{needs_trimmed=true}))
]
end}.
-endif. % !PULSE -endif. % !PULSE
-endif. % TEST. -endif. % TEST.


@ -30,22 +30,6 @@
-define(FLU, machi_flu1). -define(FLU, machi_flu1).
-define(FLU_C, machi_flu1_client). -define(FLU_C, machi_flu1_client).
get_env_vars(App, Ks) ->
Raw = [application:get_env(App, K) || K <- Ks],
Old = lists:zip(Ks, Raw),
{App, Old}.
clean_up_env_vars({App, Old}) ->
[case Res of
undefined ->
application:unset_env(App, K);
{ok, V} ->
application:set_env(App, K, V)
end || {K, Res} <- Old].
filter_env_var({ok, V}) -> V;
filter_env_var(Else) -> Else.
clean_up_data_dir(DataDir) -> clean_up_data_dir(DataDir) ->
[begin [begin
Fs = filelib:wildcard(DataDir ++ Glob), Fs = filelib:wildcard(DataDir ++ Glob),
@ -85,46 +69,47 @@ maybe_start_sup() ->
Pid -> Pid Pid -> Pid
end. end.
-ifndef(PULSE). -ifndef(PULSE).
flu_smoke_test() -> flu_smoke_test() ->
Host = "localhost", Host = "localhost",
TcpPort = 12957, TcpPort = 32957,
DataDir = "./data", DataDir = "./data",
NSInfo = undefined,
NoCSum = <<>>,
Prefix = <<"prefix!">>, Prefix = <<"prefix!">>,
BadPrefix = BadFile = "no/good", BadPrefix = BadFile = "no/good",
W_props = [{initial_wedged, false}], W_props = [{initial_wedged, false}],
{_, _, _} = machi_test_util:start_flu_package(smoke_flu, TcpPort, DataDir, W_props), start_flu_package(smoke_flu, TcpPort, DataDir, W_props),
try try
Msg = "Hello, world!", Msg = "Hello, world!",
Msg = ?FLU_C:echo(Host, TcpPort, Msg), Msg = ?FLU_C:echo(Host, TcpPort, Msg),
{error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort,"does-not-exist"), {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort,
{error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, BadFile), ?DUMMY_PV1_EPOCH,
"does-not-exist"),
{error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort,
?DUMMY_PV1_EPOCH, BadFile),
{ok, []} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH), {ok, []} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH),
{ok, {false, _,_,_}} = ?FLU_C:wedge_status(Host, TcpPort), {ok, {false, _}} = ?FLU_C:wedge_status(Host, TcpPort),
Chunk1 = <<"yo!">>, Chunk1 = <<"yo!">>,
{ok, {Off1,Len1,File1}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, {ok, {Off1,Len1,File1}} = ?FLU_C:append_chunk(Host, TcpPort,
?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
Prefix, Chunk1, NoCSum), Prefix, Chunk1),
{ok, {[{_, Off1, Chunk1, _}], _}} = ?FLU_C:read_chunk(Host, TcpPort, {ok, Chunk1} = ?FLU_C:read_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH,
NSInfo, ?DUMMY_PV1_EPOCH, File1, Off1, Len1),
File1, Off1, Len1, {ok, KludgeBin} = ?FLU_C:checksum_list(Host, TcpPort,
noopt), ?DUMMY_PV1_EPOCH, File1),
{ok, KludgeBin} = ?FLU_C:checksum_list(Host, TcpPort, File1),
true = is_binary(KludgeBin), true = is_binary(KludgeBin),
{error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort,
?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
BadPrefix, Chunk1, NoCSum), BadPrefix, Chunk1),
{ok, [{_,File1}]} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH), {ok, [{_,File1}]} = ?FLU_C:list_files(Host, TcpPort, ?DUMMY_PV1_EPOCH),
Len1 = size(Chunk1), Len1 = size(Chunk1),
{error, not_written} = ?FLU_C:read_chunk(Host, TcpPort, {error, not_written} = ?FLU_C:read_chunk(Host, TcpPort,
NSInfo, ?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
File1, Off1*983829323, Len1, File1, Off1*983829323, Len1),
noopt),
%% XXX FIXME %% XXX FIXME
%% %%
%% This is failing because the read extends past the end of the file. %% This is failing because the read extends past the end of the file.
@ -133,22 +118,19 @@ flu_smoke_test() ->
%% of the read will cause it to fail. %% of the read will cause it to fail.
%% %%
%% {error, partial_read} = ?FLU_C:read_chunk(Host, TcpPort, %% {error, partial_read} = ?FLU_C:read_chunk(Host, TcpPort,
%% NSInfo, ?DUMMY_PV1_EPOCH, %% ?DUMMY_PV1_EPOCH,
%% File1, Off1, Len1*9999), %% File1, Off1, Len1*9999),
{ok, {Off1b,Len1b,File1b}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, {ok, {Off1b,Len1b,File1b}} = ?FLU_C:append_chunk(Host, TcpPort,
?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
Prefix, Chunk1,NoCSum), Prefix, Chunk1),
Extra = 42, Extra = 42,
Opts1 = #append_opts{chunk_extra=Extra}, {ok, {Off1c,Len1c,File1c}} = ?FLU_C:append_chunk_extra(Host, TcpPort,
{ok, {Off1c,Len1c,File1c}} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo,
?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
Prefix, Chunk1, NoCSum, Prefix, Chunk1, Extra),
Opts1, infinity),
{ok, {Off1d,Len1d,File1d}} = ?FLU_C:append_chunk(Host, TcpPort, {ok, {Off1d,Len1d,File1d}} = ?FLU_C:append_chunk(Host, TcpPort,
NSInfo,
?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
Prefix, Chunk1,NoCSum), Prefix, Chunk1),
if File1b == File1c, File1c == File1d -> if File1b == File1c, File1c == File1d ->
true = (Off1c == Off1b + Len1b), true = (Off1c == Off1b + Len1b),
true = (Off1d == Off1c + Len1c + Extra); true = (Off1d == Off1c + Len1c + Extra);
@ -156,44 +138,27 @@ flu_smoke_test() ->
exit(not_mandatory_but_test_expected_same_file_fixme) exit(not_mandatory_but_test_expected_same_file_fixme)
end, end,
Chunk1_cs = {<<?CSUM_TAG_NONE:8, 0:(8*20)>>, Chunk1},
{ok, {Off1e,Len1e,File1e}} = ?FLU_C:append_chunk(Host, TcpPort,
?DUMMY_PV1_EPOCH,
Prefix, Chunk1_cs),
Chunk2 = <<"yo yo">>, Chunk2 = <<"yo yo">>,
Len2 = byte_size(Chunk2), Len2 = byte_size(Chunk2),
Off2 = ?MINIMUM_OFFSET + 77, Off2 = ?MINIMUM_OFFSET + 77,
File2 = "smoke-whole-file^^0^1^1", File2 = "smoke-whole-file^1^1",
ok = ?FLU_C:write_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, ok = ?FLU_C:write_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH,
File2, Off2, Chunk2, NoCSum), File2, Off2, Chunk2),
{error, bad_arg} = ?FLU_C:write_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, {error, bad_arg} = ?FLU_C:write_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH,
BadFile, Off2, Chunk2, NoCSum), BadFile, Off2, Chunk2),
{ok, {[{_, Off2, Chunk2, _}], _}} = {ok, Chunk2} = ?FLU_C:read_chunk(Host, TcpPort, ?DUMMY_PV1_EPOCH,
?FLU_C:read_chunk(Host, TcpPort, NSInfo, ?DUMMY_PV1_EPOCH, File2, Off2, Len2, noopt), File2, Off2, Len2),
{error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort,
NSInfo, ?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
"no!!", Off2, Len2, noopt), "no!!", Off2, Len2),
{error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort,
NSInfo, ?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
BadFile, Off2, Len2, noopt), BadFile, Off2, Len2),
%% Make a connected socket.
Sock1 = ?FLU_C:connect(#p_srvr{address=Host, port=TcpPort}),
%% Let's test some cluster version enforcement.
Good_EpochNum = 0,
Good_NSVersion = 0,
Good_NS = <<>>,
{ok, {false, {Good_EpochNum,_}, Good_NSVersion, GoodNS}} =
?FLU_C:wedge_status(Sock1),
NS_good = #ns_info{version=Good_NSVersion, name=Good_NS},
{ok, {[{_, Off2, Chunk2, _}], _}} =
?FLU_C:read_chunk(Sock1, NS_good, ?DUMMY_PV1_EPOCH,
File2, Off2, Len2, noopt),
NS_bad_version = #ns_info{version=1, name=Good_NS},
NS_bad_name = #ns_info{version=Good_NSVersion, name= <<"foons">>},
{error, bad_epoch} =
?FLU_C:read_chunk(Sock1, NS_bad_version, ?DUMMY_PV1_EPOCH,
File2, Off2, Len2, noopt),
{error, bad_arg} =
?FLU_C:read_chunk(Sock1, NS_bad_name, ?DUMMY_PV1_EPOCH,
File2, Off2, Len2, noopt),
%% We know that File1 still exists. Pretend that we've done a %% We know that File1 still exists. Pretend that we've done a
%% migration and exercise the delete_migration() API. %% migration and exercise the delete_migration() API.
@ -210,23 +175,25 @@ flu_smoke_test() ->
{error, bad_arg} = ?FLU_C:trunc_hack(Host, TcpPort, {error, bad_arg} = ?FLU_C:trunc_hack(Host, TcpPort,
?DUMMY_PV1_EPOCH, BadFile), ?DUMMY_PV1_EPOCH, BadFile),
ok = ?FLU_C:quit(Sock1) ok = ?FLU_C:quit(?FLU_C:connect(#p_srvr{address=Host,
port=TcpPort}))
after after
machi_test_util:stop_flu_package() stop_flu_package(smoke_flu)
end. end.
flu_projection_smoke_test() -> flu_projection_smoke_test() ->
Host = "localhost", Host = "localhost",
TcpPort = 12959, TcpPort = 32959,
DataDir = "./data.projst", DataDir = "./data.projst",
{_,_,_} = machi_test_util:start_flu_package(projection_test_flu, TcpPort, DataDir),
start_flu_package(projection_test_flu, TcpPort, DataDir),
try try
[ok = flu_projection_common(Host, TcpPort, T) || [ok = flu_projection_common(Host, TcpPort, T) ||
T <- [public, private] ] T <- [public, private] ]
%% , {ok, {false, EpochID1,_,_}} = ?FLU_C:wedge_status(Host, TcpPort), %% , {ok, {false, EpochID1}} = ?FLU_C:wedge_status(Host, TcpPort),
%% io:format(user, "EpochID1 ~p\n", [EpochID1]) %% io:format(user, "EpochID1 ~p\n", [EpochID1])
after after
machi_test_util:stop_flu_package() stop_flu_package(projection_test_flu)
end. end.
flu_projection_common(Host, TcpPort, T) -> flu_projection_common(Host, TcpPort, T) ->
@ -254,32 +221,30 @@ flu_projection_common(Host, TcpPort, T) ->
bad_checksum_test() -> bad_checksum_test() ->
Host = "localhost", Host = "localhost",
TcpPort = 12960, TcpPort = 32960,
DataDir = "./data.bct", DataDir = "./data.bct",
Opts = [{initial_wedged, false}], Opts = [{initial_wedged, false}],
{_,_,_} = machi_test_util:start_flu_package(projection_test_flu, TcpPort, DataDir, Opts), start_flu_package(projection_test_flu, TcpPort, DataDir, Opts),
NSInfo = undefined,
try try
Prefix = <<"some prefix">>, Prefix = <<"some prefix">>,
Chunk1 = <<"yo yo yo">>, Chunk1 = <<"yo yo yo">>,
BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, ".................")}, Chunk1_badcs = {<<?CSUM_TAG_CLIENT_SHA:8, 0:(8*20)>>, Chunk1},
{error, bad_checksum} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, {error, bad_checksum} = ?FLU_C:append_chunk(Host, TcpPort,
?DUMMY_PV1_EPOCH, ?DUMMY_PV1_EPOCH,
Prefix, Prefix, Chunk1_badcs),
Chunk1, BadCSum),
ok ok
after after
machi_test_util:stop_flu_package() stop_flu_package(projection_test_flu)
end. end.
witness_test() -> witness_test() ->
Host = "localhost", Host = "localhost",
TcpPort = 12961, TcpPort = 32961,
DataDir = "./data.witness", DataDir = "./data.witness",
Opts = [{initial_wedged, false}, {witness_mode, true}], Opts = [{initial_wedged, false}, {witness_mode, true}],
{_,_,_} = machi_test_util:start_flu_package(projection_test_flu, TcpPort, DataDir, Opts), start_flu_package(projection_test_flu, TcpPort, DataDir, Opts),
NSInfo = undefined,
NoCSum = <<>>,
try try
Prefix = <<"some prefix">>, Prefix = <<"some prefix">>,
Chunk1 = <<"yo yo yo">>, Chunk1 = <<"yo yo yo">>,
@ -292,14 +257,15 @@ witness_test() ->
{ok, EpochID1} = ?FLU_C:get_latest_epochid(Host, TcpPort, private), {ok, EpochID1} = ?FLU_C:get_latest_epochid(Host, TcpPort, private),
%% Witness-protected ops all fail %% Witness-protected ops all fail
{error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, NSInfo, EpochID1, {error, bad_arg} = ?FLU_C:append_chunk(Host, TcpPort, EpochID1,
Prefix, Chunk1, NoCSum), Prefix, Chunk1),
File = <<"foofile">>, File = <<"foofile">>,
{error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, NSInfo, EpochID1, {error, bad_arg} = ?FLU_C:read_chunk(Host, TcpPort, EpochID1,
File, 9999, 9999, noopt), File, 9999, 9999),
{error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, File), {error, bad_arg} = ?FLU_C:checksum_list(Host, TcpPort, EpochID1,
File),
{error, bad_arg} = ?FLU_C:list_files(Host, TcpPort, EpochID1), {error, bad_arg} = ?FLU_C:list_files(Host, TcpPort, EpochID1),
{ok, {false, EpochID1,_,_}} = ?FLU_C:wedge_status(Host, TcpPort), {ok, {false, EpochID1}} = ?FLU_C:wedge_status(Host, TcpPort),
{ok, _} = ?FLU_C:get_latest_epochid(Host, TcpPort, public), {ok, _} = ?FLU_C:get_latest_epochid(Host, TcpPort, public),
{ok, _} = ?FLU_C:read_latest_projection(Host, TcpPort, public), {ok, _} = ?FLU_C:read_latest_projection(Host, TcpPort, public),
{error, not_written} = ?FLU_C:read_projection(Host, TcpPort, {error, not_written} = ?FLU_C:read_projection(Host, TcpPort,
@ -310,7 +276,7 @@ witness_test() ->
ok ok
after after
machi_test_util:stop_flu_package() stop_flu_package(projection_test_flu)
end. end.
%% The purpose of timing_pb_encoding_test_ and timing_bif_encoding_test_ is %% The purpose of timing_pb_encoding_test_ and timing_bif_encoding_test_ is


@ -38,12 +38,12 @@ smoke_test_() ->
{timeout, 5*60, fun() -> smoke_test2() end}. {timeout, 5*60, fun() -> smoke_test2() end}.
smoke_test2() -> smoke_test2() ->
Ps = [{a,#p_srvr{name=a, address="localhost", port=5550, props="./data.a"}}, Ps = [{a,#p_srvr{name=a, address="localhost", port=5555, props="./data.a"}},
{b,#p_srvr{name=b, address="localhost", port=5551, props="./data.b"}}, {b,#p_srvr{name=b, address="localhost", port=5556, props="./data.b"}},
{c,#p_srvr{name=c, address="localhost", port=5552, props="./data.c"}} {c,#p_srvr{name=c, address="localhost", port=5557, props="./data.c"}}
], ],
[os:cmd("rm -rf " ++ P#p_srvr.props) || {_,P} <- Ps], [os:cmd("rm -rf " ++ P#p_srvr.props) || {_,P} <- Ps],
{ok, SupPid} = machi_sup:start_link(), {ok, SupPid} = machi_flu_sup:start_link(),
try try
%% Only run a, don't run b & c so we have 100% failures talking to them %% Only run a, don't run b & c so we have 100% failures talking to them
[begin [begin
@ -66,15 +66,15 @@ partial_stop_restart_test_() ->
{timeout, 5*60, fun() -> partial_stop_restart2() end}. {timeout, 5*60, fun() -> partial_stop_restart2() end}.
partial_stop_restart2() -> partial_stop_restart2() ->
Ps = [{a,#p_srvr{name=a, address="localhost", port=5560, props="./data.a"}}, Ps = [{a,#p_srvr{name=a, address="localhost", port=5555, props="./data.a"}},
{b,#p_srvr{name=b, address="localhost", port=5561, props="./data.b"}}, {b,#p_srvr{name=b, address="localhost", port=5556, props="./data.b"}},
{c,#p_srvr{name=c, address="localhost", port=5562, props="./data.c"}} {c,#p_srvr{name=c, address="localhost", port=5557, props="./data.c"}}
], ],
ChMgrs = [machi_flu_psup:make_mgr_supname(P#p_srvr.name) || {_,P} <-Ps], ChMgrs = [machi_flu_psup:make_mgr_supname(P#p_srvr.name) || {_,P} <-Ps],
PStores = [machi_flu_psup:make_proj_supname(P#p_srvr.name) || {_,P} <-Ps], PStores = [machi_flu_psup:make_proj_supname(P#p_srvr.name) || {_,P} <-Ps],
Dict = orddict:from_list(Ps), Dict = orddict:from_list(Ps),
[os:cmd("rm -rf " ++ P#p_srvr.props) || {_,P} <- Ps], [os:cmd("rm -rf " ++ P#p_srvr.props) || {_,P} <- Ps],
{ok, SupPid} = machi_sup:start_link(), {ok, SupPid} = machi_flu_sup:start_link(),
DbgProps = [{initial_wedged, true}], DbgProps = [{initial_wedged, true}],
Start = fun({_,P}) -> Start = fun({_,P}) ->
#p_srvr{name=Name, port=Port, props=Dir} = P, #p_srvr{name=Name, port=Port, props=Dir} = P,
@ -84,23 +84,20 @@ partial_stop_restart2() ->
WedgeStatus = fun({_,#p_srvr{address=Addr, port=TcpPort}}) -> WedgeStatus = fun({_,#p_srvr{address=Addr, port=TcpPort}}) ->
machi_flu1_client:wedge_status(Addr, TcpPort) machi_flu1_client:wedge_status(Addr, TcpPort)
end, end,
NSInfo = undefined,
Append = fun({_,#p_srvr{address=Addr, port=TcpPort}}, EpochID) -> Append = fun({_,#p_srvr{address=Addr, port=TcpPort}}, EpochID) ->
NoCSum = <<>>,
machi_flu1_client:append_chunk(Addr, TcpPort, machi_flu1_client:append_chunk(Addr, TcpPort,
NSInfo, EpochID, EpochID,
<<"prefix">>, <<"prefix">>, <<"data">>)
<<"data">>, NoCSum)
end, end,
try try
[Start(P) || P <- Ps], [Start(P) || P <- Ps],
[{ok, {true, _,_,_}} = WedgeStatus(P) || P <- Ps], % all are wedged [{ok, {true, _}} = WedgeStatus(P) || P <- Ps], % all are wedged
[{error,wedged} = Append(P, ?DUMMY_PV1_EPOCH) || P <- Ps], % all are wedged [{error,wedged} = Append(P, ?DUMMY_PV1_EPOCH) || P <- Ps], % all are wedged
[machi_chain_manager1:set_chain_members(ChMgr, Dict) || [machi_chain_manager1:set_chain_members(ChMgr, Dict) ||
ChMgr <- ChMgrs ], ChMgr <- ChMgrs ],
{ok, {false, EpochID1,_,_}} = WedgeStatus(hd(Ps)), {ok, {false, EpochID1}} = WedgeStatus(hd(Ps)),
[{ok, {false, EpochID1,_,_}} = WedgeStatus(P) || P <- Ps], % *not* wedged [{ok, {false, EpochID1}} = WedgeStatus(P) || P <- Ps], % *not* wedged
[{ok,_} = Append(P, EpochID1) || P <- Ps], % *not* wedged [{ok,_} = Append(P, EpochID1) || P <- Ps], % *not* wedged
{ok, {_,_,File1}} = Append(hd(Ps), EpochID1), {ok, {_,_,File1}} = Append(hd(Ps), EpochID1),
@ -126,9 +123,9 @@ partial_stop_restart2() ->
Epoch_m = Proj_m#projection_v1.epoch_number, Epoch_m = Proj_m#projection_v1.epoch_number,
%% Confirm that all FLUs are *not* wedged, with correct proj & epoch %% Confirm that all FLUs are *not* wedged, with correct proj & epoch
Proj_mCSum = Proj_m#projection_v1.epoch_csum, Proj_mCSum = Proj_m#projection_v1.epoch_csum,
[{ok, {false, {Epoch_m, Proj_mCSum},_,_}} = WedgeStatus(P) || % *not* wedged [{ok, {false, {Epoch_m, Proj_mCSum}}} = WedgeStatus(P) || % *not* wedged
P <- Ps], P <- Ps],
{ok, {false, EpochID1,_,_}} = WedgeStatus(hd(Ps)), {ok, {false, EpochID1}} = WedgeStatus(hd(Ps)),
[{ok,_} = Append(P, EpochID1) || P <- Ps], % *not* wedged [{ok,_} = Append(P, EpochID1) || P <- Ps], % *not* wedged
%% Stop all but 'a'. %% Stop all but 'a'.
@ -148,10 +145,10 @@ partial_stop_restart2() ->
{error, wedged} = Append(hd(Ps), EpochID1), {error, wedged} = Append(hd(Ps), EpochID1),
{_, #p_srvr{address=Addr_a, port=TcpPort_a}} = hd(Ps), {_, #p_srvr{address=Addr_a, port=TcpPort_a}} = hd(Ps),
{error, wedged} = machi_flu1_client:read_chunk( {error, wedged} = machi_flu1_client:read_chunk(
Addr_a, TcpPort_a, NSInfo, ?DUMMY_PV1_EPOCH, Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH,
<<>>, 99999999, 1, undefined), <<>>, 99999999, 1),
{error, bad_arg} = machi_flu1_client:checksum_list( {error, wedged} = machi_flu1_client:checksum_list(
Addr_a, TcpPort_a, <<>>), Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH, <<>>),
%% list_files() is permitted despite wedged status %% list_files() is permitted despite wedged status
{ok, _} = machi_flu1_client:list_files( {ok, _} = machi_flu1_client:list_files(
Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH), Addr_a, TcpPort_a, ?DUMMY_PV1_EPOCH),
@ -160,7 +157,7 @@ partial_stop_restart2() ->
{now_using,_,Epoch_n} = machi_chain_manager1:trigger_react_to_env( {now_using,_,Epoch_n} = machi_chain_manager1:trigger_react_to_env(
hd(ChMgrs)), hd(ChMgrs)),
true = (Epoch_n > Epoch_m), true = (Epoch_n > Epoch_m),
{ok, {false, EpochID3,_,_}} = WedgeStatus(hd(Ps)), {ok, {false, EpochID3}} = WedgeStatus(hd(Ps)),
%% The file we're assigned should be different with the epoch change. %% The file we're assigned should be different with the epoch change.
{ok, {_,_,File3}} = Append(hd(Ps), EpochID3), {ok, {_,_,File3}} = Append(hd(Ps), EpochID3),
true = (File1 /= File3), true = (File1 /= File3),
@ -176,19 +173,6 @@ partial_stop_restart2() ->
ok ok
end. end.
p_srvr_rec_test() ->
P = #p_srvr{name=a, address="localhost", port=1024, props=[yo]},
[P] = machi_flu_sup:sanitize_p_srvr_records([P]),
[P] = machi_flu_sup:sanitize_p_srvr_records([P,P]),
[] = machi_flu_sup:sanitize_p_srvr_records([nope]),
[] = machi_flu_sup:sanitize_p_srvr_records([#p_srvr{proto_mod=does_not_exist}]),
[] = machi_flu_sup:sanitize_p_srvr_records([#p_srvr{proto_mod="lists"}]),
[] = machi_flu_sup:sanitize_p_srvr_records([#p_srvr{address=7}]),
[] = machi_flu_sup:sanitize_p_srvr_records([#p_srvr{port=5}]),
[] = machi_flu_sup:sanitize_p_srvr_records([#p_srvr{port=foo}]),
[] = machi_flu_sup:sanitize_p_srvr_records([#p_srvr{props=foo}]),
ok.
-endif. % !PULSE -endif. % !PULSE
-endif. % TEST -endif. % TEST


@ -1,307 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2014 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
-module(machi_lifecycle_mgr_test).
-compile(export_all).
-ifdef(TEST).
-ifndef(PULSE).
-include_lib("eunit/include/eunit.hrl").
-include("machi.hrl").
-include("machi_projection.hrl").
-define(MGR, machi_chain_manager1).
setup() ->
catch application:stop(machi),
{ok, SupPid} = machi_sup:start_link(),
error_logger:tty(false),
Dir = "./" ++ atom_to_list(?MODULE) ++ ".datadir",
machi_flu1_test:clean_up_data_dir(Dir ++ "/*/*"),
machi_flu1_test:clean_up_data_dir(Dir),
Envs = [{flu_data_dir, Dir ++ "/data/flu"},
{flu_config_dir, Dir ++ "/etc/flu-config"},
{chain_config_dir, Dir ++ "/etc/chain-config"},
{platform_data_dir, Dir ++ "/data"},
{platform_etc_dir, Dir ++ "/etc"},
{not_used_pending, Dir ++ "/etc/pending"}
],
EnvKeys = [K || {K,_V} <- Envs],
undefined = application:get_env(machi, yo),
Cleanup = machi_flu1_test:get_env_vars(machi, EnvKeys ++ [yo]),
[begin
filelib:ensure_dir(V ++ "/unused"),
application:set_env(machi, K, V)
end || {K, V} <- Envs],
{SupPid, Dir, Cleanup}.
cleanup({SupPid, Dir, Cleanup}) ->
exit(SupPid, normal),
machi_util:wait_for_death(SupPid, 100),
error_logger:tty(true),
catch application:stop(machi),
machi_flu1_test:clean_up_data_dir(Dir ++ "/*/*"),
machi_flu1_test:clean_up_data_dir(Dir),
machi_flu1_test:clean_up_env_vars(Cleanup),
undefined = application:get_env(machi, yo),
ok.
smoke_test_() ->
{timeout, 60, fun() -> smoke_test2() end}.
smoke_test2() ->
YoCleanup = setup(),
try
Prefix = <<"pre">>,
Chunk1 = <<"yochunk">>,
Host = "localhost",
PortBase = 60120,
Pa = #p_srvr{name=a,address="localhost",port=PortBase+0},
Pb = #p_srvr{name=b,address="localhost",port=PortBase+1},
Pc = #p_srvr{name=c,address="localhost",port=PortBase+2},
%% Pstore_a = machi_flu1:make_projection_server_regname(a),
%% Pstore_b = machi_flu1:make_projection_server_regname(b),
%% Pstore_c = machi_flu1:make_projection_server_regname(c),
Pstores = [Pstore_a, Pstore_b, Pstore_c] =
[machi_flu1:make_projection_server_regname(a),
machi_flu1:make_projection_server_regname(b),
machi_flu1:make_projection_server_regname(c)],
ChMgrs = [ChMgr_a, ChMgr_b, ChMgr_c] =
[machi_chain_manager1:make_chmgr_regname(a),
machi_chain_manager1:make_chmgr_regname(b),
machi_chain_manager1:make_chmgr_regname(c)],
Fits = [Fit_a, Fit_b, Fit_c] =
[machi_flu_psup:make_fitness_regname(a),
machi_flu_psup:make_fitness_regname(b),
machi_flu_psup:make_fitness_regname(c)],
Advance = machi_chain_manager1_test:make_advance_fun(
Fits, [a,b,c], ChMgrs, 3),
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("\nSTEP: Start 3 FLUs, no chain.\n", []),
[machi_lifecycle_mgr:make_pending_config(P) || P <- [Pa,Pb,Pc] ],
{[_,_,_],[]} = machi_lifecycle_mgr:process_pending(),
[{ok, #projection_v1{epoch_number=0}} =
machi_projection_store:read_latest_projection(PSTORE, private)
|| PSTORE <- Pstores],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("\nSTEP: Start chain = [a,b,c]\n", []),
C1 = #chain_def_v1{name=cx, mode=ap_mode, full=[Pa,Pb,Pc],
local_run=[a,b,c]},
machi_lifecycle_mgr:make_pending_config(C1),
{[],[_]} = machi_lifecycle_mgr:process_pending(),
Advance(),
[{ok, #projection_v1{all_members=[a,b,c]}} =
machi_projection_store:read_latest_projection(PSTORE, private)
|| PSTORE <- Pstores],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("\nSTEP: Reset chain = [b,c]\n", []),
C2 = #chain_def_v1{name=cx, mode=ap_mode, full=[Pb,Pc],
old_full=[a,b,c], old_witnesses=[],
local_stop=[a], local_run=[b,c]},
machi_lifecycle_mgr:make_pending_config(C2),
{[],[_]} = machi_lifecycle_mgr:process_pending(),
Advance(),
%% a should be down
{'EXIT', _} = (catch machi_projection_store:read_latest_projection(
hd(Pstores), private)),
[{ok, #projection_v1{all_members=[b,c]}} =
machi_projection_store:read_latest_projection(PSTORE, private)
|| PSTORE <- tl(Pstores)],
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
io:format("\nSTEP: Reset chain = []\n", []),
C3 = #chain_def_v1{name=cx, mode=ap_mode, full=[],
old_full=[b,c], old_witnesses=[],
local_stop=[b,c], local_run=[]},
machi_lifecycle_mgr:make_pending_config(C3),
{[],[_]} = machi_lifecycle_mgr:process_pending(),
Advance(),
%% a,b,c should be down
[{'EXIT', _} = (catch machi_projection_store:read_latest_projection(
PSTORE, private))
|| PSTORE <- Pstores],
ok
after
cleanup(YoCleanup)
end.
ast_tuple_syntax_test() ->
T = fun(L) -> machi_lifecycle_mgr:check_ast_tuple_syntax(L) end,
Canon1 = [ {host, "localhost", []},
{host, "localhost", [{client_interface, "1.2.3.4"},
{admin_interface, "5.6.7.8"}]},
{flu, 'fx', "foohost", 4000, []},
switch_old_and_new,
{chain, 'cy', ['fx', 'fy'], [{foo,"yay"},{bar,baz}]} ],
{_Good,[]=_Bad} = T(Canon1),
Canon1_norm = machi_lifecycle_mgr:normalize_ast_tuple_syntax(Canon1),
true = (length(Canon1) == length(Canon1_norm)),
{Canon1_norm_b, []} = T(Canon1_norm),
true = (length(Canon1_norm) == length(Canon1_norm_b)),
{[],[_,_,_,_]} =
T([ {host, 'localhost', []},
{host, 'localhost', yo},
{host, "localhost", [{client_interface, 77.88293829832}]},
{host, "localhost", [{client_interface, "1.2.3.4"},
{bummer, "5.6.7.8"}]} ]),
{[],[_,_,_,_,_,_]} =
T([ {flu, 'fx', 'foohost', 4000, []},
{flu, 'fx', <<"foohost">>, 4000, []},
{flu, 'fx', "foohost", -4000, []},
{flu, 'fx', "foohost", 40009999, []},
{flu, 'fx', "foohost", 4000, gack},
{flu, 'fx', "foohost", 4000, [22]} ]),
{[],[_,_,_]} =
T([ {chain, 'cy', ["fx", "fy"], [foo,{bar,baz}]},
yoloyolo,
{chain, "cy", ["fx", 27], oops,arity,way,way,way,too,big,x}
]).
ast_run_test() ->
PortBase = 20300,
R1 = [
{host, "localhost", "localhost", "localhost", []},
{flu, 'f0', "localhost", PortBase+0, []},
{flu, 'f1', "localhost", PortBase+1, []},
{chain, 'ca', ['f0'], []},
{chain, 'cb', ['f1'], []},
switch_old_and_new,
{flu, 'f2', "localhost", PortBase+2, []},
{flu, 'f3', "localhost", PortBase+3, []},
{flu, 'f4', "localhost", PortBase+4, []},
{chain, 'ca', ['f0', 'f2'], []},
{chain, 'cc', ['f3', 'f4'], []}
],
{ok, Env1} = machi_lifecycle_mgr:run_ast(R1),
%% Uncomment to examine the Env trees.
%% Y1 = {lists:sort(gb_trees:to_list(element(1, Env1))),
%% lists:sort(gb_trees:to_list(element(2, Env1))),
%% element(3, Env1)},
%% io:format(user, "\nY1 ~p\n", [Y1]),
Negative_after_R1 =
[
{host, "localhost", "foo", "foo", []}, % dupe host
{flu, 'f1', "other", PortBase+9999999, []}, % bogus port # (syntax)
{flu, 'f1', "other", PortBase+888, []}, % dupe flu name
{flu, 'f7', "localhost", PortBase+1, []}, % dupe host+port
{chain, 'ca', ['f7'], []}, % unknown flu
{chain, 'cc', ['f0'], []}, % flu previously assigned
{chain, 'ca', cp_mode, ['f0', 'f1', 'f2'], [], []} % mode change
],
[begin
%% io:format(user, "dbg: Neg ~p\n", [Neg]),
{error, _} = machi_lifecycle_mgr:run_ast(R1 ++ [Neg])
end || Neg <- Negative_after_R1],
%% The 'run' phase doesn't blow smoke. What about 'diff'?
{X1a, X1b} = machi_lifecycle_mgr:diff_env(Env1, "localhost"),
%% There's only one host, "localhost", so 'all' should be exactly equal.
{X1a, X1b} = machi_lifecycle_mgr:diff_env(Env1, all),
%% io:format(user, "X1b: ~p\n", [X1b]),
%% Append to the R1 scenario: for chain cc: add f5, remove f4
%% Expect: see pattern matching below on X2b.
R2 = (R1 -- [switch_old_and_new]) ++
[switch_old_and_new,
{flu, 'f5', "localhost", PortBase+5, []},
{chain, 'cc', ['f3','f5'], []}],
{ok, Env2} = machi_lifecycle_mgr:run_ast(R2),
{_X2a, X2b} = machi_lifecycle_mgr:diff_env(Env2, "localhost"),
%% io:format(user, "X2b: ~p\n", [X2b]),
F5_port = PortBase+5,
[#p_srvr{name='f5',address="localhost",port=F5_port},
#chain_def_v1{name='cc',
full=[#p_srvr{name='f3'},#p_srvr{name='f5'}], witnesses=[],
old_full=[f3,f4], old_witnesses=[],
local_run=[f5], local_stop=[f4]}] = X2b,
ok.
ast_then_apply_test_() ->
{timeout, 60, fun() -> ast_then_apply_test2() end}.
ast_then_apply_test2() ->
YoCleanup = setup(),
try
PortBase = 20400,
NumChains = 4,
ChainLen = 3,
FLU_num = NumChains * ChainLen,
FLU_defs = [{flu, list_to_atom("f"++integer_to_list(X)),
"localhost", PortBase+X, []} || X <- lists:seq(1,FLU_num)],
FLU_names = [FLU || {flu,FLU,_,_,_} <- FLU_defs],
Ch_defs = [{chain, list_to_atom("c"++integer_to_list(X)),
lists:sublist(FLU_names, X, 3),
[]} || X <- lists:seq(1, FLU_num, 3)],
R1 = [switch_old_and_new,
{host, "localhost", "localhost", "localhost", []}]
++ FLU_defs ++ Ch_defs,
{ok, Env1} = machi_lifecycle_mgr:run_ast(R1),
{_X1a, X1b} = machi_lifecycle_mgr:diff_env(Env1, "localhost"),
%% io:format(user, "X1b ~p\n", [X1b]),
[machi_lifecycle_mgr:make_pending_config(X) || X <- X1b],
{PassFLUs, PassChains} = machi_lifecycle_mgr:process_pending(),
true = (length(PassFLUs) == length(FLU_defs)),
true = (length(PassChains) == length(Ch_defs)),
%% Kick the chain managers into doing something useful right now.
Pstores = [list_to_atom(atom_to_list(X) ++ "_pstore") || X <- FLU_names],
Fits = [list_to_atom(atom_to_list(X) ++ "_fitness") || X <- FLU_names],
ChMgrs = [list_to_atom(atom_to_list(X) ++ "_chmgr") || X <- FLU_names],
Advance = machi_chain_manager1_test:make_advance_fun(
Fits, FLU_names, ChMgrs, 3),
Advance(),
%% Sanity check: everyone is configured properly.
[begin
{ok, #projection_v1{epoch_number=Epoch, all_members=All,
chain_name=ChainName, upi=UPI}} =
machi_projection_store:read_latest_projection(PStore, private),
%% io:format(user, "~p: epoch ~p all ~p\n", [PStore, Epoch, All]),
true = Epoch > 0,
ChainLen = length(All),
true = (length(UPI) > 0),
{chain, _, Full, []} = lists:keyfind(ChainName, 2, Ch_defs),
true = lists:sort(Full) == lists:sort(All)
end || PStore <- Pstores],
ok
after
cleanup(YoCleanup)
end.
-endif. % !PULSE
-endif. % TEST


@ -1,200 +0,0 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License. You may obtain
%% a copy of the License at
%%
%% http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------
-module(machi_merkle_tree_test).
-compile([export_all]).
-include("machi_merkle_tree.hrl").
-include_lib("eunit/include/eunit.hrl").
-include_lib("kernel/include/file.hrl").
-define(GAP_CHANCE, 0.10).
%% unit tests
basic_test() ->
random:seed(os:timestamp()),
Fsz = choose_size() * 1024,
Filesize = max(Fsz, 10*1024*1024),
ChunkSize = max(1048576, Filesize div 100),
N = make_leaf_nodes(Filesize),
D0 = #naive{ leaves = N, chunk_size = ChunkSize, recalc = true },
T1 = machi_merkle_tree:build_tree(D0),
D1 = #naive{ leaves = tl(N), chunk_size = ChunkSize, recalc = true },
T2 = machi_merkle_tree:build_tree(D1),
?assertNotEqual(T1#naive.root, T2#naive.root),
?assertEqual(true, length(machi_merkle_tree:naive_diff(T1, T2)) == 1
orelse
Filesize > ChunkSize).
make_leaf_nodes(Filesize) ->
lists:reverse(
lists:foldl(fun(T, Acc) -> machi_merkle_tree:update_acc(T, Acc) end,
[],
generate_offsets(Filesize, 1024, []))
).
choose_int(Factor) ->
random:uniform(1024*Factor).
small_int() ->
choose_int(10).
medium_int() ->
choose_int(1024).
large_int() ->
choose_int(4096).
generate_offsets(Filesize, Current, Acc) when Current < Filesize ->
Length0 = choose_size(),
Length = case Length0 + Current > Filesize of
false -> Length0;
true -> Filesize - Current
end,
Data = term_to_binary(os:timestamp()),
Checksum = machi_util:make_tagged_csum(client_sha, machi_util:checksum_chunk(Data)),
Gap = maybe_gap(random:uniform()),
generate_offsets(Filesize, Current + Length + Gap, [ {Current, Length, Checksum} | Acc ]);
generate_offsets(_Filesize, _Current, Acc) ->
lists:reverse(Acc).
random_from_list(L) ->
N = random:uniform(length(L)),
lists:nth(N, L).
choose_size() ->
F = random_from_list([fun small_int/0, fun medium_int/0, fun large_int/0]),
F().
maybe_gap(Chance) when Chance < ?GAP_CHANCE ->
choose_size();
maybe_gap(_) -> 0.
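%% With ?GAP_CHANCE = 0.10, roughly one generated chunk in ten is
%% followed by a hole of random size, so the resulting offset lists
%% exercise sparse file layouts as well as densely packed ones.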
%% Define or remove these ifdefs if benchmarking is desired.
-ifdef(BENCH).
generate_offsets(FH, Filesize, Current, Acc) when Current < Filesize ->
Length0 = choose_size(),
Length = case Length0 + Current > Filesize of
false -> Length0;
true -> Filesize - Current
end,
{ok, Data} = file:pread(FH, Current, Length),
Checksum = machi_util:make_tagged_csum(client_sha, machi_util:checksum_chunk(Data)),
Gap = maybe_gap(random:uniform()),
generate_offsets(FH, Filesize, Current + Length + Gap, [ {Current, Length, Checksum} | Acc ]);
generate_offsets(_FH, _Filesize, _Current, Acc) ->
lists:reverse(Acc).
make_offsets_from_file(Filename) ->
{ok, Info} = file:read_file_info(Filename),
Filesize = Info#file_info.size,
{ok, FH} = file:open(Filename, [read, raw, binary]),
Offsets = generate_offsets(FH, Filesize, 1024, []),
file:close(FH),
Offsets.
choose_filename() ->
random_from_list([
"def^c5ea7511-d649-47d6-a8c3-2b619379c237^1",
"jkl^b077eff7-b2be-4773-a73f-fea4acb8a732^1",
"stu^553fa47a-157c-4fac-b10f-2252c7d8c37a^1",
"vwx^ae015d68-7689-4c9f-9677-926c6664f513^1",
"yza^4c784dc2-19bf-4ac6-91f6-58bbe5aa88e0^1"
]).
make_csum_file(DataDir, Filename, Offsets) ->
Path = machi_util:make_checksum_filename(DataDir, Filename),
filelib:ensure_dir(Path),
{ok, MC} = machi_csum_table:open(Path, []),
lists:foreach(fun({Offset, Size, Checksum}) ->
machi_csum_table:write(MC, Offset, Size, Checksum) end,
Offsets),
machi_csum_table:close(MC).
test() ->
test(100).
test(N) ->
{ok, F} = file:open("results.txt", [raw, write]),
lists:foreach(fun(X) -> format_and_store(F, run_test(X)) end, lists:seq(1, N)).
format_and_store(F, {OffsetNum, {MTime, MSize}, {NTime, NSize}}) ->
S = io_lib:format("~w\t~w\t~w\t~w\t~w\n", [OffsetNum, MTime, MSize, NTime, NSize]),
ok = file:write(F, S).
run_test(C) ->
random:seed(os:timestamp()),
OffsetFn = "test/" ++ choose_filename(),
O = make_offsets_from_file(OffsetFn),
Fn = "csum_" ++ integer_to_list(C),
make_csum_file(".", Fn, O),
Osize = length(O),
{MTime, {ok, M}} = timer:tc(fun() -> machi_merkle_tree:open(Fn, ".", merklet) end),
{NTime, {ok, N}} = timer:tc(fun() -> machi_merkle_tree:open(Fn, ".", naive) end),
?assertEqual(Fn, machi_merkle_tree:filename(M)),
?assertEqual(Fn, machi_merkle_tree:filename(N)),
MTree = machi_merkle_tree:tree(M),
MSize = byte_size(term_to_binary(MTree)),
NTree = machi_merkle_tree:tree(N),
NSize = byte_size(term_to_binary(NTree)),
?assertEqual(same, machi_merkle_tree:diff(N, N)),
?assertEqual(same, machi_merkle_tree:diff(M, M)),
{Osize, {MTime, MSize}, {NTime, NSize}}.
torture_test(C) ->
Results = [ run_torture_test() || _ <- lists:seq(1, C) ],
{ok, F} = file:open("torture_results.txt", [raw, write]),
lists:foreach(fun({MSize, MTime, NSize, NTime}) ->
file:write(F, io_lib:format("~p\t~p\t~p\t~p\n",
[MSize, MTime, NSize, NTime]))
end, Results),
ok = file:close(F).
run_torture_test() ->
    %% NOTE: the original dropped the line binding {MTime, M}; since
    %% this block only compiles under -ifdef(BENCH), the bug could go
    %% unnoticed. A merklet-based counterpart to naive_torture/0 is
    %% assumed here; see the hypothetical sketch below.
    {MTime, M} = timer:tc(fun() -> merklet_torture() end),
    {NTime, N} = timer:tc(fun() -> naive_torture() end),
    MSize = byte_size(term_to_binary(M)),
    NSize = byte_size(term_to_binary(N)),
    {MSize, MTime, NSize, NTime}.
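%% Hypothetical sketch of the assumed merklet side (merklet_torture/0
%% is not in the original source): fold the same generator into a
%% merklet tree, starting from merklet's empty tree, `undefined`.
merklet_torture() ->
    lists:foldl(fun({O, L, Csum}, Acc) ->
                        merklet:insert({term_to_binary({O, L}), Csum}, Acc)
                end, undefined, torture_generator()).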
naive_torture() ->
N = lists:foldl(fun(T, Acc) -> machi_merkle_tree:update_acc(T, Acc) end, [], torture_generator()),
T = #naive{ leaves = lists:reverse(N), chunk_size = 10010, recalc = true },
machi_merkle_tree:build_tree(T).
torture_generator() ->
[ {O, 1, crypto:hash(sha, term_to_binary(now()))} || O <- lists:seq(1024, 1000000) ].
-endif. % BENCH


@ -24,7 +24,6 @@
-ifdef(TEST). -ifdef(TEST).
-ifndef(PULSE). -ifndef(PULSE).
-include("machi.hrl").
-include("machi_pb.hrl"). -include("machi_pb.hrl").
-include("machi_projection.hrl"). -include("machi_projection.hrl").
-include_lib("eunit/include/eunit.hrl"). -include_lib("eunit/include/eunit.hrl").
@ -35,15 +34,20 @@ smoke_test_() ->
{timeout, 5*60, fun() -> smoke_test2() end}. {timeout, 5*60, fun() -> smoke_test2() end}.
smoke_test2() -> smoke_test2() ->
PortBase = 5720, Port = 5720,
ok = application:set_env(machi, max_file_size, 1024*1024), Ps = [#p_srvr{name=a, address="localhost", port=Port, props="./data.a"}
try ],
{Ps, MgrNames, Dirs} = machi_test_util:start_flu_packages(
1, PortBase, "./data.", []),
D = orddict:from_list([{P#p_srvr.name, P} || P <- Ps]), D = orddict:from_list([{P#p_srvr.name, P} || P <- Ps]),
M0 = hd(MgrNames),
ok = machi_chain_manager1:set_chain_members(M0, D), [os:cmd("rm -rf " ++ P#p_srvr.props) || P <- Ps],
[machi_chain_manager1:trigger_react_to_env(M0) || _ <-lists:seq(1,5)], {ok, SupPid} = machi_flu_sup:start_link(),
try
[begin
#p_srvr{name=Name, port=Port, props=Dir} = P,
{ok, _} = machi_flu_psup:start_flu_package(Name, Port, Dir, [])
end || P <- Ps],
ok = machi_chain_manager1:set_chain_members(a_chmgr, D),
[machi_chain_manager1:trigger_react_to_env(a_chmgr) || _ <-lists:seq(1,5)],
{ok, Clnt} = ?C:start_link(Ps), {ok, Clnt} = ?C:start_link(Ps),
try try
@ -56,18 +60,16 @@ smoke_test2() ->
%% a separate test module? Or separate test func? %% a separate test module? Or separate test func?
{error, _} = ?C:auth(Clnt, "foo", "bar"), {error, _} = ?C:auth(Clnt, "foo", "bar"),
PK = <<>>,
Prefix = <<"prefix">>, Prefix = <<"prefix">>,
Chunk1 = <<"Hello, chunk!">>, Chunk1 = <<"Hello, chunk!">>,
NS = "",
NoCSum = <<>>,
Opts1 = #append_opts{},
{ok, {Off1, Size1, File1}} = {ok, {Off1, Size1, File1}} =
?C:append_chunk(Clnt, NS, Prefix, Chunk1, NoCSum, Opts1), ?C:append_chunk(Clnt, PK, Prefix, Chunk1, none, 0),
true = is_binary(File1), true = is_binary(File1),
Chunk2 = "It's another chunk", Chunk2 = "It's another chunk",
CSum2 = {client_sha, machi_util:checksum_chunk(Chunk2)}, CSum2 = {client_sha, machi_util:checksum_chunk(Chunk2)},
{ok, {Off2, Size2, File2}} = {ok, {Off2, Size2, File2}} =
?C:append_chunk(Clnt, NS, Prefix, Chunk2, CSum2, Opts1), ?C:append_chunk(Clnt, PK, Prefix, Chunk2, CSum2, 1024),
Chunk3 = ["This is a ", <<"test,">>, 32, [["Hello, world!"]]], Chunk3 = ["This is a ", <<"test,">>, 32, [["Hello, world!"]]],
File3 = File2, File3 = File2,
Off3 = Off2 + iolist_size(Chunk2), Off3 = Off2 + iolist_size(Chunk2),
@ -78,9 +80,7 @@ smoke_test2() ->
{iolist_to_binary(Chunk2), File2, Off2, Size2}, {iolist_to_binary(Chunk2), File2, Off2, Size2},
{iolist_to_binary(Chunk3), File3, Off3, Size3}], {iolist_to_binary(Chunk3), File3, Off3, Size3}],
[begin [begin
File = Fl, {ok, Ch} = ?C:read_chunk(Clnt, Fl, Off, Sz)
?assertMatch({ok, {[{File, Off, Ch, _}], []}},
?C:read_chunk(Clnt, Fl, Off, Sz, undefined))
end || {Ch, Fl, Off, Sz} <- Reads], end || {Ch, Fl, Off, Sz} <- Reads],
{ok, KludgeBin} = ?C:checksum_list(Clnt, File1), {ok, KludgeBin} = ?C:checksum_list(Clnt, File1),
@ -88,56 +88,25 @@ smoke_test2() ->
{ok, [{File1Size,File1}]} = ?C:list_files(Clnt), {ok, [{File1Size,File1}]} = ?C:list_files(Clnt),
true = is_integer(File1Size), true = is_integer(File1Size),
File1Bin = binary_to_list(File1),
[begin [begin
#p_srvr{name=Name, props=Props} = P, %% ok = ?C:trim_chunk(Clnt, Fl, Off, Sz)
Dir = proplists:get_value(data_dir, Props), %% This gets an error as the trim API is still a stub
?assertEqual({ok, [File1Bin]}, ?assertMatch({bummer,
file:list_dir(filename:join([Dir, "data"]))), {throw,
FileListFileName = filename:join([Dir, "known_files_" ++ atom_to_list(Name)]), {error, bad_joss_taipan_fixme},
{ok, Plist} = machi_plist:open(FileListFileName, []), _Boring_stack_trace}},
?assertEqual([], machi_plist:all(Plist)) ?C:trim_chunk(Clnt, Fl, Off, Sz))
end || P <- Ps], end || {Ch, Fl, Off, Sz} <- Reads],
[begin
ok = ?C:trim_chunk(Clnt, Fl, Off, Sz)
end || {_Ch, Fl, Off, Sz} <- Reads],
[begin
{ok, {[], Trimmed}} =
?C:read_chunk(Clnt, Fl, Off, Sz, #read_opts{needs_trimmed=true}),
Filename = Fl,
?assertEqual([{Filename, Off, Sz}], Trimmed)
end || {_Ch, Fl, Off, Sz} <- Reads],
LargeBytes = binary:copy(<<"x">>, 1024*1024),
LBCsum = {client_sha, machi_util:checksum_chunk(LargeBytes)},
{ok, {Offx, Sizex, Filex}} =
?C:append_chunk(Clnt, NS,
Prefix, LargeBytes, LBCsum, Opts1),
ok = ?C:trim_chunk(Clnt, Filex, Offx, Sizex),
%% Make sure everything was trimmed
File = binary_to_list(Filex),
[begin
#p_srvr{name=Name, props=Props} = P,
Dir = proplists:get_value(data_dir, Props),
?assertEqual({ok, []},
file:list_dir(filename:join([Dir, "data"]))),
FileListFileName = filename:join([Dir, "known_files_" ++ atom_to_list(Name)]),
{ok, Plist} = machi_plist:open(FileListFileName, []),
?assertEqual([File], machi_plist:all(Plist))
end || P <- Ps],
[begin
{error, trimmed} =
?C:read_chunk(Clnt, Fl, Off, Sz, undefined)
end || {_Ch, Fl, Off, Sz} <- Reads],
ok ok
after after
(catch ?C:quit(Clnt)) (catch ?C:quit(Clnt))
end end
after after
machi_test_util:stop_flu_packages() exit(SupPid, normal),
[os:cmd("rm -rf " ++ P#p_srvr.props) || P <- Ps],
machi_util:wait_for_death(SupPid, 100),
ok
end. end.
-endif. % !PULSE -endif. % !PULSE
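For orientation, the master side of the test above threads a namespace, a client checksum, and an #append_opts{} record through every high-level client call, where this branch passed a placeholder key and a bare integer of extra bytes. Below is a minimal write/read round trip in the master style, as a sketch only: the module name machi_pb_high_client is an assumption based on the test's ?C macro, and Ps is assumed to describe a running cluster as in smoke_test2().

```erlang
%% Sketch only: API shapes are taken from the master branch of
%% smoke_test2() above; machi_pb_high_client is assumed to be what
%% the test's ?C macro expands to.
demo_round_trip(Ps) ->
    {ok, Clnt} = machi_pb_high_client:start_link(Ps),
    try
        NS = "",                       % default namespace
        NoCSum = <<>>,                 % let the server compute the checksum
        Opts = #append_opts{},         % defaults: no chunk_extra, etc.
        Chunk = <<"Hello, chunk!">>,
        {ok, {Off, Size, File}} =
            machi_pb_high_client:append_chunk(Clnt, NS, <<"prefix">>,
                                              Chunk, NoCSum, Opts),
        %% Reads return {ok, {Chunks, TrimmedRanges}}.
        {ok, {[{File, Off, Chunk, _CSum}], []}} =
            machi_pb_high_client:read_chunk(Clnt, File, Off, Size, undefined),
        ok
    after
        (catch machi_pb_high_client:quit(Clnt))
    end.
```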
@@ -1,17 +0,0 @@
--module(machi_plist_test).
-
--include_lib("eunit/include/eunit.hrl").
-
-open_close_test() ->
-    FileName = "bark-bark-one",
-    file:delete(FileName),
-    {ok, PList0} = machi_plist:open(FileName, []),
-    {ok, PList1} = machi_plist:add(PList0, "boomar"),
-    ?assertEqual(["boomar"], machi_plist:all(PList1)),
-    ok = machi_plist:close(PList1),
-    {ok, PList2} = machi_plist:open(FileName, []),
-    ?assertEqual(["boomar"], machi_plist:all(PList2)),
-    ok = machi_plist:close(PList2),
-    file:delete(FileName),
-    ok.
@@ -33,7 +33,7 @@ smoke_test() ->
     Dir = "./data.a",
     Os = [{ignore_stability_time, true}, {active_mode, false}],
     os:cmd("rm -rf " ++ Dir),
-    machi_test_util:start_flu_package(a, PortBase, "./data.a", Os),
+    machi_flu1_test:start_flu_package(a, PortBase, "./data.a", Os),
     try
         P1 = machi_projection:new(1, a, [], [], [], [], []),
@@ -58,7 +58,7 @@ smoke_test() ->
         ok
     after
-        machi_test_util:stop_flu_package()
+        machi_flu1_test:stop_flu_package(a)
     end.
 -endif. % !PULSE
@@ -32,55 +32,46 @@
 api_smoke_test() ->
     RegName = api_smoke_flu,
-    TcpPort = 17124,
+    Host = "localhost",
+    TcpPort = 57124,
     DataDir = "./data.api_smoke_flu",
     W_props = [{active_mode, false},{initial_wedged, false}],
     Prefix = <<"prefix">>,
-    NSInfo = undefined,
-    NoCSum = <<>>,
+    machi_flu1_test:start_flu_package(RegName, TcpPort, DataDir, W_props),
     try
-        {[I], _, _} = machi_test_util:start_flu_package(
-                        RegName, TcpPort, DataDir, W_props),
+        I = #p_srvr{name=RegName, address=Host, port=TcpPort},
         {ok, Prox1} = ?MUT:start_link(I),
         try
             FakeEpoch = ?DUMMY_PV1_EPOCH,
-            [{ok, {_,_,_}} = ?MUT:append_chunk(
-                               Prox1, NSInfo, FakeEpoch,
-                               Prefix, <<"data">>, NoCSum) ||
-                _ <- lists:seq(1,5)],
+            [{ok, {_,_,_}} = ?MUT:append_chunk(Prox1,
+                               FakeEpoch, Prefix, <<"data">>,
+                               infinity) || _ <- lists:seq(1,5)],
             %% Stop the FLU, what happens?
-            machi_test_util:stop_flu_package(),
-            [{error,partition} = ?MUT:append_chunk(Prox1, NSInfo,
-                                   FakeEpoch, Prefix, <<"data-stopped1">>,
-                                   NoCSum) || _ <- lists:seq(1,3)],
+            machi_flu1_test:stop_flu_package(RegName),
+            [{error,partition} = ?MUT:append_chunk(Prox1,
+                                   FakeEpoch, Prefix, <<"data-stopped1">>,
+                                   infinity) || _ <- lists:seq(1,3)],
             %% Start the FLU again, we should be able to do stuff immediately
-            machi_test_util:start_flu_package(RegName, TcpPort, DataDir,
-                                              [no_cleanup|W_props]),
+            machi_flu1_test:start_flu_package(RegName, TcpPort, DataDir,
                                               [save_data_dir|W_props]),
             MyChunk = <<"my chunk data">>,
             {ok, {MyOff,MySize,MyFile}} =
-                ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix, MyChunk,
-                                  NoCSum),
-            {ok, {[{_, MyOff, MyChunk, _MyChunkCSUM}], []}} =
-                ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile, MyOff, MySize, undefined),
-            MyChunk2_parts = [<<"my chunk ">>, "data", <<", yeah, again">>],
-            MyChunk2 = iolist_to_binary(MyChunk2_parts),
-            Opts1 = #append_opts{chunk_extra=4242},
-            {ok, {MyOff2,MySize2,MyFile2}} =
-                ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch, Prefix,
-                                  MyChunk2_parts, NoCSum, Opts1, infinity),
-            [{ok, {[{_, MyOff2, MyChunk2, _}], []}} =
-                 ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch, MyFile2, MyOff2, MySize2, DefaultOptions) ||
-                DefaultOptions <- [undefined, noopt, none, any_atom_at_all] ],
-            BadCSum = {?CSUM_TAG_CLIENT_SHA, crypto:hash(sha, "...................")},
-            {error, bad_checksum} = ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch,
-                                                      Prefix, MyChunk, BadCSum),
-            {error, bad_checksum} = ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch,
-                                                     MyFile2,
-                                                     MyOff2 + size(MyChunk2),
-                                                     MyChunk, BadCSum,
-                                                     infinity),
+                ?MUT:append_chunk(Prox1, FakeEpoch, Prefix, MyChunk,
+                                  infinity),
+            {ok, MyChunk} = ?MUT:read_chunk(Prox1, FakeEpoch, MyFile, MyOff, MySize),
+            MyChunk2 = <<"my chunk data, yeah, again">>,
+            {ok, {MyOff2,MySize2,MyFile2}} =
+                ?MUT:append_chunk_extra(Prox1, FakeEpoch, Prefix,
+                                        MyChunk2, 4242, infinity),
+            {ok, MyChunk2} = ?MUT:read_chunk(Prox1, FakeEpoch, MyFile2, MyOff2, MySize2),
+            MyChunk_badcs = {<<?CSUM_TAG_CLIENT_SHA:8, 0:(8*20)>>, MyChunk},
+            {error, bad_checksum} = ?MUT:append_chunk(Prox1, FakeEpoch,
+                                                      Prefix, MyChunk_badcs),
+            {error, bad_checksum} = ?MUT:write_chunk(Prox1, FakeEpoch,
+                                                     <<"foo-file^1^1">>, 99832,
+                                                     MyChunk_badcs),
             %% Put kick_projection_reaction() in the middle of the test so
             %% that any problems with its async nature will (hopefully)
@@ -89,9 +80,9 @@ api_smoke_test() ->
             %% Alright, now for the rest of the API, whee
             BadFile = <<"no-such-file">>,
-            {error, bad_arg} = ?MUT:checksum_list(Prox1, BadFile),
+            {error, bad_arg} = ?MUT:checksum_list(Prox1, FakeEpoch, BadFile),
             {ok, [_|_]} = ?MUT:list_files(Prox1, FakeEpoch),
-            {ok, {false, _,_,_}} = ?MUT:wedge_status(Prox1),
+            {ok, {false, _}} = ?MUT:wedge_status(Prox1),
             {ok, {0, _SomeCSum}} = ?MUT:get_latest_epochid(Prox1, public),
             {ok, #projection_v1{epoch_number=0}} =
                 ?MUT:read_latest_projection(Prox1, public),
@@ -109,30 +100,27 @@ api_smoke_test() ->
             _ = (catch ?MUT:quit(Prox1))
         end
     after
-        (catch machi_test_util:stop_flu_package())
+        (catch machi_flu1_test:stop_flu_package(RegName))
     end.
 
-flu_restart_test_() ->
-    {timeout, 1*60, fun() -> flu_restart_test2() end}.
-
-flu_restart_test2() ->
+flu_restart_test() ->
     RegName = a,
-    TcpPort = 17125,
+    Host = "localhost",
+    TcpPort = 57125,
     DataDir = "./data.api_smoke_flu2",
     W_props = [{initial_wedged, false}, {active_mode, false}],
-    NSInfo = undefined,
-    NoCSum = <<>>,
+    machi_flu1_test:start_flu_package(RegName, TcpPort, DataDir, W_props),
     try
-        {[I], _, _} = machi_test_util:start_flu_package(
-                        RegName, TcpPort, DataDir, W_props),
+        I = #p_srvr{name=RegName, address=Host, port=TcpPort},
         {ok, Prox1} = ?MUT:start_link(I),
         try
             FakeEpoch = ?DUMMY_PV1_EPOCH,
             Data = <<"data!">>,
             Dataxx = <<"Fake!">>,
-            {ok, {Off1,Size1,File1}} = ?MUT:append_chunk(Prox1, NSInfo,
-                            FakeEpoch, <<"prefix">>, Data, NoCSum),
+            {ok, {Off1,Size1,File1}} = ?MUT:append_chunk(Prox1,
+                            FakeEpoch, <<"prefix">>, Data,
+                            infinity),
             P_a = #p_srvr{name=a, address="localhost", port=6622},
             P1 = machi_projection:new(1, RegName, [P_a], [], [RegName], [], []),
             P1xx = P1#projection_v1{dbg2=["dbg2 changes are ok"]},
@@ -144,7 +132,7 @@ flu_restart_test2() ->
             {ok, EpochID} = ?MUT:get_epoch_id(Prox1),
             {ok, EpochID} = ?MUT:get_latest_epochid(Prox1, public),
             {ok, EpochID} = ?MUT:get_latest_epochid(Prox1, private),
-            ok = machi_test_util:stop_flu_package(), timer:sleep(50),
+            ok = machi_flu1_test:stop_flu_package(RegName), timer:sleep(50),
             %% Now that the last proxy op was successful and only
             %% after did we stop the FLU, let's check that both the
@@ -156,10 +144,9 @@ flu_restart_test2() ->
             %% makes the code a bit convoluted.  (No LFE or
             %% Elixir macros here, alas, they'd be useful.)
-            AppendOpts1 = #append_opts{chunk_extra=42},
             ExpectedOps =
                 [
-                 fun(run) -> ?assertEqual({ok, EpochID}, ?MUT:get_epoch_id(Prox1)),
+                 fun(run) -> {ok, EpochID} = ?MUT:get_epoch_id(Prox1),
                              ok;
                     (line) -> io:format("line ~p, ", [?LINE]);
                     (stop) -> ?MUT:get_epoch_id(Prox1) end,
@@ -238,37 +225,35 @@ flu_restart_test2() ->
                     (stop) -> ?MUT:get_all_projections(Prox1, private)
                  end,
                  fun(run) -> {ok, {_,_,_}} =
-                                 ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch,
-                                                   <<"prefix">>, Data, NoCSum),
+                                 ?MUT:append_chunk(Prox1, FakeEpoch,
+                                                   <<"prefix">>, Data, infinity),
                             ok;
                     (line) -> io:format("line ~p, ", [?LINE]);
-                    (stop) -> ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch,
-                                                <<"prefix">>, Data, NoCSum)
+                    (stop) -> ?MUT:append_chunk(Prox1, FakeEpoch,
+                                                <<"prefix">>, Data, infinity)
                  end,
                  fun(run) -> {ok, {_,_,_}} =
-                                 ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch,
-                                                   <<"prefix">>, Data, NoCSum,
-                                                   AppendOpts1, infinity),
+                                 ?MUT:append_chunk_extra(Prox1, FakeEpoch,
+                                                         <<"prefix">>, Data, 42, infinity),
                             ok;
                     (line) -> io:format("line ~p, ", [?LINE]);
-                    (stop) -> ?MUT:append_chunk(Prox1, NSInfo, FakeEpoch,
-                                                <<"prefix">>, Data, NoCSum,
-                                                AppendOpts1, infinity)
+                    (stop) -> ?MUT:append_chunk_extra(Prox1, FakeEpoch,
+                                                      <<"prefix">>, Data, 42, infinity)
                  end,
-                 fun(run) -> {ok, {[{_, Off1, Data, _}], []}} =
-                                 ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch,
-                                                 File1, Off1, Size1, undefined),
+                 fun(run) -> {ok, Data} =
+                                 ?MUT:read_chunk(Prox1, FakeEpoch,
+                                                 File1, Off1, Size1),
                             ok;
                     (line) -> io:format("line ~p, ", [?LINE]);
-                    (stop) -> ?MUT:read_chunk(Prox1, NSInfo, FakeEpoch,
-                                              File1, Off1, Size1, undefined)
+                    (stop) -> ?MUT:read_chunk(Prox1, FakeEpoch,
+                                              File1, Off1, Size1)
                  end,
                  fun(run) -> {ok, KludgeBin} =
-                                 ?MUT:checksum_list(Prox1, File1),
+                                 ?MUT:checksum_list(Prox1, FakeEpoch, File1),
                             true = is_binary(KludgeBin),
                             ok;
                     (line) -> io:format("line ~p, ", [?LINE]);
-                    (stop) -> ?MUT:checksum_list(Prox1, File1)
+                    (stop) -> ?MUT:checksum_list(Prox1, FakeEpoch, File1)
                  end,
                  fun(run) -> {ok, _} =
                                  ?MUT:list_files(Prox1, FakeEpoch),
@@ -284,32 +269,32 @@ flu_restart_test2() ->
                  end,
                  fun(run) ->
                          ok =
-                             ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1,
-                                              Data, NoCSum, infinity),
+                             ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1,
+                                              Data, infinity),
                         ok;
                     (line) -> io:format("line ~p, ", [?LINE]);
-                    (stop) -> ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1,
-                                               Data, NoCSum, infinity)
+                    (stop) -> ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1,
+                                               Data, infinity)
                  end,
                  fun(run) ->
                          {error, written} =
-                             ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1,
-                                              Dataxx, NoCSum, infinity),
+                             ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1,
+                                              Dataxx, infinity),
                         ok;
                     (line) -> io:format("line ~p, ", [?LINE]);
-                    (stop) -> ?MUT:write_chunk(Prox1, NSInfo, FakeEpoch, File1, Off1,
-                                               Dataxx, NoCSum, infinity)
+                    (stop) -> ?MUT:write_chunk(Prox1, FakeEpoch, File1, Off1,
+                                               Dataxx, infinity)
                  end
                 ],
             [begin
-                 machi_test_util:start_flu_package(
+                 machi_flu1_test:start_flu_package(
                      RegName, TcpPort, DataDir,
-                     [no_cleanup|W_props]),
+                     [save_data_dir|W_props]),
                  _ = Fun(line),
                  ok = Fun(run),
                  ok = Fun(run),
-                 ok = machi_test_util:stop_flu_package(),
+                 ok = machi_flu1_test:stop_flu_package(RegName),
                  {error, partition} = Fun(stop),
                  {error, partition} = Fun(stop),
                  ok
@@ -319,7 +304,7 @@ flu_restart_test2() ->
            _ = (catch ?MUT:quit(Prox1))
         end
     after
-        (catch machi_test_util:stop_flu_package())
+        (catch machi_flu1_test:stop_flu_package(RegName))
     end.
 -endif. % !PULSE
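Both tests above exercise the proxy's main property: the proxy process outlives FLU restarts, returning {error, partition} while its FLU is down and reconnecting transparently on a later call. Below is a retry loop in the master-side calling convention, as a sketch only; machi_proxy_flu1_client is assumed to be what the tests' ?MUT macro expands to, and the give-up limit and sleep interval are illustrative choices.

```erlang
%% Sketch only: illustrates the reconnect behavior that
%% flu_restart_test2() verifies above. The retry count and sleep
%% interval are hypothetical, not part of the test suite.
append_with_retry(_Prox, _EpochID, _Prefix, _Chunk, 0) ->
    {error, partition};
append_with_retry(Prox, EpochID, Prefix, Chunk, RetriesLeft) ->
    NSInfo = undefined,
    NoCSum = <<>>,
    case machi_proxy_flu1_client:append_chunk(Prox, NSInfo, EpochID,
                                              Prefix, Chunk, NoCSum) of
        {ok, {_Off, _Size, _File}} = OK ->
            OK;
        {error, partition} ->
            %% The proxy stays alive across FLU restarts and will
            %% reconnect on the next call, so wait briefly and retry.
            timer:sleep(100),
            append_with_retry(Prox, EpochID, Prefix, Chunk, RetriesLeft - 1)
    end.
```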
@@ -1,111 +0,0 @@
-%% -------------------------------------------------------------------
-%%
-%% Copyright (c) 2007-2015 Basho Technologies, Inc. All Rights Reserved.
-%%
-%% This file is provided to you under the Apache License,
-%% Version 2.0 (the "License"); you may not use this file
-%% except in compliance with the License. You may obtain
-%% a copy of the License at
-%%
-%%   http://www.apache.org/licenses/LICENSE-2.0
-%%
-%% Unless required by applicable law or agreed to in writing,
-%% software distributed under the License is distributed on an
-%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-%% KIND, either express or implied. See the License for the
-%% specific language governing permissions and limitations
-%% under the License.
-%%
-%% -------------------------------------------------------------------
-
--module(machi_test_util).
--compile(export_all).
-
--ifdef(TEST).
--ifndef(PULSE).
-
--include_lib("eunit/include/eunit.hrl").
-
--include("machi.hrl").
--include("machi_projection.hrl").
-
--define(FLU, machi_flu1).
--define(FLU_C, machi_flu1_client).
-
--spec start_flu_package(atom(), inet:port_number(), string()) ->
-                            {Ps::[#p_srvr{}], MgrNames::[atom()], Dirs::[string()]}.
-start_flu_package(FluName, TcpPort, DataDir) ->
-    start_flu_package(FluName, TcpPort, DataDir, []).
-
--spec start_flu_package(atom(), inet:port_number(), string(), list()) ->
-                            {Ps::[#p_srvr{}], MgrNames::[atom()], Dirs::[string()]}.
-start_flu_package(FluName, TcpPort, DataDir, Props) ->
-    MgrName = machi_flu_psup:make_mgr_supname(FluName),
-    FluInfo = [{#p_srvr{name=FluName, address="localhost", port=TcpPort,
-                        props=[{chmgr, MgrName}, {data_dir, DataDir} | Props]},
-                DataDir, MgrName}],
-    start_flu_packages(FluInfo).
-
--spec start_flu_packages(pos_integer(), inet:port_number(), string(), list()) ->
-                             {Ps::[#p_srvr{}], MgrNames::[atom()], Dirs::[string()]}.
-start_flu_packages(FluCount, BaseTcpPort, DirPrefix, Props) ->
-    FluInfo = flu_info(FluCount, BaseTcpPort, DirPrefix, Props),
-    start_flu_packages(FluInfo).
-
-start_flu_packages(FluInfo) ->
-    _ = stop_machi_sup(),
-    clean_up(FluInfo),
-    {ok, _SupPid} = machi_sup:start_link(),
-    [{ok, _} = machi_flu_psup:start_flu_package(Name, Port, Dir, Props) ||
-        {#p_srvr{name=Name, port=Port, props=Props}, Dir, _} <- FluInfo],
-    {Ps, Dirs, MgrNames} = lists:unzip3(FluInfo),
-    {Ps, MgrNames, Dirs}.
-
-stop_flu_package() ->
-    stop_flu_packages().
-
-stop_flu_packages() ->
-    stop_machi_sup().
-
-flu_info(FluCount, BaseTcpPort, DirPrefix, Props) ->
-    [begin
-         FLUNameStr = [$a + I - 1],
-         FLUName = list_to_atom(FLUNameStr),
-         MgrName = machi_flu_psup:make_mgr_supname(FLUName),
-         DataDir = DirPrefix ++ "/data.eqc." ++ FLUNameStr,
-         {#p_srvr{name=FLUName, address="localhost", port=BaseTcpPort + I,
-                  props=[{chmgr, MgrName}, {data_dir, DataDir} | Props]},
-          DataDir, MgrName}
-     end || I <- lists:seq(1, FluCount)].
-
-stop_machi_sup() ->
-    case whereis(machi_sup) of
-        undefined -> ok;
-        Pid ->
-            catch exit(whereis(machi_sup), normal),
-            machi_util:wait_for_death(Pid, 100)
-    end.
-
-clean_up(FluInfo) ->
-    _ = [begin
-             case proplists:get_value(no_cleanup, Props) of
-                 true -> ok;
-                 _ ->
-                     _ = machi_flu1:stop(FLUName),
-                     clean_up_dir(Dir)
-             end
-         end || {#p_srvr{name=FLUName, props=Props}, Dir, _} <- FluInfo],
-    ok.
-
-clean_up_dir(Dir) ->
-    [begin
-         Fs = filelib:wildcard(Dir ++ Glob),
-         [file:delete(F) || F <- Fs],
-         [file:del_dir(F) || F <- Fs]
-     end || Glob <- ["*/*/*/*", "*/*/*", "*/*", "*"] ],
-    _ = file:del_dir(Dir),
-    ok.
-
--endif. % !PULSE
--endif. % TEST
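On master, the module above centralizes the FLU fixture setup and teardown that this branch still performs per test via machi_flu1_test. Typical use, sketched against the API defined above; the FLU count, base port, and directory prefix are illustrative values:

```erlang
%% Sketch only: drives the machi_test_util API shown above.
three_flu_fixture_demo() ->
    {Ps, MgrNames, _Dirs} =
        machi_test_util:start_flu_packages(3, 20100, "./data.demo.", []),
    M0 = hd(MgrNames),
    try
        D = orddict:from_list([{P#p_srvr.name, P} || P <- Ps]),
        ok = machi_chain_manager1:set_chain_members(M0, D),
        [machi_chain_manager1:trigger_react_to_env(M0) || _ <- lists:seq(1, 5)],
        %% ... exercise the chain through any client here ...
        ok
    after
        machi_test_util:stop_flu_packages()
    end.
```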
tools.mk
@@ -1,176 +0,0 @@
-# -------------------------------------------------------------------
-#
-# Copyright (c) 2014 Basho Technologies, Inc.
-#
-# This file is provided to you under the Apache License,
-# Version 2.0 (the "License"); you may not use this file
-# except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-# -------------------------------------------------------------------
-
-# -------------------------------------------------------------------
-# NOTE: This file is is from https://github.com/basho/tools.mk.
-# It should not be edited in a project. It should simply be updated
-# wholesale when a new version of tools.mk is released.
-# -------------------------------------------------------------------
-
-REBAR ?= ./rebar
-REVISION ?= $(shell git rev-parse --short HEAD)
-PROJECT ?= $(shell basename `find src -name "*.app.src"` .app.src)
-EUNIT_OPTS ?=
-
-.PHONY: compile-no-deps test docs xref dialyzer-run dialyzer-quick dialyzer \
-	cleanplt upload-docs
-
-compile-no-deps:
-	${REBAR} compile skip_deps=true
-
-test: compile
-	${REBAR} ${EUNIT_OPTS} eunit skip_deps=true
-
-upload-docs: docs
-	@if [ -z "${BUCKET}" -o -z "${PROJECT}" -o -z "${REVISION}" ]; then \
-		echo "Set BUCKET, PROJECT, and REVISION env vars to upload docs"; \
-		exit 1; fi
-	@cd doc; s3cmd put -P * "s3://${BUCKET}/${PROJECT}/${REVISION}/" > /dev/null
-	@echo "Docs built at: http://${BUCKET}.s3-website-us-east-1.amazonaws.com/${PROJECT}/${REVISION}"
-
-docs:
-	${REBAR} doc skip_deps=true
-
-xref: compile
-	${REBAR} xref skip_deps=true
-
-PLT ?= $(HOME)/.combo_dialyzer_plt
-LOCAL_PLT = .local_dialyzer_plt
-DIALYZER_FLAGS ?= -Wunmatched_returns
-NATIVE_EBIN ?= ./.ebin.native
-DIALYZER_BIN ?= dialyzer
-# Always include -pa arg in DIALYZER_CMD for speed
-DIALYZER_CMD ?= $(DIALYZER_BIN) -pa $(NATIVE_EBIN)
-DIALYZER_VERSION = $(shell $(DIALYZER_BIN) --version | sed 's/.* //')
-ERL_LIB_DIR = $(shell erl -eval '{io:format("~s\n", [code:lib_dir()]), erlang:halt(0)}.' | tail -1)
-
-native-ebin:
-	mkdir -p $(NATIVE_EBIN)
-	rm -f $(NATIVE_EBIN)/*.erl $(NATIVE_EBIN)/*.hrl $(NATIVE_EBIN)/*.beam
-	@for mod in lists dict digraph digraph_utils ets gb_sets gb_trees ordsets sets sofs; do \
-	    cp $(ERL_LIB_DIR)/stdlib-*/src/"$$mod".erl $(NATIVE_EBIN); \
-	done
-	@for mod in cerl cerl_trees core_parse; do \
-	    cp $(ERL_LIB_DIR)/compiler-*/src/"$$mod".?rl $(NATIVE_EBIN); \
-	done
-	@for mod in dialyzer_analysis_callgraph dialyzer dialyzer_behaviours dialyzer_codeserver dialyzer_contracts dialyzer_coordinator dialyzer_dataflow dialyzer_dep dialyzer_plt dialyzer_succ_typings dialyzer_typesig dialyzer_worker; do \
-	    cp $(ERL_LIB_DIR)/dialyzer-*/src/"$$mod".?rl $(NATIVE_EBIN); \
-	done
-	@for mod in erl_types erl_bif_types; do \
-	    cp $(ERL_LIB_DIR)/hipe-*/*/"$$mod".?rl $(NATIVE_EBIN); \
-	done
-	erlc -o $(NATIVE_EBIN) -smp +native -DVSN='"$(DIALYZER_VERSION)"' $(NATIVE_EBIN)/*erl
-
-${PLT}: compile
-	@mkdir -p $(NATIVE_EBIN)
-	@if [ -f $(PLT) ]; then \
-		$(DIALYZER_CMD) --check_plt --plt $(PLT) --apps $(DIALYZER_APPS) && \
-		$(DIALYZER_CMD) --add_to_plt --plt $(PLT) --output_plt $(PLT) --apps $(DIALYZER_APPS) ; test $$? -ne 1; \
-	else \
-		$(DIALYZER_CMD) --build_plt --output_plt $(PLT) --apps $(DIALYZER_APPS); test $$? -ne 1; \
-	fi
-
-${LOCAL_PLT}: compile
-	@mkdir -p $(NATIVE_EBIN)
-	@if [ -d deps ]; then \
-		if [ -f $(LOCAL_PLT) ]; then \
-			$(DIALYZER_CMD) --check_plt --plt $(LOCAL_PLT) deps/*/ebin && \
-			$(DIALYZER_CMD) --add_to_plt --plt $(LOCAL_PLT) --output_plt $(LOCAL_PLT) deps/*/ebin ; test $$? -ne 1; \
-		else \
-			$(DIALYZER_CMD) --build_plt --output_plt $(LOCAL_PLT) deps/*/ebin ; test $$? -ne 1; \
-		fi \
-	fi
-
-dialyzer-run:
-	@mkdir -p $(NATIVE_EBIN)
-	@echo "==> $(shell basename $(shell pwd)) (dialyzer)"
-# The bulk of the code below deals with the dialyzer.ignore-warnings file
-# which contains strings to ignore if output by dialyzer.
-# Typically the strings include line numbers. Using them exactly is hard
-# to maintain as the code changes. This approach instead ignores the line
-# numbers, but takes into account the number of times a string is listed
-# for a given file. So if one string is listed once, for example, and it
-# appears twice in the warnings, the user is alerted. It is possible but
-# unlikely that this approach could mask a warning if one ignored warning
-# is removed and two warnings of the same kind appear in the file, for
-# example. But it is a trade-off that seems worth it.
-# Details of the cryptic commands:
-#   - Remove line numbers from dialyzer.ignore-warnings
-#   - Pre-pend duplicate count to each warning with sort | uniq -c
-#   - Remove annoying white space around duplicate count
-#   - Save in dialyer.ignore-warnings.tmp
-#   - Do the same to dialyzer_warnings
-#   - Remove matches from dialyzer.ignore-warnings.tmp from output
-#   - Remove duplicate count
-#   - Escape regex special chars to use lines as regex patterns
-#   - Add pattern to match any line number (file.erl:\d+:)
-#   - Anchor to match the entire line (^entire line$)
-#   - Save in dialyzer_unhandled_warnings
-#   - Output matches for those patterns found in the original warnings
-	@if [ -f $(LOCAL_PLT) ]; then \
-		PLTS="$(PLT) $(LOCAL_PLT)"; \
-	else \
-		PLTS=$(PLT); \
-	fi; \
-	if [ -f dialyzer.ignore-warnings ]; then \
-		if [ $$(grep -cvE '[^[:space:]]' dialyzer.ignore-warnings) -ne 0 ]; then \
-			echo "ERROR: dialyzer.ignore-warnings contains a blank/empty line, this will match all messages!"; \
-			exit 1; \
-		fi; \
-		$(DIALYZER_CMD) $(DIALYZER_FLAGS) --plts $${PLTS} -c ebin > dialyzer_warnings ; \
-		cat dialyzer.ignore-warnings \
-		| sed -E 's/^([^:]+:)[^:]+:/\1/' \
-		| sort \
-		| uniq -c \
-		| sed -E '/.*\.erl: /!s/^[[:space:]]*[0-9]+[[:space:]]*//' \
-		> dialyzer.ignore-warnings.tmp ; \
-		egrep -v "^[[:space:]]*(done|Checking|Proceeding|Compiling)" dialyzer_warnings \
-		| sed -E 's/^([^:]+:)[^:]+:/\1/' \
-		| sort \
-		| uniq -c \
-		| sed -E '/.*\.erl: /!s/^[[:space:]]*[0-9]+[[:space:]]*//' \
-		| grep -F -f dialyzer.ignore-warnings.tmp -v \
-		| sed -E 's/^[[:space:]]*[0-9]+[[:space:]]*//' \
-		| sed -E 's/([]\^:+?|()*.$${}\[])/\\\1/g' \
-		| sed -E 's/(\\\.erl\\\:)/\1[[:digit:]]+:/g' \
-		| sed -E 's/^(.*)$$/^[[:space:]]*\1$$/g' \
-		> dialyzer_unhandled_warnings ; \
-		rm dialyzer.ignore-warnings.tmp; \
-		if [ $$(cat dialyzer_unhandled_warnings | egrep -v 'Unknown functions\\:' | wc -l) -gt 0 ]; then \
-			egrep -f dialyzer_unhandled_warnings dialyzer_warnings ; \
-			found_warnings=1; \
-		fi; \
-		[ "$$found_warnings" != 1 ] ; \
-	else \
-		$(DIALYZER_CMD) $(DIALYZER_FLAGS) --plts $${PLTS} -c ebin; \
-	fi
-
-dialyzer-quick: compile-no-deps dialyzer-run
-
-dialyzer: ${PLT} ${LOCAL_PLT} dialyzer-run
-
-cleanplt:
-	@echo
-	@echo "Are you sure? It takes several minutes to re-build."
-	@echo Deleting $(PLT) and $(LOCAL_PLT) in 5 seconds.
-	@echo
-	sleep 5
-	rm $(PLT)
-	rm $(LOCAL_PLT)
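The long comment in the dialyzer-run recipe describes count-aware ignore matching: line numbers are stripped from both files, each distinct warning string is counted, and a warning is reported only when it occurs more often than dialyzer.ignore-warnings lists it. The same logic expressed in Erlang, as a sketch; this helper module is purely illustrative and is not part of tools.mk:

```erlang
%% Sketch only: count-aware warning filtering, mirroring the
%% sort | uniq -c pipeline in the dialyzer-run recipe above.
-module(dialyzer_filter_demo).
-export([unhandled/2]).

unhandled(WarningsFile, IgnoreFile) ->
    W = count_lines(WarningsFile),
    I = count_lines(IgnoreFile),
    %% Keep a warning only if it appears more times than it is ignored.
    [Line || {Line, N} <- maps:to_list(W), N > maps:get(Line, I, 0)].

count_lines(File) ->
    {ok, Bin} = file:read_file(File),
    Lines = [strip_lineno(L)
             || L <- binary:split(Bin, <<"\n">>, [global, trim])],
    lists:foldl(fun(L, Acc) ->
                        maps:update_with(L, fun(N) -> N + 1 end, 1, Acc)
                end, #{}, Lines).

strip_lineno(Line) ->
    %% Same normalization as the recipe's sed 's/^([^:]+:)[^:]+:/\1/':
    %% "file.erl:123: msg" becomes "file.erl: msg".
    re:replace(Line, "^([^:]+:)[^:]+:", "\\1", [{return, binary}]).
```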