Tsukasa Morii wrote:
Let me explain a bit more.
<2> All data written to the unified volume is copied to a different
server by way of a server-side configuration file.
In other words, to save network bandwidth I don't want a file on the
unified volume to be copied to all four servers; I need just enough
redundancy that the system keeps working when one of the four servers
stops, even if it doesn't survive two of the four servers failing.
You'll need to convert this to actual volume specs; it's shorthand.
Untested, but it should give you the idea. I'm doing the unify and AFR
on the client (it's easier to represent); you could implement it fully
or partially on the server as well.
SERVERS:
storage/posix: /data/namespace # Only on SERVER_1 and SERVER_2
storage/posix: /data/share-1
storage/posix: /data/share-2
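
Expanded, the server-side spec for one of the four servers might look
roughly like this (an untested sketch in GlusterFS 1.3-style volfile
syntax; option names like transport-type and auth.ip vary between
releases, and the ns-brick volume would only be defined on SERVER_1
and SERVER_2):

volume ns-brick
  type storage/posix
  option directory /data/namespace
end-volume

volume share-1
  type storage/posix
  option directory /data/share-1
end-volume

volume share-2
  type storage/posix
  option directory /data/share-2
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes ns-brick share-1 share-2
  # "allow *" is just for illustration; restrict it to your clients
  option auth.ip.ns-brick.allow *
  option auth.ip.share-1.allow *
  option auth.ip.share-2.allow *
end-volume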
CLIENTS:
protocol/client: SERVER_1 -> ns1
protocol/client: SERVER_2 -> ns2
protocol/client: SERVER_1 -> S1
protocol/client: SERVER_2 -> S2
protocol/client: SERVER_3 -> S3
protocol/client: SERVER_4 -> S4
cluster/afr: ns1, ns2 -> ns
cluster/afr: S1, S2 -> AFR1
cluster/afr: S2, S3 -> AFR2
cluster/afr: S3, S4 -> AFR3
cluster/afr: S4, S1 -> AFR4
cluster/unify: namespace(ns) AFR1, AFR2, AFR3, AFR4
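
Spelled out, the client spec could come out something like this (also
untested; I'm assuming each server's share-1 backs one AFR pair and its
share-2 backs the neighboring pair, so that no two pairs write into the
same backend directory):

volume ns1
  type protocol/client
  option transport-type tcp/client
  option remote-host SERVER_1
  option remote-subvolume ns-brick
end-volume

# ns2 is the same but with remote-host SERVER_2

volume s1a
  type protocol/client
  option transport-type tcp/client
  option remote-host SERVER_1
  option remote-subvolume share-1
end-volume

volume s1b
  type protocol/client
  option transport-type tcp/client
  option remote-host SERVER_1
  option remote-subvolume share-2
end-volume

# s2a/s2b through s4a/s4b repeat the pattern for SERVER_2..SERVER_4

volume ns
  type cluster/afr
  subvolumes ns1 ns2
end-volume

volume afr1
  type cluster/afr
  subvolumes s1a s2b
end-volume

# afr2 = s2a s3b, afr3 = s3a s4b, afr4 = s4a s1b

volume unify0
  type cluster/unify
  option namespace ns
  option scheduler rr
  subvolumes afr1 afr2 afr3 afr4
end-volume

Each file then lives on exactly two of the four servers, so any single
server can die without losing anything; only if both members of one AFR
pair go down do that pair's files become unavailable, which matches the
redundancy you described. You'd start glusterfsd with the server spec
on each server and mount the client spec with something like
glusterfs -f client.vol /mnt/unified (check the exact invocation for
your version).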
I may have forgotten something, but I think that covers most of it.
--
-Kevan Benson
-A-1 Networks