Jerker Nyberg wrote:
Writes are slow (since a client needs to write to all servers, but
perhaps it is possible to stack the afrs on the server side and let
the servers do the replicating when writing... Hmmm...)
Are you using write-behind?
Yes. But it was something else I had in mind. I'll try to explain.
When reading from many nodes, an afr-translator on the client with, say,
4 subvolumes is fine, since the data may be read directly from a single
node (which may stripe across two or more internal drives) and would fill
up 1 Gbit/s to the client.
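For concreteness, the read side could be specced roughly like this on
the client (host names, the exported "brick" name and exact option
spellings are my own examples and vary between glusterfs releases):

  volume node1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick  # the posix volume exported by server1
  end-volume

  # node2, node3 and node4 are defined the same way,
  # pointing at server2, server3 and server4

  volume afr-read
    type cluster/afr
    subvolumes node1 node2 node3 node4
  end-volume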
But a single client may only write at 1000/8/4 = 31.25 MByte/s, since
all the data will be sent four times, once to each node.
So my thought was simply whether it is possible to combine the write
performance of a server-based afr-translator with the read performance
of a client-based afr-translator. I guess mounting the same files twice,
once through a file system with a server-based AFR-translator (for
writes) and once with a client-based AFR-translator (for reads), would
give good throughput.
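Something like this server-side spec on server1 is what I have in mind
for the write path (again only a sketch; remote2..remote4 would be
protocol/client volumes reaching the bricks on the other servers, and
option names are approximate):

  volume brick
    type storage/posix
    option directory /data/export
  end-volume

  # remote2, remote3 and remote4: protocol/client volumes
  # pointing at the bricks on server2, server3 and server4

  volume afr-write
    type cluster/afr
    subvolumes brick remote2 remote3 remote4
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.afr-write.allow *
    option auth.ip.brick.allow *
    subvolumes afr-write brick
  end-volume

The client would then mount afr-write from one server for writing, and
mount its own client-side afr over the four raw bricks for reading.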
Have you tried chaining AFR volumes? There are quite a few ways I can
imagine reducing line saturation if that's the problem. Here's one:
server1 has an afr of a local volume and a volume from server2
server3 has an afr of a local volume and a volume from server4
client afr of server1's afr and server3's afr
This should allow a single write from a client at 1000/8/2 = 62.5
MByte/s. The client writes only twice (to server1 and server3), halving
the bandwidth it uses. Server1 and server3 each write only once, to
server2 and server4 respectively, on top of receiving the writes from
the client, halving their bandwidth as well.
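As a rough sketch of those specs (names invented, option spellings
approximate), server1 would export something like:

  volume local
    type storage/posix
    option directory /data/export
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume brick  # server2's plain exported brick
  end-volume

  volume afr1
    type cluster/afr
    subvolumes local remote2
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.afr1.allow *
    subvolumes afr1
  end-volume

server3 exports an analogous afr2 built from its local brick and
server4's brick, and the client mounts an afr over the two exported
afrs:

  volume chain1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume afr1
  end-volume

  volume chain2
    type protocol/client
    option transport-type tcp/client
    option remote-host server3
    option remote-subvolume afr2
  end-volume

  volume afr
    type cluster/afr
    subvolumes chain1 chain2
  end-volume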
Note: I have no idea how well this will perform in reality. There may
be enough lag in glusterfs chaining writes that the gains aren't worth
it, but since it is effectively pipelining the writes along, I suspect
there won't be too much lag.
--
-Kevan Benson
-A-1 Networks