Re: Re; Load balancing ...

>It would certainly be beneficial in cases where the network speed is slow (e.g. WAN replication).

So long as it's server-side AFR and not client-side ..? I'm guessing there would need to be some server-side logic to ensure that local servers generated their own hashes and exchanged only the hashes over the network, rather than the data?
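To make concrete what I mean by "exchanging hashes rather than data", here's a rough sketch of the rsync-style idea as I picture it. The block size and hash choice are purely illustrative (real rsync also uses a rolling weak checksum, which this toy version skips):

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative block size, not what rsync/Gluster actually uses

def block_hashes(path):
    """Each server hashes its own local copy; only these small
    digests would cross the network, never the file data itself."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.md5(block).hexdigest())
    return hashes

def changed_blocks(local_hashes, remote_hashes):
    """Compare the two hash lists; only the blocks whose hashes
    differ would then need to be transferred between servers."""
    length = max(len(local_hashes), len(remote_hashes))
    diffs = []
    for i in range(length):
        a = local_hashes[i] if i < len(local_hashes) else None
        b = remote_hashes[i] if i < len(remote_hashes) else None
        if a != b:
            diffs.append(i)
    return diffs
```

Of course, as discussed below, both servers still have to read the whole file off disk to compute the hashes .. the saving is only on the wire.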

>Journal per se wouldn't work, because that implies fixed size and write-ahead logging. 
>What would be required here is more like the snapshot style undo logging.

A journal wouldn't work?!
You mean its effectiveness would be governed by its size?
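If I've understood the fixed-size objection correctly, the issue is that a bounded journal can only cover an outage shorter than its capacity .. once it wraps, you're back to a full resync. A toy illustration (entry count stands in for bytes, purely hypothetical):

```python
from collections import deque

class FixedJournal:
    """Toy fixed-size journal: keeps at most max_entries write records.
    If a peer stays down longer than the journal can hold, old entries
    are evicted and a fast replay is no longer possible."""

    def __init__(self, max_entries):
        self.entries = deque(maxlen=max_entries)
        self.dropped = 0  # count of records lost to wrap-around

    def record(self, offset, data):
        if len(self.entries) == self.entries.maxlen:
            self.dropped += 1  # history lost; full resync now required
        self.entries.append((offset, data))

    def can_fast_replay(self):
        return self.dropped == 0
```

So the sysadmin's trade-off would be journal size vs. the longest outage you expect to recover from quickly.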

>1) Categorically establish whether each server is connected and up to date 
>for the file being checked, and only log if the server has disconnected. 
>This involves overhead.

Surely you would log anyway, as there could easily be latency between an actual "down" and one's ability to detect it .. in which case detecting whether a server has disconnected is a moot point. In terms of the overhead of logging, I guess this would be a decision for the sysadmin concerned: whether the overhead of logging to a journal was worthwhile vs. the potential issues involved in recovering from an outage?

From my point of view, if journaling halved my write performance (which it wouldn't), I wouldn't even have to think about it.

>2) For each server that is down at the time, each other server would have 
>to start writing the snapshot style undo logs (which would have to be 
>per server) for all the files being changed. This effectively multiplies 
>the disk write-traffic by the number of offline servers on all the working 
>up to date servers.

Yes, it will without question increase write traffic and network overhead.
Certainly in some circumstances you may not want this .. but if we're talking about a journal translator .. then it's optional (!)
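Just to pin down the write-multiplication Gordan describes: as I read it, every write made while peers are offline also gets recorded once per offline peer, so local disk write traffic scales with the number of downed servers. A toy model (names and structure are mine, not Gluster's):

```python
class AfrNode:
    """Toy model of per-offline-peer logging: each application write
    is duplicated into one log per downed peer, so total bytes written
    to disk grows with the number of offline servers."""

    def __init__(self, peers):
        self.logs = {peer: [] for peer in peers}  # one log per peer
        self.offline = set()
        self.bytes_written = 0

    def write(self, offset, data):
        self.bytes_written += len(data)  # the real write to the file
        for peer in self.offline:
            # snapshot/undo-style record kept for later resync of that peer
            self.logs[peer].append((offset, data))
            self.bytes_written += len(data)
```

With two peers down, a 100-byte application write costs 300 bytes of disk traffic in this model .. which is exactly why you'd want it to be an optional translator.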

>The problem that arises then is that the fast(er) resyncs on small changes 
>come at the cost of massive slowdown in operation when you have multiple 
>downed servers. As the number of servers grows, this rapidly stops being a 
>workable solution.

Ok, I don't know about anyone else, but my setups all rely on consistency rather than peaks and troughs. I'd far rather run a journal at half potential speed, and have everything run at that speed all the time .. than occasionally have to stop the entire setup while the system recovers, or essentially wait for 5-10 minutes while the system re-syncs after a node is reloaded.

If some people can benefit from the hash/rsync recovery, that's great .. but just to spread the context a little, it would be of zero use in any of the scenarios I work with ..

(note: this isn't an "I need this for me" argument; I don't actually use Gluster as a "live" solution for anything .. however I do have half a dozen situations where I have tried to use Gluster and where it would be "nice" to use Gluster one day - once the wrinkles are out ...)

Gareth.

----- Original Message -----
From: gordan@xxxxxxxxxx
To: gluster-devel@xxxxxxxxxx
Sent: Wednesday, April 30, 2008 12:52:55 PM GMT +00:00 GMT Britain, Ireland, Portugal
Subject: Re: Re; Load balancing ...

On Wed, 30 Apr 2008, Gareth Bult wrote:

> Sorry, I'm trying to follow this but I'm coming a little unstuck ..
>
> Am I right in thinking the rolling hash / rsync solution would involve 
> syncing the file "on open" as per the current system .. and in order to 
> do this, the server would have to read through the entire file in order 
> to create the hashes?
> (indeed it would need to do this on two servers to create hashes for comparison?)

Yes.

> So .. as a rough benchmark .. assume 50Mb/sec for a standard / modern 
> SATA drive, opening a crashed 20G file is going to take 400 seconds or 
> six minutes ... ? (which would also flatten two servers for the 
> duration)

It would certainly be beneficial in cases where the network speed is 
slow (e.g. WAN replication).

> Whereas a journal replay of 10M is going to take < 1s and be effectively transparent.
> (I'm guessing this could also be done at open time ??)

Journal per se wouldn't work, because that implies fixed size and 
write-ahead logging. What would be required here is more like the 
snapshot style undo logging.

The problem with this is that you have to:

1) Categorically establish whether each server is connected and up to date 
for the file being checked, and only log if the server has disconnected. 
This involves overhead.

2) For each server that is down at the time, each other server would have 
to start writing the snapshot style undo logs (which would have to be 
per server) for all the files being changed. This effectively multiplies 
the disk write-traffic by the number of offline servers on all the working 
up to date servers.

The problem that arises then is that the fast(er) resyncs on small changes 
come at the cost of massive slowdown in operation when you have multiple 
downed servers. As the number of servers grows, this rapidly stops being a 
workable solution.

Gordan


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel



