Re: afr logic


 



Hans Einar Gautun wrote:
Hi, and thanks for all the good and tested tricks and facts ;)

On Tue, 2007-10-16 at 12:59 -0700, Kevan Benson wrote:
When AFR encounters a file that exists on multiple shares without the trusted.afr.version attribute set, it sets that attribute on all the copies and assumes they contain the same data.

I.e. if you manually create the files directly on the servers with different content, appending to the file through the client will set trusted.afr.version on both copies and append to both, but the copies still contain different data (the content from before the append).
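A rough sketch of that reproduction (all paths are placeholders: /data/export stands for each server's backend export directory, /mnt/glusterfs for the client mount):

```shell
# On server A's backend directory (bypassing the glusterfs client):
echo "version A" > /data/export/testfile

# On server B's backend directory, same name, different content:
echo "version B" > /data/export/testfile

# Neither copy has trusted.afr.version yet. Now append via the client:
echo "appended" >> /mnt/glusterfs/testfile

# AFR stamps trusted.afr.version on both copies and appends to both,
# but the bytes written before the append still differ per server.
```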

Now, this would be really hard to hit outside this contrived example; it would probably require a write to fail on all AFR subvolumes, possibly at different points in the write operation, in which case the file content can't be trusted anyway, so it's really not a big deal.

If I don't misunderstand this: I use AFR for mirroring and HA. One side goes
down - maintenance or (worse) HW crash. Some parts of some files
somewhere are written, but maybe not all. When the server comes back,
AFR self-heal can't really replicate the file, but will just append some
of it? Or am I misunderstanding this?

Slightly. When glusterfs finishes writing a file to an AFR subvolume, it sets the trusted.afr.version attribute to track when files need to be updated, which copy is the most current, etc. The case I described is specific to the attribute not being set on any AFR subvolume, which would require a write to fail on all of them for some reason. In that case, if a file with the same name exists on both AFR subvolumes without the trusted.afr.version attribute, glusterfs doesn't sync the files on a read; it just sets an initial trusted.afr.version on both files and assumes they are the same.
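For reference, the attribute can be inspected with getfattr from the attr package (a sketch; the path is a placeholder, and trusted.* attributes are only visible to root on the server):

```shell
# Run as root on a server, against the backend copy of the file,
# dumping any trusted.afr.* attributes in hex:
getfattr -d -m trusted.afr -e hex /data/export/testfile
```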

The only realistic case I can think of that would trigger this is a complete failure of all AFR subvolumes. The only other case where people might run into this without realizing it is pre-populating the AFR subvolumes before going live (for example, using rsync to copy the active share that glusterfs is replacing). That case can be worked around with a find that sets the trusted.afr.version attribute on all the files in the pre-populated subvolume that holds the newest data.
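That workaround could look something like this (a sketch, not tested: /data/export is a placeholder for the pre-populated backend directory, the version value of 1 is an assumption about the format, and trusted.* attributes can only be set by root on the server):

```shell
# Stamp an initial trusted.afr.version on every pre-populated file so
# AFR treats this subvolume's copies as versioned, rather than assuming
# they already match the copies on the other subvolume.
find /data/export -type f -exec setfattr -n trusted.afr.version -v 1 {} \;
```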

It's really a non-issue; I just figured I'd mention it so people are aware in case they run into it, and in case the glusterfs team decides this isn't the preferred behavior for AFR when it encounters files on subvolumes that have no version-tracking information.

--

-Kevan Benson
-A-1 Networks



