Re: afr logic

Oops, my English...

The question is: in your terms I have RAID 0+1 (RAID 10 is the trivial
case), i.e. afr over stripe. If one brick is repaired, does its complement
have to be repaired too? I have some doubts about how the stripe algorithm
slices files: if it depends on parameters that are not equal across the
replicas (e.g. load average), then the complement has to be copied too,
even if the stripe configuration is the same for both replicas. Even if it
is so, I want to hear it explicitly from the glusterfs team, with a promise
not to change the policy!
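
To make the two layouts concrete, here is a rough sketch in the volume-spec
style of that era (a sketch only: the volume and brick names are invented
and all options are left at defaults, so check it against your release's
docs before copying anything):

  # RAID 10 (stripe over afr): build mirror pairs first,
  # then stripe across the pairs.
  volume mirror-a
    type cluster/afr
    subvolumes brick1 brick2
  end-volume

  volume mirror-b
    type cluster/afr
    subvolumes brick3 brick4
  end-volume

  volume striped-mirrors
    type cluster/stripe
    subvolumes mirror-a mirror-b
  end-volume

  # RAID 0+1 (afr over stripe), the case in question: build stripe
  # sets first, then mirror one whole stripe set onto the other.
  volume stripe-a
    type cluster/stripe
    subvolumes brick1 brick2
  end-volume

  volume stripe-b
    type cluster/stripe
    subvolumes brick3 brick4
  end-volume

  volume mirrored-stripes
    type cluster/afr
    subvolumes stripe-a stripe-b
  end-volume

In the 0+1 case the units being mirrored are whole stripe sets, not single
bricks, which is why the question about the complement arises.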

Regards, Alexey.

On 10/17/07, Kevan Benson <kbenson@xxxxxxxxxxxxxxx> wrote:
>
> Alexey Filin wrote:
> > On 10/17/07, Kevan Benson <kbenson@xxxxxxxxxxxxxxx> wrote:
> >
> >
> >> The rsync case can probably be handled by a separate pass that finds
> >> the appropriate attributes on the source and sets them on the target.
> >> A simple bash/perl script could handle this in a few lines (first
> >> sketch after the quote).
> >>
> >> The fsck case is more interesting, but if you could get fsck to report
> >> the file/directory names that have problems without fixing them, it's
> >> easy to pipe that list to a script that removes the trusted.afr.version
> >> attribute from those files, and then AFR will heal them itself (second
> >> sketch after the quote).
> >>
> >
> >
> > I didn't check, maybe you know: in that case, does the second, healthy
> > pair in cluster/stripe (if two bricks are used to stripe) have to be
> > copied too? (Of course, the afr'ed volumes use the same underlying
> > cluster/stripe configuration.)
> >
>
> It probably has to do with whether you stripe an afr or afr some
> stripes.  Think RAID 10 compared to RAID 0+1.
>
> --
>
> -Kevan Benson
> -A-1 Networks
>
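
For the rsync case Kevan describes above, a minimal sketch of the attribute
copy (the paths and dump file name are placeholders; glusterfs keeps its
metadata in trusted.* extended attributes, so this has to run as root on
the servers):

  #!/bin/bash
  # Dump every trusted.* xattr under the source tree, then replay the
  # dump onto the target tree. getfattr -d output is in the format that
  # setfattr --restore expects; -e hex keeps binary values intact.
  cd /data/src && getfattr -R -d -m '^trusted\.' -e hex . > /tmp/xattrs.dump
  cd /data/dst && setfattr --restore=/tmp/xattrs.dump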
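
And for the fsck case, a sketch of the heal trigger (it assumes you have
already collected the damaged paths, one per line, in damaged.txt; whether
the healthy stripe complement needs the same treatment is exactly the open
question above):

  #!/bin/bash
  # Remove trusted.afr.version from each damaged file so afr treats the
  # copy as stale and re-copies it from the healthy replica on access.
  while IFS= read -r path; do
      setfattr -x trusted.afr.version "$path"
  done < damaged.txt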

