Re: Replication delay

----- Original Message -----
> From: "Vijay Bellur" <vbellur@xxxxxxxxxx>
> To: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
> Cc: "Fabio Rosati" <fabio.rosati@xxxxxxxxxxxxxxxxx>, "Gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
> Sent: Saturday, January 25, 2014 3:32:24 PM
> Subject: Re:  Replication delay
> 
> On 01/25/2014 02:28 PM, Pranith Kumar Karampuri wrote:
> > Vijay,
> >       But it seems like self-heal's fd is able to perform 'writes'.
> >       Shouldn't it be uniform if it is the problem with xfs?
> 
> The problem is not with xfs alone. It is due to a combination of several
> factors including disk sector size, xfs sector size and the nature of
> writes being performed. With cache=none, qemu does O_DIRECT open() which
> necessitates proper alignment for write operations to happen
> successfully. Self-heal does not open() with O_DIRECT and hence write
> operations initiated by self-heal go through.

I was also guessing it could be related to O_DIRECT. Any way to fix that?
I wonder why it happens on only one of the bricks.

Pranith
> 
> -Vijay
> 
> >
> > Pranith
> > ----- Original Message -----
> >> From: "Vijay Bellur" <vbellur@xxxxxxxxxx>
> >> To: "Fabio Rosati" <fabio.rosati@xxxxxxxxxxxxxxxxx>, "Pranith Kumar
> >> Karampuri" <pkarampu@xxxxxxxxxx>
> >> Cc: "Gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
> >> Sent: Saturday, January 25, 2014 1:23:52 PM
> >> Subject: Re:  Replication delay
> >>
> >> On 01/24/2014 09:24 PM, Fabio Rosati wrote:
> >>
> >>>
> >>>
> >>> The block size is the same, 4096 bytes.
> >>> I did some further investigation and it seems the problem happens only
> >>> with VM disk images internally formatted with a block size of 1024 bytes.
> >>> There are no problems with disk images formatted with a block size of
> >>> 4096 bytes. Anyway, I don't know if this is a coincidence.
> >>>
> >>> Do you think this could be the origin of the problem? If so, how can I
> >>> solve it?
> >>> In the links posted by Vijay, someone suggests starting the VM with cache
> >>> !=
> >>> none, but this will prevent live migration, AFAIK.
> >>> Another solution may be to recreate the volume, backing it with XFS
> >>> partitions formatted with a different block size (smaller? 1024 bytes?).
> >>> That would be a painful option, but if it solves the problem, I'll go
> >>> for it.
> >>>
> >>
> >> A lower sector size (512) for xfs has been observed to be useful in
> >> overcoming this problem.
> >>
> >> Another solution might be to use logical_block_size=4096 option as
> >> referred here [1].
> >>
> >> -Vijay
> >>
> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=997839#c7
> >>
> >>
> >
> >
> 
> 
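For reference, the two workarounds Vijay mentions could look something like the fragment below. These are illustrative command sketches, not commands from the thread: the device name /dev/sdb1, the image name vm.img, and the drive id "drive0" are placeholders.

```shell
# Workaround 1: recreate the brick filesystem with 512-byte sectors.
# WARNING: mkfs destroys data on the target device; back up first.
mkfs.xfs -f -s size=512 /dev/sdb1

# Workaround 2: advertise a 4096-byte logical block size to the guest,
# as referred to in [1], so guest filesystems align their writes to 4 KiB.
qemu-system-x86_64 ... \
  -drive file=vm.img,if=none,id=drive0,cache=none \
  -device virtio-blk-pci,drive=drive0,logical_block_size=4096,physical_block_size=4096
```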
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users

