RE: [PATCH] i/o errors with dm-over-md-raid0

> -----Original Message-----
> From: dm-devel-bounces@xxxxxxxxxx [mailto:dm-devel-bounces@xxxxxxxxxx] On
> Behalf Of Mikulas Patocka
> Sent: Monday, May 11, 2009 11:36 AM
> To: Alasdair G Kergon; Milan Broz
> Cc: dm-devel@xxxxxxxxxx
> Subject:  [PATCH] i/o errors with dm-over-md-raid0
> 
> Hi
> 
> This is an upstream patch for
> https://bugzilla.redhat.com/show_bug.cgi?id=223947
> 
> The RHEL-5 patch is in the bugzilla; it is different, but it has the same
> functionality.
> 
> Milan, if you have time, could you (or someone else in the Brno lab)
> please try to reproduce the bug, then apply the patch and verify that it
> fixes it?
> 
> In short, the RHEL 5 setup is:
> * MD RAID-0
> * LVM on top of it
> * one of the logical volumes (a linear volume) is exported to a Xen domU
> * inside the Xen domU it is partitioned; the key point is that the
> partition must not be aligned to a page boundary (fdisk normally starts
> the partition at sector 63, which triggers it)
> * install the system on the partitioned disk in domU -> I/O failures in
> dom0
> 
> In the upstream kernel there are some merge changes, so the bug should no
> longer happen with linear volumes, but you should be able to reproduce it
> with some other dm target --- dm-raid1, dm-snapshot (with a chunk size
> larger than the RAID-0 stripe) or dm-stripe (with a stripe size larger
> than the RAID-0 stripe).
> 
> Mikulas
> 
> ---
> 
> Explanation of the bug and fix:
> (https://bugzilla.redhat.com/show_bug.cgi?id=223947)
> 
> In the Linux bio architecture, it is the caller's responsibility not to
> create a bio that is too large for the block device driver it is
> submitted to.
> 
> There are several ways in which bio size can be limited:
> - q->max_hw_sectors is the upper limit on the total number of sectors.
> - q->max_phys_segments and q->max_hw_segments limit the number of
>   consecutive segments (before and after IOMMU merging).
> - q->max_segment_size and q->seg_boundary_mask determine how much data
>   fits in a segment and at which points segment boundaries are enforced
>   (because some hardware has limitations on the entries in its
>   scatter-gather table).
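
To make the limits in the quoted list concrete, here is a minimal sketch of
how a low-level block driver of that era might declare them with the
blk_queue_*() helpers from <linux/blkdev.h>. The function name and the
numeric limits below are made up purely for illustration:

#include <linux/blkdev.h>

/* Hypothetical queue setup for an imaginary driver; the name and the
 * numeric limits are illustrative, not taken from any real hardware. */
static void example_init_queue_limits(struct request_queue *q)
{
        /* q->max_sectors / q->max_hw_sectors: upper limit on the total
         * size of one request, in 512-byte sectors (here 128 KiB). */
        blk_queue_max_sectors(q, 256);

        /* q->max_phys_segments / q->max_hw_segments: how many
         * consecutive segments are allowed before and after IOMMU
         * merging. */
        blk_queue_max_phys_segments(q, 128);
        blk_queue_max_hw_segments(q, 128);

        /* q->max_segment_size: how much data fits in a single
         * scatter-gather entry. */
        blk_queue_max_segment_size(q, 64 * 1024);

        /* q->seg_boundary_mask: a segment must not cross this boundary
         * (here 64 KiB), because some controllers cannot express such
         * an entry in their scatter-gather table. */
        blk_queue_segment_boundary(q, 0xffff);
}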
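
On the submitting side, the usual way to stay within those limits is to
build bios with bio_add_page(), which refuses to grow a bio beyond what the
target queue (and its merge_bvec_fn, if any) accepts. A minimal sketch,
assuming the 2.6.2x-era bio API; example_write_pages() is a hypothetical
helper, not code from dm or md:

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Write nr_pages full pages starting at 'sector' on 'bdev', splitting
 * the I/O into as many bios as the device limits require.  Assumes a
 * fresh bio can always accept at least one page. */
static void example_write_pages(struct block_device *bdev, sector_t sector,
                                struct page **pages, int nr_pages,
                                bio_end_io_t *end_io, void *private)
{
        struct bio *bio = NULL;
        int i;

        for (i = 0; i < nr_pages; i++) {
                if (!bio) {
                        bio = bio_alloc(GFP_NOIO, min_t(int, nr_pages - i,
                                                        BIO_MAX_PAGES));
                        bio->bi_bdev = bdev;
                        bio->bi_sector = sector + i * (PAGE_SIZE >> 9);
                        bio->bi_end_io = end_io;
                        bio->bi_private = private;
                }
                /* bio_add_page() checks the queue limits listed above;
                 * if the page does not fit, submit what we have and
                 * retry the same page with a fresh bio. */
                if (bio_add_page(bio, pages[i], PAGE_SIZE, 0) < PAGE_SIZE) {
                        submit_bio(WRITE, bio);
                        bio = NULL;
                        i--;
                }
        }
        if (bio)
                submit_bio(WRITE, bio);
}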

Mikulas,
 I am working on an issue (with our private multipath driver) where I see the block layer setting seg_boundary_mask for my virtual disk to -1 (0xffffffffffffffff), while for the corresponding physical disk it is set to 0xffffffff (4294967295). I was expecting this value to default to 0xffffffff for all block devices, but that is not the case. Do you expect each driver to modify these values?
-Babu Moger
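
For reference, seg_boundary_mask is one of the limits a driver can set
explicitly rather than inheriting whatever default its queue was created
with. A minimal sketch, assuming the 2.6.2x-era blk_queue_segment_boundary()
helper; the function name below is hypothetical:

#include <linux/blkdev.h>

/* Hypothetical queue setup for a driver that wants a 4 GiB segment
 * boundary (seg_boundary_mask == 0xffffffff) instead of relying on the
 * default its queue happened to get. */
static void example_set_boundary(struct request_queue *q)
{
        blk_queue_segment_boundary(q, 0xffffffffUL);
}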

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
