Re: dm-raid: set discard_granularity non-zero if possible

On Wed, Dec 16 2020 at  2:53pm -0500,
Stephan Bärwolf <stephan@xxxxxxxxxxxxxxx> wrote:

> Hi
> 
> I hope this address is the right place for this patch.
> It is supposed to fix the triggering of the block/blk-lib.c:51 WARN_ON_ONCE(..) when using LVM2 raid1 with SSD-PVs.
> Since commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f, and without this patch, there are tons of printks logging "Error: discard_granularity is 0." to kmsg.
> Also there is no discard/TRIM happening anymore...
> 
> This is a rough patch for the WARNING issue
> 
> "block/blk-lib.c:51 __blkdev_issue_discard+0x1f6/0x250"
> [...] "Error: discard_granularity is 0." [...]
> introduced in commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f
> ("block: check queue's limits.discard_granularity in __blkdev_issue_discard()")
> 
> in conjunction with LVM2 raid1 volumes on discardable (SSD) backing.
> It seems until now, LVM-raid1 reported "discard_granularity" as 0,
> as well as "max_discard_sectors" as 0. (see "lsblk --discard").
> 
> The idea here is to fix the issue by calculating "max_discard_sectors"
> as the minimum over all involved block devices. (We use the meta-data
> for this to work here.)
> For calculating the "discard_granularity" we would have to calculate the
> lcm (least common multiple) of all discard_granularities of all involved
> block devices and finally round up to next power of 2.
> 
> However, since all "discard_granularity" are powers of 2, this algorithm
> will simplify to just determining the maximum and filtering for "0"-cases.
> 
> Signed-off-by: Stephan Baerwolf <stephan@xxxxxxxxxxxxxxx>
> ---
> drivers/md/dm-raid.c | 32 ++++++++++++++++++++++++++++++--
> 1 file changed, 30 insertions(+), 2 deletions(-)
> 
> 
> 

> diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
> index 8d2b835d7a10..4c769fd93ced 100644
> --- a/drivers/md/dm-raid.c
> +++ b/drivers/md/dm-raid.c
> @@ -3734,8 +3734,36 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  	 * RAID0/4/5/6 don't and process large discard bios properly.
>  	 */
>  	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
> -		limits->discard_granularity = chunk_size_bytes;
> -		limits->max_discard_sectors = rs->md.chunk_sectors;

The above should be: if (rs_is_raid0(rs) || rs_is_raid10(rs)) {

And this was previously fixed with commit e0910c8e4f87bb9 but later
reverted due to various late MD discard reverts at the end of the 5.10
release.

So all said, I think the proper fix (without all sorts of
open-coding to get limits to properly stack) is to change
raid_io_hints()'s rs_is_raid1() call to rs_is_raid0().
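
In code, that one-liner amounts to roughly the following (a sketch against
the hunk quoted above, not the patch that will actually be queued):

-	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
+	if (rs_is_raid0(rs) || rs_is_raid10(rs)) {
 		limits->discard_granularity = chunk_size_bytes;
 		limits->max_discard_sectors = rs->md.chunk_sectors;
 	}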

I'll get a fix queued up.

Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel




