[PATCH] dm-raid: set discard_granularity non-zero if possible

Hi

I hope this address is the right place for this patch.
It is supposed to fix the triggering of the WARN_ON_ONCE() in block/blk-lib.c:51 when using LVM2 raid1 with SSD PVs.
Since commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f, and without this patch, the kernel log is flooded with "Error: discard_granularity is 0." messages.
Also, no discard/TRIM happens on these volumes anymore.

This is a rough patch for the warning

"block/blk-lib.c:51 __blkdev_issue_discard+0x1f6/0x250"
[...] "Error: discard_granularity is 0." [...]

introduced in commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f
("block: check queue's limits.discard_granularity in __blkdev_issue_discard()"),
which triggers in conjunction with LVM2 raid1 volumes on discardable (SSD) backing.
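
For reference, the check that now fires is roughly of the following
shape (a paraphrased sketch of __blkdev_issue_discard() in
block/blk-lib.c; not a verbatim quote, details may differ slightly):

	/* Paraphrased sketch, not the exact upstream code */
	if (WARN_ON_ONCE(!q->limits.discard_granularity)) {
		char dev_name[BDEVNAME_SIZE];

		bdevname(bdev, dev_name);
		pr_err_ratelimited("%s: Error: discard_granularity is 0.\n",
				   dev_name);
		return -EOPNOTSUPP;
	}

So a device whose queue reports discard_granularity == 0 now fails
every discard with -EOPNOTSUPP, which matches the observed behavior
above.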
It seems that until now, LVM raid1 reported "discard_granularity" as 0,
as well as "max_discard_sectors" as 0 (see "lsblk --discard").
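
For illustration, the zeros show up like this before the patch
(hypothetical device name; only the output shape of "lsblk --discard"
is meant here):

	# lsblk --discard /dev/mapper/vg0-lv_data
	NAME         DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
	vg0-lv_data         0        0B       0B         0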

The idea here is to fix the issue by calculating "max_discard_sectors"
as the minimum over all involved block devices. (The patch walks the
per-disk metadata devices to reach the underlying queues.)
For calculating the "discard_granularity" we would, in general, have to
take the lcm (least common multiple) of the discard granularities of all
involved block devices and round the result up to the next power of 2.

However, since every "discard_granularity" is a power of 2, this
simplifies to taking the maximum, while treating any device that
reports 0 as forcing the overall result to 0.
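
To see why, here is a small standalone illustration (plain userspace C,
hypothetical and not part of the patch): for powers of 2, lcm(a, b) is
simply max(a, b), and a 0 operand forces the combined result to 0.

	#include <stdio.h>

	/* Illustrative helpers only; the kernel has its own lcm()/gcd(). */
	static unsigned int gcd(unsigned int a, unsigned int b)
	{
		while (b) {
			unsigned int t = a % b;

			a = b;
			b = t;
		}
		return a;
	}

	static unsigned int lcm(unsigned int a, unsigned int b)
	{
		return (a && b) ? (a / gcd(a, b)) * b : 0;
	}

	int main(void)
	{
		/* Example per-device discard granularities (powers of 2). */
		unsigned int grans[] = { 512, 4096, 512 };
		unsigned int combined = grans[0];
		unsigned int i;

		for (i = 1; i < sizeof(grans) / sizeof(grans[0]); i++)
			combined = lcm(combined, grans[i]);

		/* Prints 4096 - the maximum; already a power of 2, so no
		 * rounding is needed. A 0 anywhere would yield 0. */
		printf("combined discard_granularity = %u\n", combined);
		return 0;
	}

A 0 from any device propagates through the whole computation, which is
what the "lcm(x, y, ..., 0) = 0" comment in the patch below refers to.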

Signed-off-by: Stephan Baerwolf <stephan@xxxxxxxxxxxxxxx>
---
drivers/md/dm-raid.c | 32 ++++++++++++++++++++++++++++++--
1 file changed, 30 insertions(+), 2 deletions(-)



diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 8d2b835d7a10..4c769fd93ced 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3734,8 +3734,36 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	 * RAID0/4/5/6 don't and process large discard bios properly.
 	 */
 	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
-		limits->discard_granularity = chunk_size_bytes;
-		limits->max_discard_sectors = rs->md.chunk_sectors;
+		/* HACK */
+		if (chunk_size_bytes == 0) {
+			unsigned int i, chunk_sectors = UINT_MAX >> SECTOR_SHIFT;
+			struct request_queue *q = NULL;
+
+			DMINFO("chunk_size is 0 for raid1 - preventing issue with TRIM");
+
+			for (i = 0; i < rs->raid_disks; i++) {
+				q = bdev_get_queue(rs->dev[i].meta_dev->bdev);
+				if (chunk_sectors > q->limits.max_discard_sectors) {
+					chunk_sectors = q->limits.max_discard_sectors;
+				}
+				if (chunk_size_bytes < q->limits.discard_granularity) {
+					chunk_size_bytes = q->limits.discard_granularity;
+				}
+
+				/* lcm(x, y, ..., 0) = 0 */
+				if (q->limits.discard_granularity == 0) {
+					chunk_size_bytes = 0;
+					break;
+				}
+			}
+
+			limits->discard_granularity = chunk_size_bytes;
+			limits->max_discard_sectors = chunk_sectors;
+		/* end of HACK (but not of the if) */
+		} else {
+			limits->discard_granularity = chunk_size_bytes;
+			limits->max_discard_sectors = rs->md.chunk_sectors;
+		}
 	}
 }
 

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
