Re: md bitmap writes random memory over disks' bitmap sectors

Hi,

On 2025/02/25 23:32, Nigel Croxon wrote:
-       md_super_write(mddev, rdev, sboff + ps, (int) size, page);
+       md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
         return 0;

This patch will still attempt to send writes larger than a page through a single page pointer when the bitmap spans multiple pages. When the bitmap cannot fit in a single page, it uses an array of pages (the filemap); those pages are allocated separately and are not guaranteed to be contiguous. So this patch keeps writes for a multi-page bitmap from trashing data beyond the bitmap, but it can still create writes that corrupt other parts of the bitmap with random memory.

Is this problem introduced by:

8745faa95611 ("md: Use optimal I/O size for last bitmap page")


The opt-size logic in this function is fundamentally flawed: __write_sb_page() should never send a write larger than a page at a time. If it wanted to send multi-page I/Os, it would need a new interface that can build a multi-page bio, not md_super_write().

I agree. And I don't understand that patch yet; it says:

If the bitmap space has enough room, size the I/O for the last bitmap
page write to the optimal I/O size for the storage device.

Does this mean that, for example, if the bitmap space is 128k while
there is only one page (so 124k is unused), and the device's optimal
I/O size is 128k, this patch will issue a 128k I/O instead of just a
4k I/O? And how can this improve performance ...

Thanks,
Kuai




