Re: [PATCH] md: Use optimal I/O size for last bitmap page

On Thu, Feb 16, 2023 at 3:52 PM Jonathan Derrick
<jonathan.derrick@xxxxxxxxx> wrote:
>
>
>
> On 2/10/2023 10:32 AM, Song Liu wrote:
> > Hi Jonathan,
> >
> > On Thu, Feb 9, 2023 at 12:38 PM Jonathan Derrick
> > <jonathan.derrick@xxxxxxxxx> wrote:
> >>
> >> Hi Song,
> >>
> >> Any thoughts on this?
> >
> > I am really sorry that I missed this patch.
> >
> >>
> >> On 1/17/2023 5:53 PM, Jonathan Derrick wrote:
> >>> From: Jon Derrick <jonathan.derrick@xxxxxxxxx>
> >>>
> >>> If the bitmap space has enough room, size the I/O for the last bitmap
> >>> page write to the optimal I/O size for the storage device. The expanded
> >>> write is checked to ensure it does not overrun the data or metadata.
> >>>
> >>> This change helps increase performance by preventing unnecessary
> >>> device-side read-mod-writes due to non-atomic write unit sizes.
> >>>
> >>> Example biosnoop log. Device LBA size 512, optimal I/O size 4k:
> >>> Before:
> >>> Time        Process        PID     Device      LBA        Size      Lat
> >>> 0.843734    md0_raid10     5267    nvme0n1   W 24         3584      1.17
> >>> 0.843933    md0_raid10     5267    nvme1n1   W 24         3584      1.36
> >>> 0.843968    md0_raid10     5267    nvme1n1   W 14207939968 4096      0.01
> >>> 0.843979    md0_raid10     5267    nvme0n1   W 14207939968 4096      0.02
> >>>
> >>> After:
> >>> Time        Process        PID     Device      LBA        Size      Lat
> >>> 18.374244   md0_raid10     6559    nvme0n1   W 24         4096      0.01
> >>> 18.374253   md0_raid10     6559    nvme1n1   W 24         4096      0.01
> >>> 18.374300   md0_raid10     6559    nvme0n1   W 11020272296 4096      0.01
> >>> 18.374306   md0_raid10     6559    nvme1n1   W 11020272296 4096      0.02
> >
> > Do we see significant improvements from I/O benchmarks?
> Yes. With lbaf=512, optimal I/O size=4k:
>
> Without patch:
>   write: IOPS=1570, BW=6283KiB/s (6434kB/s)(368MiB/60001msec); 0 zone resets
> With patch:
>   write: IOPS=59.7k, BW=233MiB/s (245MB/s)(13.7GiB/60001msec); 0 zone resets

The difference is much bigger than I expected. Given this big improvement, I
think we should ship it. Unfortunately, we are too late for 6.3, so let's plan
to ship this in the 6.4 release. I am on vacation this week; I will work on
this afterwards.

Thanks,
Song
