Re: DMMP request-queue vs. BiO

Hi


On Thu, 7 Nov 2024, John Meneghini wrote:

> I've been asked to move this conversation to a public thread on the upstream
> mailing lists.
> 
> Background:
> 
> At ALPSS last month (Sept. 2024) Hannes and Christoph spoke with Chris and me
> about how they'd like to remove the request interface from DMMP and asked
> whether Red Hat would be willing to help out by running some DMMP/Bio vs.
> DMMP/req performance tests and sharing the results. The idea was: with some
> of the recent performance improvements in the BIO path upstream, we believe
> there may not be much of a performance difference between these two code
> paths, and we would like Red Hat's help in demonstrating that.
> 
> So Chris and I returned to Red Hat and broached this subject here internally.
> The Red Hat performance team has agreed to work with us on an ad hoc basis to
> do this, and we've made some preliminary plans to build a test bed that can
> be used to run performance tests with DMMP on an upstream kernel using iSCSI
> and FCP. Then we talked to the DMMP guys about it. They have some questions
> and asked me to discuss this topic in an email thread on linux-scsi,
> linux-block and dm-devel.
> 
> Some questions are:
> 
> What are the exact patches which make us think the BIO path is now performant?

There are too many changes that help increase bio size to single out one or 
a few specific patches.

> Is it Ming's immutable bvecs and moving the splitting down to the driver?

Yes, splitting bios at the driver helps.

Folios also help with building larger bios.

> I've been told these changes are only applicable if a filesystem is involved.
> Databases can make direct use of the dmmp device, so late bio splitting is
> not applicable to them. It is filesystems that are building larger bios. See
> the comments from Hannes and Christoph below.

Databases should use direct I/O, and with direct I/O they can generate bios 
as big as they want.

Note that if a database uses a buffered block device, performance will be 
suboptimal, because the buffering mechanism can't create large bios - it 
only sends page-sized bios. But that setup is not expected to be used; the 
database should either use a block device with direct I/O or a filesystem 
with or without direct I/O.
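For illustration, here is a minimal sketch (not from the original mail) of 
the access pattern meant above: opening a multipath device with O_DIRECT and 
issuing one large, aligned read that can be submitted as a single big bio, 
subject to device limits. The device path and the 1 MiB size are just 
assumptions.

/* Minimal sketch: a database-style reader using direct I/O on a
 * dm-multipath device.  The device path is hypothetical. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        const size_t len = 1024 * 1024;         /* one 1 MiB request */
        void *buf;
        ssize_t n;
        int fd;

        /* O_DIRECT needs buffer/offset/length aligned to the logical
         * block size; 4096 is a common safe alignment. */
        if (posix_memalign(&buf, 4096, len))
                return 1;

        fd = open("/dev/mapper/mpatha", O_RDONLY | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        n = pread(fd, buf, len, 0);     /* ideally one large bio */
        if (n < 0)
                perror("pread");
        else
                printf("read %zd bytes with direct I/O\n", n);

        close(fd);
        free(buf);
        return 0;
}

With buffered I/O on the raw block device, the same 1 MiB read would instead 
go through the page cache in page-sized pieces, which is the suboptimal case 
described above.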

> I think Red Hat can help out with the performance testing, but we will need
> to answer some of these questions. It will also be important to determine
> exactly what kind of workload we should use with any DMMP performance tests.
> Will a simple workload generated with fio work, or do we need to test some
> actual database workloads as well?

I suggest using a real-world workload - you could use something that you 
already use to verify the performance of RHEL.

The problem with fio is that it generates I/O at random locations, so no bio 
merging is possible and it will just show the IOPS of the underlying storage 
device.
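To illustrate the difference (just a sketch, with a hypothetical device 
path): a random small-block job like the first one below mostly measures raw 
device IOPS, while a sequential large-block job like the second generates 
large requests and is closer to what a filesystem or database doing big 
direct reads would produce.

  # random 4k direct reads - little or no merging, measures device IOPS
  fio --name=randread --filename=/dev/mapper/mpatha --direct=1 \
      --ioengine=libaio --iodepth=32 --rw=randread --bs=4k \
      --runtime=60 --time_based

  # sequential 1M direct reads - large bios, closer to a throughput test
  fio --name=seqread --filename=/dev/mapper/mpatha --direct=1 \
      --ioengine=libaio --iodepth=32 --rw=read --bs=1M \
      --runtime=60 --time_based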

> Please reply to this public thread with your thoughts and ideas.
> 
> Thanks,
> 
> John A. Meneghini
> Senior Principal Platform Storage Engineer
> RHEL SST - Platform Storage Group
> jmeneghi@xxxxxxxxxx

Mikulas