Re: DMMP request-queue vs. BiO

I've been asked to move this conversation to a public thread on the upstream mailing lists.

Background:

At ALPSS last month (Sept. 2024) Hannes and Christoph spoke with Chris and me about how they'd like to remove the request interface from DMMP, and asked if Red Hat would be willing to help out by running some DMMP/Bio vs. DMMP/req performance tests and sharing the results. The idea was: with some of the recent performance improvements in the bio path upstream, there may not be much of a performance difference left between these two code paths, and they would like Red Hat's help in demonstrating that.

So Chris and I returned to Red Hat and broached the subject here internally. The Red Hat performance team has agreed to work with us on an ad hoc basis, and we've made some preliminary plans to build a test bed that can be used to run performance tests with DMMP on an upstream kernel over iSCSI and FCP. We then talked to the DMMP guys about it. They have some questions and asked me to discuss this topic in an email thread on linux-scsi, linux-block and dm-devel.
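
For the test matrix, my understanding is that the interface under test can be selected per map with the queue_mode option in multipath.conf. A minimal, untested excerpt along these lines is below; the WWIDs and aliases are placeholders, and the exact set of accepted values (bio, rq, mq) depends on the multipath-tools and kernel versions on the test bed:

  # Hypothetical multipath.conf excerpt -- WWIDs/aliases are placeholders;
  # check the multipath.conf(5) man page on the test bed for supported values.
  multipaths {
          multipath {
                  wwid        3600a0980000000000000000000000001
                  alias       mpath_bio
                  queue_mode  bio     # bio-based dm-mpath interface
          }
          multipath {
                  wwid        3600a0980000000000000000000000002
                  alias       mpath_rq
                  queue_mode  rq      # request-based (blk-mq) interface
          }
  }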

Some questions are:

What are the exact patches which make us think the BIO path is now performant?

Is it Ming's immutable bvecs and moving the splitting down to the driver?

I've been told these changes are only applicable when a filesystem is involved. Databases can make direct use of the dmmp device, so late bio splitting is not applicable for them; it is the filesystems that are building the larger bios. See the comments from Hannes and Christoph below.

I think Red Hat can help out with the performance testing, but we will need answers to some of these questions. It will also be important to determine exactly what kind of workload we should use for any DMMP performance tests. Will a simple workload generated with fio work, or do we need to test some actual database workloads as well?
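
To make that concrete, the kind of simple fio workload I have in mind is sketched below. The device path, mount point, sizes and tunables are placeholders and would need to be adapted to the actual test bed. The first run drives O_DIRECT random reads straight at the multipath device (the database-style access pattern where late bio splitting doesn't come into play); the second drives buffered sequential writes through a filesystem on top of the same map, where the filesystem gets to build the large bios:

  # Hypothetical fio runs -- /dev/mapper/mpatha and /mnt/mpatha are placeholders.

  # 1) Raw direct I/O against the multipath device (database-like access).
  fio --name=raw-randread --filename=/dev/mapper/mpatha \
      --ioengine=libaio --direct=1 --rw=randread --bs=4k \
      --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

  # 2) Buffered sequential writes through a filesystem on the same map.
  fio --name=fs-seqwrite --directory=/mnt/mpatha --size=8g \
      --ioengine=psync --rw=write --bs=1m --numjobs=4 \
      --runtime=60 --time_based --group_reporting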

Please reply to this public thread with your thoughts and ideas.

Thanks,

John A. Meneghini
Senior Principal Platform Storage Engineer
RHEL SST - Platform Storage Group
jmeneghi@xxxxxxxxxx

On 11/5/24 05:33, Christoph Hellwig wrote:
On Tue, Nov 05, 2024 at 08:44:45AM +0100, Hannes Reinecke wrote:
> I think the big change is really Ming's immutable bvecs and moving the
> splitting down to the driver.  This means bios are much bigger (and
> even bigger now with large folios for file systems supporting it).
>
> Exactly. With the current code we should never merge requests; all
> data should be assembled in the bio already.
> (I wonder if we could trigger a WARN_ON if request merging is
> attempted ...)

Request merging is obviously still pretty common.  For one, because
a lot of crappy file systems submit a buffer_head per block (none of
those should be relevant for multipathing), but also because we reach
the bio size limit at some point and just need to split.  While large
folios reduce that a lot, not all file systems that matter support them
(that's what the plug callback would fix IFF it turns out to be an
issue).  And last but not least, I/O schedulers delay I/O to be able to
do better merging.  My theory is that this is not important for the kind
of storage we use multipathing for, or rather not for the pathing
decisions.
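
As a cheap way to check how much merging is actually happening on our test bed, without patching the kernel, we could watch the merge counters on the multipath map and on one of its path devices across a fio run. A rough sketch is below; dm-3 and sdc are placeholder device names, and fields 2 and 6 of /sys/block/<dev>/stat are the read-merge and write-merge counters (see Documentation/admin-guide/iostats.rst):

  # Hypothetical check -- dm-3 / sdc are placeholders for the multipath map
  # and one of its SCSI path devices.
  for dev in dm-3 sdc; do
      echo "$dev before: $(awk '{print $2, $6}' /sys/block/$dev/stat)"
  done

  fio --name=merge-probe --filename=/dev/mapper/mpatha --ioengine=libaio \
      --direct=1 --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based

  for dev in dm-3 sdc; do
      echo "$dev after:  $(awk '{print $2, $6}' /sys/block/$dev/stat)"
  done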




