Re: [PATCH] [v2]aacraid: Reply queue mapping to CPUs based on IRQ affinity

On 3/10/25 12:44 PM, Hannes Reinecke wrote:
> On 2/24/25 22:15, John Meneghini wrote:
>> On 2/20/25 9:38 PM, Martin K. Petersen wrote:
>>> If go-faster stripes are desired in specific configurations, then make
>>> the performance mode an opt-in. Based on your benchmarks, however, I'm
>>> not entirely convinced it's worth it...

>> I agree. So how about we just take out the aac_cpu_offline_feature modparam...?

>> Alternatively, we can replace the modparam with a Kconfig option. The
>> default setting for the new Kconfig option will be offline_cpu_support_on
>> and performance_mode_off. That way we can ship a default kernel
>> configuration that provides a working aacraid driver which safely
>> supports off-lining CPUs. If people are really unhappy with the
>> performance, and they don't care about offline CPU support, they can
>> re-config their kernel.

>> Personally I prefer option 1, but we need the thoughts of the upstream users.

>> I've added the original authors of Bugzilla 217599 [1] to the cc list to
>> get their attention and review.
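
For illustration only, a minimal sketch of how the Kconfig variant proposed
above could look on the C side; the CONFIG_SCSI_AACRAID_CPU_OFFLINE symbol
and the aac_cpu_offline_enabled() helper are hypothetical, not taken from
the posted patch:

#include <linux/kconfig.h>

static inline bool aac_cpu_offline_enabled(void)
{
	/*
	 * Compile-time default instead of a modparam: distro kernels
	 * ship with the option enabled (offline-CPU support on,
	 * performance mode off); users who don't care about CPU
	 * offlining can rebuild with the option disabled.
	 */
	return IS_ENABLED(CONFIG_SCSI_AACRAID_CPU_OFFLINE);
}

The driver would then test aac_cpu_offline_enabled() wherever it currently
tests the module parameter.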

> Do we have an idea what these 'specific use-cases' are?

Yes. The use case is offline CPU support. We have customers who are using
the aacraid driver to support their main storage. They have hundreds of
systems deployed like this; they started using the offline CPU function and
found the problem in Bugzilla 217599. The customer is currently using this
patch (minus the modparam) and it solves their problem.

> And how much performance impact do we have?

This was discussed earlier in this thread.

With aac_cpu_offline_feature=1 fio statistics show:

# fio -filename=/home/test1G.img -iodepth=64 -thread -rw=randwrite -ioengine=libaio -bs=4K -direct=1 -runtime=300 -time_based -size=1G -group_reporting -name=mytest -numjobs=4

  WRITE: bw=495MiB/s (519MB/s), 495MiB/s-495MiB/s (519MB/s-519MB/s), io=145GiB (156GB), run=300001-300001msec

With aac_cpu_offline_feature=0 fio statistics show:

# fio -filename=/home/test1G.img -iodepth=64 -thread -rw=randwrite -ioengine=libaio -bs=4K -direct=1 -runtime=300 -time_based -size=1G -group_reporting -name=mytest -numjobs=4

  WRITE: bw=505MiB/s (529MB/s), 505MiB/s-505MiB/s (529MB/s-529MB/s), io=148GiB (159GB), run=300001-300001msec

Of course, this is a very primitive test. As always, your performance results will vary based upon workload, system size, etc.

Our customer reported the following results with this patch when aac_cpu_offline_feature=1.  This was with their specific workload.

The test configuration is a 3x disk RAID 5.

Chunk/Stripe size:
Stripe-unit size : 256 KB
Full stripe size : 512 KB

Description            Unpatched    Patched
Random reads (IOPS)    103K         114K
clat avg (usec)        2500         2100
Random writes (IOPS)   17.7K        18K
clat avg (usec)        14400        13300

fio was used to perform 4K random I/O with 16 jobs and an iodepth of 16,
which mimics the customer's working environment/application I/O.

> I could imagine a single-threaded workload driving just one blk-mq queue
> would benefit from spreading out onto several interrupts.

Yes, I think the performance results with this patch can vary greatly.
> But then, this would be true for most of the multiqueue drivers; and
> indeed quite a few drivers (e.g. megaraid_sas and mpt3sas with
> 'smp_affinity_enable') have the very same module option.
OK, fine... but until that option is available... I think we need to do something with this driver.
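
For context, a rough sketch of the pattern being referred to, assuming it
follows the usual shape in those drivers: a boolean module parameter gating
whether MSI-X vectors are allocated with managed (CPU-affine) spreading.
The function and wiring here are illustrative, not copied from either
driver:

#include <linux/module.h>
#include <linux/pci.h>

static bool smp_affinity_enable = true;
module_param(smp_affinity_enable, bool, 0444);
MODULE_PARM_DESC(smp_affinity_enable,
		 "SMP affinity feature enable/disable, default=enabled(1)");

static int example_alloc_irq_vectors(struct pci_dev *pdev,
				     unsigned int nvec)
{
	unsigned int flags = PCI_IRQ_MSIX;

	if (smp_affinity_enable)
		/*
		 * Managed affinity: the IRQ core spreads the vectors
		 * across CPUs and shuts a vector down when all CPUs in
		 * its mask go offline, which is exactly where reply
		 * queue mapping and CPU offlining interact.
		 */
		flags |= PCI_IRQ_AFFINITY;

	return pci_alloc_irq_vectors(pdev, 1, nvec, flags);
}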

> Wouldn't it be an idea to check if we can make this a generic blk-mq
> queue option instead of having each driver implement the same
> functionality on its own?
>
> Topic for LSF?

I'd be happy to talk about this at LSF.
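
To make the idea concrete, a purely speculative sketch of what such a
generic option might look like; no such per-host flag exists today, and
everything here except the two blk-mq mapping helpers is hypothetical:

#include <linux/blk-mq.h>
#include <linux/blk-mq-pci.h>

struct example_host {
	struct pci_dev *pdev;
	bool follow_irq_affinity;	/* the hypothetical generic option */
};

static void example_map_queues(struct blk_mq_tag_set *set)
{
	struct blk_mq_queue_map *qmap = &set->map[HCTX_TYPE_DEFAULT];
	struct example_host *h = set->driver_data;

	if (h->follow_irq_affinity)
		/* Map hctxs to the device's managed MSI-X affinity masks. */
		blk_mq_pci_map_queues(qmap, h->pdev, 0);
	else
		/* Default software mapping, independent of IRQ affinity. */
		blk_mq_map_queues(qmap);
}

One flag in the midlayer would replace the per-driver modparams
(aac_cpu_offline_feature, smp_affinity_enable, ...) with a single,
uniformly named knob.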
> Cheers,
>
> Hannes




