Re: Question about using a DMA/XOR offload engine for md raid5/6.

On 8/11/2022 12:53 PM, Don.Brace@xxxxxxxxxxxxx wrote:
Thanks for your reply.

I'm running on a ProLiant ML110 Gen10 with Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
So that's a Cascade Lake CPU. XOR and PQ are not officially supported by the IOATDMA engine on that CPU. And even if they were, you would need a fairly old kernel to get that support (a 4.x kernel, maybe?); it has been deprecated for a while now.

I enabled the dmatest and ioatdma drivers and see the following when I load dmatest with
modprobe dmatest timeout=2000 iterations=1 channel=dma0chan0 run=1


[ 3527.129243] ioatdma 0000:00:04.0: ioat_check_space_lock: num_descs: 1 (1:1:1)
[ 3527.129256] ioatdma 0000:00:04.0: desc[1]: (0xfff00040->0xfff00080) cookie: 0 flags: 0x0 ctl: 0x00000000 (op: 0x0 int_en: 0 compl: 0)
[ 3527.129268] ioatdma 0000:00:04.0: desc[1]: (0xfff00040->0xfff00080) cookie: 0 flags: 0x3 ctl: 0x00000009 (op: 0x0 int_en: 1 compl: 1)
[ 3527.129276] ioatdma 0000:00:04.0: ioat_tx_submit_unlock: cookie: 4
[ 3527.129282] ioatdma 0000:00:04.0: __ioat_issue_pending: head: 0x2 tail: 0x1 issued: 0x2 count: 0x2
[ 3527.129289] ioatdma 0000:00:04.0: ioat_get_current_completion: phys_complete: 0xfff00040
[ 3527.129295] ioatdma 0000:00:04.0: __cleanup: head: 0x2 tail: 0x1 issued: 0x2
[ 3527.129300] ioatdma 0000:00:04.0: desc[1]: (0xfff00040->0xfff00080) cookie: 4 flags: 0x3 ctl: 0x00000009 (op: 0x0 int_en: 1 compl: 1)
[ 3527.129310] ioatdma 0000:00:04.0: __cleanup: cancel completion timeout
[ 3527.129321] dmatest: dma0chan0-copy0: verifying source buffer...
[ 3527.129376] dmatest: dma0chan0-copy0: verifying dest buffer...
[ 3527.129429] dmatest: dma0chan0-copy0: result #1: 'test passed' with src_off=0xa64 dst_off=0xe7c len=0x17ec (0)
[ 3527.129439] dmatest: dma0chan0-copy0: summary 1 tests, 0 failures 9523.80 iops 47619 KB/s (0)
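
For what it's worth, dmatest can also be driven at runtime through its module parameters under /sys/module/dmatest/parameters/ (see Documentation/driver-api/dmaengine/dmatest.rst). A minimal sketch, assuming the module is already loaded and dma0chan0 is the channel you want to exercise:

    # pick the channel and test parameters, then kick off a run
    echo dma0chan0 > /sys/module/dmatest/parameters/channel
    echo 2000 > /sys/module/dmatest/parameters/timeout
    echo 1 > /sys/module/dmatest/parameters/iterations
    echo 1 > /sys/module/dmatest/parameters/run
    # reading 'wait' blocks until the run has finished
    cat /sys/module/dmatest/parameters/wait
    dmesg | tail

Also note that, if I recall correctly, dmatest spawns one test thread per capability the channel advertises (copy, xor, pq), so only seeing dma0chan0-copy0 threads here is consistent with the channel advertising memcpy but not XOR/PQ.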


Is there a better way to enable tracing to follow what the md raid456 driver is doing?

Maybe look at event tracing for block? I see some trace calls in drivers/md/
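
A rough sketch of what I would try, assuming tracefs is mounted at /sys/kernel/tracing (the exact path and module names may differ on your setup):

    cd /sys/kernel/tracing
    # block-layer trace events
    echo 1 > events/block/enable
    # function tracing limited to the raid/async_tx/ioatdma modules
    echo ':mod:raid456' > set_ftrace_filter
    echo ':mod:async_xor' >> set_ftrace_filter
    echo ':mod:async_tx' >> set_ftrace_filter
    echo ':mod:ioatdma' >> set_ftrace_filter
    echo function > current_tracer
    echo 1 > tracing_on
    # ... run some I/O against the array ...
    echo 0 > tracing_on
    less trace

If the ioatdma functions never show up in the trace while the array is busy, that would suggest the xor work is being done by the CPU fallback rather than the DMA engine.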



From: Dave Jiang <dave.jiang@xxxxxxxxx>
Sent: Wednesday, August 10, 2022 7:27 PM
To: Don Brace - C33706 <Don.Brace@xxxxxxxxxxxxx>; dmaengine@xxxxxxxxxxxxxxx <dmaengine@xxxxxxxxxxxxxxx>
Subject: Re: Question about using a DMA/XOR offload engine for md raid5/6.

On 8/10/2022 3:45 PM, Don.Brace@xxxxxxxxxxxxx wrote:
I have been reading the kernel documentation about using a dmaengine provider/client to see if the md driver will utilize a DMA engine when doing XOR and crypto operations.

I notice that drivers/md/raid5.c calls async_xor_offs(), which lives in crypto/async_tx/async_xor.c and in turn calls async_tx_find_channel().
So I think the answer is yes, if a DMA engine is enabled in the kernel.

Is this correct? I did some tracing while doing I/O to my raid5 with crypto enabled and saw the above functions called, but I am unsure of how data flows through each driver and whether I am even using a DMA offload.
What platform are you running on? There are some ARM SoC DMA engines
that support XOR, such as the Marvell one (mv_xor), as I recall. Intel
Xeon platforms supported that a long while ago, but it has been removed
since Skylake. If you grep drivers/dma/ for DMA_XOR, where the driver
calls dma_cap_set(DMA_XOR, ...), you can see which drivers support
RAID5 offload.
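
Something like this, run against your kernel source tree, should list the candidates (a quick sketch, nothing official):

    # drivers that advertise XOR offload to dmaengine
    grep -rl 'dma_cap_set(DMA_XOR' drivers/dma/
    # and the PQ (RAID6 parity) capability
    grep -rl 'dma_cap_set(DMA_PQ' drivers/dma/

Also keep in mind that, if I remember correctly, async_tx only hands work to a hardware channel when CONFIG_ASYNC_TX_DMA is enabled and one of those drivers has registered a channel with the right capability; otherwise async_xor() just falls back to the synchronous xor_blocks() path on the CPU.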


I have the following drivers loaded:
lsmod | grep raid
raid456               188416  2
async_raid6_recov      24576  1 raid456
async_memcpy           20480  2 raid456,async_raid6_recov
async_pq               20480  2 raid456,async_raid6_recov
async_xor              20480  3 async_pq,raid456,async_raid6_recov
async_tx               20480  5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov
raid6_pq              122880  3 async_pq,raid456,async_raid6_recov
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,raid456

Is there a diagram somewhere that provides any details?
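Not that I know of, but roughly, from memory, the xor path looks like this:

    drivers/md/raid5.c (ops_run_* stripe operations)
      -> crypto/async_tx/async_xor.c: async_xor_offs()
         -> async_tx_find_channel()
            -> channel found: dma driver ->device_prep_dma_xor() + submit (hardware offload)
            -> no channel:    do_sync_xor_offs() -> xor_blocks() (CPU)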

I made the raid5 with crypto using:

mdadm --create /dev/md/raid5 --force --assume-clean --verbose --level=5 \
    --chunk=512K --metadata=1 --data-offset=2048s --raid-devices=5 \
    /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd \
    /dev/mapper/mpathe /dev/mapper/mpathl
cryptsetup -v luksFormat /dev/md/raid5Crypto
cryptsetup luksOpen /dev/md/raid5Crypto testCrypto
mkfs.ext4 /dev/mapper/testCrypto

modprobe dmatest timeout=2000 iterations=1 channel=dma0chan0 run=1


