Re: [PATCH V2] raid6: Add RISC-V SIMD syndrome and recovery calculations

On Mon, Jan 27, 2025 at 09:39:11AM +0100, Paul Menzel wrote:
> Dear Chunyan,
> 
> 
> Thank you for the patch.
> 
> 
> Am 27.01.25 um 07:15 schrieb Chunyan Zhang:
> > The assembly is originally based on the ARM NEON and int.uc, but uses
> > RISC-V vector instructions to implement the RAID6 syndrome and
> > recovery calculations.
> > 
> > Results on QEMU running with the option "-icount shift=0":
> > 
> >    raid6: rvvx1    gen()  1008 MB/s
> >    raid6: rvvx2    gen()  1395 MB/s
> >    raid6: rvvx4    gen()  1584 MB/s
> >    raid6: rvvx8    gen()  1694 MB/s
> >    raid6: int64x8  gen()   113 MB/s
> >    raid6: int64x4  gen()   116 MB/s
> >    raid6: int64x2  gen()   272 MB/s
> >    raid6: int64x1  gen()   229 MB/s
> >    raid6: using algorithm rvvx8 gen() 1694 MB/s
> >    raid6: .... xor() 1000 MB/s, rmw enabled
> >    raid6: using rvv recovery algorithm
> 
> How did you start QEMU and on what host did you run it? Does it change
> between runs? (For me these benchmark values were very unreliable in the
> past on x86 hardware.)

I reported dramatic gains on vector as well in this response [1]. Note
that these gains only appear when QEMU is run with the option "-icount
shift=0"; without it, we see no vector performance gain on QEMU at all.
However, RISC-V vector is known to be less optimized in QEMU, so vector
being less performant in some QEMU configurations is not necessarily
representative of hardware implementations.
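For context, the gen() numbers above measure RAID6 P/Q syndrome
generation. A minimal scalar sketch of that computation, in the spirit of
the kernel's lib/raid6/int.uc (names and interface here are illustrative,
not the kernel's actual API), looks like this; the RVV patch vectorizes
the inner per-byte loop:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Multiply by the generator g = 2 in GF(2^8), polynomial 0x11d. */
static uint8_t gf_mul2(uint8_t v)
{
	return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1d : 0));
}

/*
 * dptr[0 .. disks-3] are data disks; dptr[disks-2] is P and
 * dptr[disks-1] is Q.  P is the plain XOR of the data bytes; Q
 * accumulates g^z * d_z using Horner's rule, walking from the
 * highest data disk down to disk 0.
 */
static void gen_syndrome(int disks, size_t bytes, uint8_t **dptr)
{
	uint8_t *p = dptr[disks - 2];
	uint8_t *q = dptr[disks - 1];

	for (size_t i = 0; i < bytes; i++) {
		uint8_t wp = dptr[disks - 3][i]; /* highest data disk */
		uint8_t wq = wp;

		for (int z = disks - 4; z >= 0; z--) {
			wp ^= dptr[z][i];
			wq = gf_mul2(wq) ^ dptr[z][i];
		}
		p[i] = wp;
		q[i] = wq;
	}
}
```

The rvvxN variants being benchmarked differ essentially in how many
vector registers' worth of bytes they process per loop iteration.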


My full QEMU command is (running on an x86 host):

qemu-system-riscv64 -nographic -m 1G -machine virt -smp 1 \
    -kernel arch/riscv/boot/Image \
    -append "root=/dev/vda rw earlycon console=ttyS0" \
    -drive file=rootfs.ext2,format=raw,id=hd0,if=none \
    -bios default -cpu rv64,v=true,vlen=256,vext_spec=v1.0 \
    -device virtio-blk-device,drive=hd0

This is with QEMU version 9.2.0.


I am also hitting an issue when executing this:

raid6: rvvx1    gen()   717 MB/s
raid6: rvvx2    gen()   734 MB/s
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000020

Only rvvx4 is failing. I applied this patch to 6.13.

- Charlie




