On 2022-04-24 02:00, Guoqing Jiang wrote:
>
>
> On 4/22/22 12:02 AM, Logan Gunthorpe wrote:
>>
>> On 2022-04-21 02:45, Xiao Ni wrote:
>>> Could you share the commands to get the test result (lock contention
>>> and performance)?
>> Sure. The performance we were focused on was large block writes. So we
>> set up raid5 instances with varying numbers of disks and ran the
>> following fio script directly on the drive.
>>
>> [simple]
>> filename=/dev/md0
>> ioengine=libaio
>> rw=write
>> direct=1
>> size=8G
>> blocksize=2m
>> iodepth=16
>> runtime=30s
>> time_based=1
>> offset_increment=8G
>> numjobs=12
>>
>> (We also played around with tuning this but didn't find substantial
>> changes once the bottleneck was hit.)
>
> Nice. I suppose other IO patterns keep the same performance as before.
>
>> We tuned md with parameters like:
>>
>> echo 4 > /sys/block/md0/md/group_thread_cnt
>> echo 8192 > /sys/block/md0/md/stripe_cache_size
>>
>> For lock contention stats, we just used lockstat[1]; roughly like:
>>
>> echo 1 > /proc/sys/kernel/lock_stat
>> fio test.fio
>> echo 0 > /proc/sys/kernel/lock_stat
>> cat /proc/lock_stat
>>
>> And compared the before and after.
>
> Thanks for your effort. Besides the performance test, please try to run
> the mdadm test suite to avoid regressions.

Yeah, is there any documentation for that? I tried to look into it but
couldn't figure out how it's run.

I do know that lkp-tests has run it on this series, as I did get an
error from it. But while I'm pretty sure that error has been resolved,
I was never able to figure out how to run them locally.

Thanks,

Logan
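
A rough sketch of the measurement flow quoted above as a single script,
assuming the fio job file is saved as test.fio, the array is already
assembled as /dev/md0, and the kernel was built with CONFIG_LOCK_STAT
(the output filename based on uname -r is just an illustrative choice,
not something from the thread):

  #!/bin/sh
  # md tuning quoted above (these values are the ones from the thread,
  # not a general recommendation).
  echo 4 > /sys/block/md0/md/group_thread_cnt
  echo 8192 > /sys/block/md0/md/stripe_cache_size

  # Enable lock statistics, run the workload, then stop collection.
  echo 1 > /proc/sys/kernel/lock_stat
  fio test.fio
  echo 0 > /proc/sys/kernel/lock_stat

  # Save the contention stats so runs on the patched and unpatched
  # kernels can be compared afterwards.
  cat /proc/lock_stat > lock_stat.$(uname -r).txt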