nr_requests mdraid

All,
I'm working on creating RAID5 or RAID6 arrays of 800K-IOPS NVMe drives. Each drive performs well at a queue depth of 128, and I set nr_requests to 1023 wherever the device allows it. To max out the queue depth on each RAID member, I'd like to set the sysfs nr_requests on the md device to something greater than 128, e.g. #raid members * 128. Even though /sys/block/md127/queue/nr_requests is mode 644, any attempt to change nr_requests as root fails with "write error: Invalid argument". When I hit the md device with random reads, my NVMe drives show 100% utilization but deliver only 160K IOPS, because they are starved of queue depth.
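
For reference, this is roughly what I'm trying to script (a sketch; md127 below has 9 members, and the final write is the one that fails):

    members=9                                  # member count of md127
    depth=$(( members * 128 ))                 # 1152, i.e. #raid members * 128
    echo "$depth" > /sys/block/md127/queue/nr_requests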

Am I doing something silly?   
Regards,
Jim Finlayson

[root@hp-dl325 ~]# cd /sys/block/md127/queue
[root@hp-dl325 queue]# ls
add_random            discard_zeroes_data  io_timeout              max_segments      optimal_io_size      unpriv_sgio
chunk_sectors         fua                  logical_block_size      max_segment_size  physical_block_size  wbt_lat_usec
dax                   hw_sector_size       max_discard_segments    minimum_io_size   read_ahead_kb        write_cache
discard_granularity   io_poll              max_hw_sectors_kb       nomerges          rotational           write_same_max_bytes
discard_max_bytes     io_poll_delay        max_integrity_segments  nr_requests       rq_affinity          write_zeroes_max_bytes
discard_max_hw_bytes  iostats              max_sectors_kb          nr_zones          scheduler            zoned
[root@hp-dl325 queue]# cat nr_requests
128
[root@hp-dl325 queue]# ls -l nr_requests
-rw-r--r-- 1 root root 4096 Oct 21 18:55 nr_requests
[root@hp-dl325 queue]# echo 1023 > nr_requests
-bash: echo: write error: Invalid argument
[root@hp-dl325 queue]# echo 128 > nr_requests
-bash: echo: write error: Invalid argument
[root@hp-dl325 queue]# pwd
/sys/block/md127/queue
[root@hp-dl325 queue]#
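
For comparison, I spot-check nr_requests on the member devices with a quick loop like this (a sketch; the glob matches the NVMe namespaces on my box):

    for f in /sys/block/nvme*n1/queue/nr_requests; do
        printf '%s: %s\n' "$f" "$(cat "$f")"   # read-only check of each member's depth
    done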

[root@hp-dl325 queue]# mdadm --version
mdadm - v4.1 - 2018-10-01
[root@hp-dl325 queue]# uname -r
4.18.0-193.el8.x86_64
[root@hp-dl325 queue]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)
[root@hp-dl325 queue]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md126 : active raid5 nvme9n1[10] nvme8n1[8] nvme7n1[7] nvme6n1[6] nvme5n1[5] nvme4n1[4] nvme3n1[3] nvme2n1[2] nvme1n1[1] nvme0n1[0]
      16877177856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      bitmap: 0/14 pages [0KB], 65536KB chunk

md127 : active raid5 nvme18n1[9] nvme17n1[7] nvme16n1[6] nvme15n1[5] nvme14n1[4] nvme13n1[3] nvme12n1[2] nvme11n1[1] nvme10n1[0]
      15001935872 blocks super 1.2 level 5, 512k chunk, algorithm 2 [9/9] [UUUUUUUUU]
      bitmap: 0/14 pages [0KB], 65536KB chunk

unused devices: <none>

[root@hp-dl325 queue]# cat /etc/udev/rules.d/99-local.rules
SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="md*", ATTR{md/sync_speed_max}="2000000",ATTR{md/group_thread_cnt}="16", ATTR{md/stripe_cache_size}="1024" ATTR{queue/nomerges}="2", ATTR{queue/nr_requests}="1023", ATTR{queue/rotational}="0", ATTR{queue/rq_affinity}="2", ATTR{queue/scheduler}="none", ATTR{queue/add_random}="0", ATTR{queue/max_sectors_kb}="4096"
SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="nvme*[0-9]n*[0-9]", ATTRS{model}=="*PM1725a*", ATTR{queue/nomerges}="2", ATTR{queue/nr_requests}="1023", ATTR{queue/rotational}="0", ATTR{queue/rq_affinity}="2", ATTR{queue/scheduler}="none", ATTR{queue/add_random}="0", ATTR{queue/max_sectors_kb}="4096"


