Re: [PATCH 1/3] zram: allow user to set QUEUE_FLAG_NOWAIT

Hi Jens/Christoph,

On 5/12/23 18:06, Chaitanya Kulkarni wrote:
> On 5/12/23 07:34, Jens Axboe wrote:
>> On 5/12/23 8:31 AM, Christoph Hellwig wrote:
>>> On Fri, May 12, 2023 at 01:29:56AM -0700, Chaitanya Kulkarni wrote:
>>>> Allow user to set the QUEUE_FLAG_NOWAIT optionally using module
>>>> parameter to retain the default behaviour. Also, update respective
>>>> allocation flags in the write path. Following are the performance
>>>> numbers with io_uring fio engine for random read, note that device has
>>>> been populated fully with randwrite workload before taking these
>>>> numbers :-
>>> Why would you add a module option, except to make everyones life hell?
>> Yeah that makes no sense. Either the driver is nowait compatible and
>> it should just set the flag, or it's not.
>>
> send v2 without modparam.
>
> -ck

v2 with the modparam removed is ready to send, but I have a few concerns
about enabling nowait unconditionally for zram :-

From the brd data [1] and zram data [2] on my setup :-

       | IOPs (old->new)         | sys cpu% (old->new)
-------------------------------------------------------
brd    | 1.5x  (3919k -> 5874k)  | ~3x (29 -> 87)
zram   | 1.09x ( 841k ->  916k)  | ~9x (11 -> 97)

brd:-
IOPs increased by               ~1.5  times (50% up)
sys CPU percentage increased by ~3.0  times (200% up)

zram:-
IOPs increased by               ~1.09 times (  9% up)
sys CPU percentage increased by ~8.81 times (781% up)
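For reference, the ratios above can be reproduced from the per-run fio
numbers in [1] and [2]; a small sketch averaging the three runs (note the
~8.81x figure uses the rounded 11% -> 97% endpoints, while the run
averages give ~8.4x):

```python
# Reproduce the IOPs/CPU ratios from the per-run fio numbers
# quoted in [1] and [2] (three runs each, averaged).

def mean(xs):
    return sum(xs) / len(xs)

# brd: IOPS in thousands, sys CPU in percent
brd_iops_off = mean([3872, 3933, 3953])      # ~3919k
brd_iops_on  = mean([5884, 5870, 5870])      # ~5875k
brd_sys_off  = mean([29.84, 29.83, 30.05])
brd_sys_on   = mean([88.35, 86.82, 86.29])

# zram
zram_iops_off = mean([833, 845, 845])        # ~841k
zram_iops_on  = mean([917, 914, 917])        # ~916k
zram_sys_off  = mean([11.31, 11.49, 11.86])
zram_sys_on   = mean([97.05, 97.20, 97.20])

print(f"brd : IOPS x{brd_iops_on / brd_iops_off:.2f}, "
      f"sys cpu x{brd_sys_on / brd_sys_off:.2f}")
print(f"zram: IOPS x{zram_iops_on / zram_iops_off:.2f}, "
      f"sys cpu x{zram_sys_on / zram_sys_off:.2f}")
```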

This comparison clearly demonstrates that zram pays a much higher CPU
cost for its IOPs gain than brd does. Such a significant difference
might suggest a potential CPU regression in zram?

Especially for zram, if applications are not expecting this high CPU
usage, then we'll get regression reports with the default nowait
approach. How about we avoid that with one of the following options?

1. Provide a fix with a module parameter. (Already NACKed.)
2. Allow the user to configure nowait from the command line using zramctl.
    Keep QUEUE_FLAG_NOWAIT disabled by default.
3. Add a generic block layer sysfs attr nowait, like nomerges, since
    similar changes I've posted for pmem [3] and bcache [4] have the same
    issue. This generic way avoids duplicating code in each driver and
    gives the user freedom to set or unset it based on their workload.
    Keep QUEUE_FLAG_NOWAIT disabled by default.

Or please suggest any other way you think is appropriate ...
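For option 3, a rough sketch of what the generic attr could look like,
modeled on the existing bit-flag attrs (nonrot/iostats/random) in
block/blk-sysfs.c. The QUEUE_SYSFS_BIT_FNS()/QUEUE_RW_ENTRY() helper
names are assumed from that file as of recent kernels; treat this as an
untested sketch for discussion, not a patch:

```c
/* Sketch only: a generic per-queue sysfs attr "nowait", modeled on the
 * existing bit-flag attrs in block/blk-sysfs.c.  Assumes the
 * QUEUE_SYSFS_BIT_FNS()/QUEUE_RW_ENTRY() helpers present there.
 */
QUEUE_SYSFS_BIT_FNS(nowait, NOWAIT, 0);
QUEUE_RW_ENTRY(queue_nowait, "nowait");

/* ... with &queue_nowait_entry.attr added to queue_attrs[], and drivers
 * leaving QUEUE_FLAG_NOWAIT clear by default so users opt in via:
 *   echo 1 > /sys/block/zram0/queue/nowait
 */
```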

-ck

[1] brd nowait off vs nowait on :-

linux-block (zram-nowait) #  grep cpu  brd-*fio | column -t

brd-default-nowait-off-1.fio:  cpu  :  usr=6.34%,   sys=29.84%,  ctx=216249754,
brd-default-nowait-off-2.fio:  cpu  :  usr=6.41%,   sys=29.83%,  ctx=217773657,
brd-default-nowait-off-3.fio:  cpu  :  usr=6.37%,   sys=30.05%,  ctx=222667703,

brd-nowait-on-1.fio:           cpu  :  usr=10.18%,  sys=88.35%, ctx=23221,
brd-nowait-on-2.fio:           cpu  :  usr=10.02%,  sys=86.82%, ctx=22396,
brd-nowait-on-3.fio:           cpu  :  usr=10.17%,  sys=86.29%, ctx=22207,

linux-block (zram-nowait) #  grep IOPS  brd-*fio | column -t

brd-default-nowait-off-1.fio:  read:  IOPS=3872k,  BW=14.8GiB/s
brd-default-nowait-off-2.fio:  read:  IOPS=3933k,  BW=15.0GiB/s
brd-default-nowait-off-3.fio:  read:  IOPS=3953k,  BW=15.1GiB/s

brd-nowait-on-1.fio:           read:  IOPS=5884k,  BW=22.4GiB/s
brd-nowait-on-2.fio:           read:  IOPS=5870k,  BW=22.4GiB/s
brd-nowait-on-3.fio:           read:  IOPS=5870k,  BW=22.4GiB/s

linux-block (zram-nowait) #  grep slat  brd-*fio | column -t

brd-default-nowait-off-1.fio:  slat  (nsec):  min=440,   max=56072k,  avg=9579.17
brd-default-nowait-off-2.fio:  slat  (nsec):  min=440,   max=42743k,  avg=9468.83
brd-default-nowait-off-3.fio:  slat  (nsec):  min=431,   max=32493k,  avg=9532.96

brd-nowait-on-1.fio:           slat  (nsec):  min=1523,  max=37786k,  avg=7596.58
brd-nowait-on-2.fio:           slat  (nsec):  min=1503,  max=40101k,  avg=7612.64
brd-nowait-on-3.fio:           slat  (nsec):  min=1463,  max=37298k,  avg=7610.89

[2] zram nowait off vs nowait on:-

linux-block (zram-nowait) #  grep IOPS  zram-*fio | column -t
zram-default-nowait-off-1.fio:  read:  IOPS=833k,  BW=3254MiB/s
zram-default-nowait-off-2.fio:  read:  IOPS=845k,  BW=3301MiB/s
zram-default-nowait-off-3.fio:  read:  IOPS=845k,  BW=3301MiB/s

zram-nowait-on-1.fio:           read:  IOPS=917k,  BW=3582MiB/s
zram-nowait-on-2.fio:           read:  IOPS=914k,  BW=3569MiB/s
zram-nowait-on-3.fio:           read:  IOPS=917k,  BW=3581MiB/s

linux-block (zram-nowait) #  grep cpu  zram-*fio | column -t
zram-default-nowait-off-1.fio:  cpu  :  usr=5.18%,  sys=11.31%, ctx=39945072
zram-default-nowait-off-2.fio:  cpu  :  usr=5.20%,  sys=11.49%, ctx=40591907
zram-default-nowait-off-3.fio:  cpu  :  usr=5.31%,  sys=11.86%, ctx=40252142

zram-nowait-on-1.fio:           cpu  :  usr=1.87%,  sys=97.05%, ctx=24337
zram-nowait-on-2.fio:           cpu  :  usr=1.83%,  sys=97.20%, ctx=21452
zram-nowait-on-3.fio:           cpu  :  usr=1.84%,  sys=97.20%, ctx=21051

linux-block (zram-nowait) #  grep slat  zram-*fio | column -t
zram-default-nowait-off-1.fio:  slat  (nsec):  min=420,   max=6960.6k,  avg=1859.09
zram-default-nowait-off-2.fio:  slat  (nsec):  min=411,   max=5387.7k,  avg=1848.79
zram-default-nowait-off-3.fio:  slat  (nsec):  min=410,   max=6914.2k,  avg=1902.51

zram-nowait-on-1.fio:           slat  (nsec):  min=1092,  max=32268k,   avg=51605.49
zram-nowait-on-2.fio:           slat  (nsec):  min=1052,  max=7676.3k,  avg=51806.82
zram-nowait-on-3.fio:           slat  (nsec):  min=1062,  max=10444k,   avg=51625.89

[3] https://lore.kernel.org/nvdimm/b90ff1ba-b22c-bb37-db0a-4c46bb5f2a06@xxxxxxxxxx/T/#t
[4] https://marc.info/?l=linux-bcache&m=168388522305946&w=1
https://marc.info/?l=linux-bcache&m=168401821129652&w=2
