Re: [PATCH 1/3] zram: allow user to set QUEUE_FLAG_NOWAIT

On 5/16/23 3:32 PM, Chaitanya Kulkarni wrote:
> On 5/16/23 13:43, Jens Axboe wrote:
>> On 5/16/23 2:41 PM, Chaitanya Kulkarni wrote:
>>> On 5/16/23 06:08, Sergey Senozhatsky wrote:
>>>> On (23/05/16 05:51), Chaitanya Kulkarni wrote:
>>>>> V2 with the modparam removed is ready to send, but I have a few
>>>>> concerns about enabling nowait unconditionally for zram :-
>>>>>
>>>>>    From brd data [1] and zram data [2] from my setup :-
>>>>>
>>>>>       |  IOPs (old -> new)   | sys cpu% (old -> new)
>>>>> ------+----------------------+----------------------
>>>>> brd   | 1.5x  (3919 -> 5874) | 3x (29 -> 87)
>>>>> zram  | 1.09x (  29 ->   87) | 9x (11 -> 97)
>>>>>
>>>>> brd:-
>>>>> IOPs increased by               ~1.5  times (50% up)
>>>>> sys CPU percentage increased by ~3.0  times (200% up)
>>>>>
>>>>> zram:-
>>>>> IOPs increased by               ~1.09 times (  9% up)
>>>>> sys CPU percentage increased by ~8.81 times (781% up)
>>>>>
>>>>> This comparison clearly demonstrates that zram experiences a much more
>>>>> substantial CPU load relative to the increase in IOPs compared to brd.
>>>>> Such a significant difference might suggest a potential CPU regression
>>>>> in zram ?
>>>>>
>>>>> Especially for zram, if applications are not expecting this high CPU
>>>>> usage then we'll get regression reports with the default nowait
>>>>> approach. How about we avoid that with one of the following
>>>>> options ?
>>>> Well, zram performs decompression/compression on the CPU (per-CPU
>>>> crypto streams) for each IO operation, so zram IO is CPU intensive.
>>> and that is exactly why I've raised this issue - are you okay with that ?
>>> I'll send V2 with nowait enabled by default ..
>> Did you check that it's actually nowait-sane to begin with? I spent
>> literally 30 seconds on that when you sent the first patch, and the
>> partial/sync path does not look kosher.
>>
> 
> I did check for it, and it needs nowait-sane preparation in V2 with
> something like this [1], plus returning an error with bio_wouldblock_error()

That looks pretty awful... Things like:

> @@ -779,8 +780,9 @@ static void zram_sync_read(struct work_struct *work)
>       struct zram_work *zw = container_of(work, struct zram_work, work);
>       struct bio_vec bv;
>       struct bio bio;
> +    blk_opf_t nowait = zw->nowait ? REQ_NOWAIT : 0;
> 
> -    bio_init(&bio, zw->zram->bdev, &bv, 1, REQ_OP_READ);
> +    bio_init(&bio, zw->zram->bdev, &bv, 1, REQ_OP_READ | nowait);
>       bio.bi_iter.bi_sector = zw->entry * (PAGE_SIZE >> 9);
>       __bio_add_page(&bio, zw->page, PAGE_SIZE, 0);
>       zw->error = submit_bio_wait(&bio);
>

make absolutely no sense, setting REQ_NOWAIT and then waiting on IO
completion.

> when we end up in read_from_bdev_sync() when it is not called from
> writeback_store(). (rough changes [1])

writeback_store() will never have nowait set. Outside of that, pushing
the nowait handling to the workqueue makes no sense, that flag only
applies for inline issue.

> But after taking performance numbers repeatedly, the main concern I
> failed to resolve is above numbers & default enabling nowait for
> zram ...

It's a choice you make if you do nowait IO; normal block IO would not
change at all. So I think it's fine, but the driver needs to be in a
sane state to support nowait to begin with.

-- 
Jens Axboe



