Re: [RFC PATCH] mm: count zram read/write into PSI_IO_WAIT

Zhaoyang Huang writes:
> No. Block-device-related D-state is already counted via
> psi_dequeue(io_wait). What I am proposing here is to not ignore the
> contribution to non-productive time from huge numbers of in-context
> swap in/outs (zram-like). That would make IO pressure more accurate
> and consistent with the PSWPIN/PSWPOUT counters. It is analogous to
> counting the IO time spent in filemap_fault->wait_on_page_bit_common
> as psi_mem_stall, which raises memory pressure on account of IO.

I think part of the confusion here is that the name "io" doesn't really just mean "io", it means "disk I/O". As in, we are targeting real, physical or network disk I/O. Of course, we can only do what's reasonable when the device we're accounting is layers upon layers that eventually end at a memory-backed device, but _intentionally_ polluting it with more memory-bound accesses doesn't make any sense when we already have separate accounting for memory. Why would anyone want that?
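
For reference, that separate memory accounting is the memstall interface. Below is a minimal sketch of how a memory-backed swap-in could be wrapped; psi_memstall_enter()/psi_memstall_leave() are the real interface from include/linux/psi.h, while example_swap_in() and read_from_swap_device() are made-up names used only for illustration:

#include <linux/mm.h>
#include <linux/psi.h>

/* Hypothetical stand-in for the actual swap read, e.g. a zram bio. */
static void read_from_swap_device(struct page *page);

static void example_swap_in(struct page *page)
{
	unsigned long pflags;

	/*
	 * Wall-clock time between enter and leave is charged to the
	 * task as a memory stall (PSI "memory" pressure), not as io.
	 */
	psi_memstall_enter(&pflags);
	read_from_swap_device(page);
	psi_memstall_leave(&pflags);
}

So time stalled swapping into or out of a memory-backed device already shows up where it belongs: under memory pressure.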

I'm with Johannes here: I think this would actively make memory pressure monitoring less useful. This is a NAK from my perspective as someone who actually uses these things in production.



