On Thu, Nov 28, 2024 at 05:22:41AM +0100, Mateusz Guzik wrote:
> This means that the folio waiting stuff has poor scalability, but
> without digging into it I have no idea what can be done.  The easy way

Actually the easy way is to change:

#define PAGE_WAIT_TABLE_BITS 8

to a larger number.

> out would be to speculatively spin before buggering off, but one would
> have to check what happens in real workloads -- presumably the lock
> owner can be off cpu for a long time (I presume there is no way to
> store the owner).

So ...

 - There's no space in struct folio to put a rwsem.
 - But we want to be able to sleep waiting for a folio to (eg) do I/O.

This is the solution we have.  For the read case, there are three
important bits in folio->flags to pay attention to:

 - PG_locked.  This is held during the read.
 - PG_uptodate.  This is set if the read succeeded.
 - PG_waiters.  This is set if anyone is waiting for PG_locked [*]

The first thread comes along, allocates a folio, locks it, and inserts
it into the mapping.  The second thread comes along, finds the folio,
sees it's !uptodate, sets the waiter bit, and adds itself to the
waitqueue.  The third thread, ditto.

The read completes.  In interrupt or maybe softirq context, the BIO
completion sets the uptodate bit, clears the locked bit and tests the
waiter bit.  Since the waiter bit is set, it walks the waitqueue looking
for waiters which match the locked bit and the folio (see
folio_wake_bit()).

So there's not _much_ of a thundering herd problem here.  Most likely
the waitqueue is just too damn long with a lot of threads waiting
for I/O.

[*] oversimplification; don't worry about it.