On Wed, Mar 27, 2024 at 4:09 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
>
> On Wed, Mar 27, 2024 at 8:06 AM Kairui Song <ryncsn@xxxxxxxxx> wrote:
> >
> > From: Kairui Song <kasong@xxxxxxxxxxx>
> >
> > Interestingly, the major performance overhead of synchronous swapin is
> > actually from the workingset node updates: synchronous swapin keeps
> > adding single folios into an xa_node, so the node is no longer a
> > shadow node and has to be removed from shadow_nodes; the folio is then
> > removed very shortly afterwards, making the node a shadow node again,
> > so it has to be added back to shadow_nodes.
>
> Hi Kairui,
>
> Thank you for clarifying this. I'm unsure how it relates to
> SWP_SYNCHRONOUS_IO. Does this observation apply universally to all
> instances where __swap_count(entry) == 1, even on devices not using
> SYNCHRONOUS_IO?

Hi Barry,

I was testing using zero pages on ZRAM, so the performance issue is much
more obvious.

For non-SYNCHRONOUS_IO devices, the swap cache is not dropped immediately
unless swap is more than half full, so a shadow node will be removed from
shadow_nodes on the first swapin, but usually won't be added and removed
repeatedly.

I think the logic of "never drop swapcache even if swap count is 1", then
suddenly switching to "always drop swap cache when swap count is 1" once
swap is half full, is not a good solution... Maybe some generic
optimization can be applied to that part too.