On Thu, Feb 15, 2024 at 10:31 AM Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx> wrote:
>
> On Wed, 2024-02-14 at 17:02 -0800, Chris Li wrote:
> > We discovered that 1% of swap page faults take 100us+, while 50% of
> > swap faults complete in under 20us.
> >
> > Further investigation shows that a large portion of the time is
> > spent in the free_swap_slots() function for the long-tail case.
> >
> > The percpu cache of swap slots is freed in a batch of 64 entries
> > inside free_swap_slots(). These cached entries were accumulated
> > from previous page faults, which may not be related to the current
> > process.
> >
> > Doing the batch free in the page fault handler causes longer
> > tail latencies and penalizes the current process.
> >
> > When the swap slot cache is full, schedule an async free of the
> > cached swap slots in a work queue, before the next swap fault comes
> > in. If the next swap fault arrives before the async free gets a
> > chance to run, it will free all the cached swap slots directly in
> > the swap fault path, the same way as before.
> >
> > Testing:
> >
> > Chun-Tse ran some benchmarks on a Chromebook, showing that
> > zram_wait_metrics improves by about 15% with 80% and 95% confidence.
> >
> > I recently ran some experiments on about 1000 Google production
> > machines. They show that swapin latency in the long-tail
> > 100us - 500us bucket drops dramatically.
> >
> > platform  (100-500us)       (0-100us)
> > A         1.12% -> 0.36%    98.47% -> 99.22%
> > B         0.65% -> 0.15%    98.96% -> 99.46%
> > C         0.61% -> 0.23%    98.96% -> 99.38%
> >
> > Signed-off-by: Chris Li <chrisl@xxxxxxxxxx>
>
> Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>

Thank you so much for your review.

Chris
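
---

[Editor's note: the following is a minimal sketch of the deferred-free idea described in the quoted commit message above, not the actual patch. The field names, the SWAP_SLOTS_CACHE_SIZE threshold, and the helper names are illustrative stand-ins for the mm/swap_slots.c internals and are assumptions, not taken from the posted change.]

/*
 * Sketch only: defer the batch free of cached swap slots to a
 * workqueue so the page fault path does not pay the long-tail cost.
 * Assumes cache->async_free was set up elsewhere with
 * INIT_WORK(&cache->async_free, swap_slots_async_free) and
 * free_lock with spin_lock_init().
 */
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/swap.h>

#define SWAP_SLOTS_CACHE_SIZE 64

struct swap_slots_cache {
	spinlock_t		free_lock;
	int			n_ret;		/* slots waiting to be freed */
	swp_entry_t		slots_ret[SWAP_SLOTS_CACHE_SIZE];
	struct work_struct	async_free;	/* deferred batch free */
};

/* Runs from the workqueue, outside the page fault path. */
static void swap_slots_async_free(struct work_struct *work)
{
	struct swap_slots_cache *cache =
		container_of(work, struct swap_slots_cache, async_free);

	spin_lock_irq(&cache->free_lock);
	swapcache_free_entries(cache->slots_ret, cache->n_ret);
	cache->n_ret = 0;
	spin_unlock_irq(&cache->free_lock);
}

/* Called from the swap fault path when a slot is returned. */
static void free_swap_slot_sketch(struct swap_slots_cache *cache,
				  swp_entry_t entry)
{
	spin_lock_irq(&cache->free_lock);
	if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
		/*
		 * The cache is still full: the scheduled work has not
		 * run yet, so pay the batch-free cost synchronously,
		 * the same way as before.
		 */
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
		cache->n_ret = 0;
	}
	cache->slots_ret[cache->n_ret++] = entry;
	if (cache->n_ret == SWAP_SLOTS_CACHE_SIZE)
		/*
		 * Cache just became full: defer the expensive batch
		 * free to a workqueue so the current fault does not
		 * eat the tail latency.
		 */
		schedule_work(&cache->async_free);
	spin_unlock_irq(&cache->free_lock);
}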