Hello,

I am facing a performance regression on squashfs. There are many
squashfs partitions on our board. I am running the operations below
on 90 squashfs partitions:

    for cnt in $(seq 0 9); do
        echo 3 > /proc/sys/vm/drop_caches
        echo "Loop ${cnt}:"
        time -v find /squashfs/part[0-9][0-9] | xargs -P 24 -i cat {} > /dev/null 2>/dev/null
        echo ""
    done

On Linux 4.18, I got the elapsed-time statistics below with the command
above (the find/xargs/cat loop runs 10 times):

    1:22.80 (1m + 22.80s)
    0:59.76
    1:01.43
    1:02.48
    1:03.03
    1:02.92
    1:03.19
    1:03.22
    1:03.26
    1:03.14

On Linux 5.10, there is a huge performance regression:

    5:48.69 (5m + 48.69s)
    5:52.99
    6:06.30
    6:01.43
    5:50.08
    6:26.59
    6:09.98
    6:04.72
    6:05.21
    6:21.49

With "git bisect", I found this regression is related to readahead.
After reverting c1f6925e1091 ("mm: put readahead pages in cache
earlier") and 8151b4c8bee4 ("mm: add readahead address space
operation") on Linux 5.10, the performance improved:

    1:37.16 (1m + 37.16s)
    1:04.18
    1:05.28
    1:06.07
    1:06.31
    1:06.58
    1:06.80
    1:06.79
    1:06.95
    1:06.61

I also found that disabling readahead with 9eec1d897139 ("squashfs:
provide backing_dev_info in order to disable read-ahead") helps:

    1:06.18 (1m + 6.18s)
    1:05.65
    1:06.34
    1:06.88
    1:06.52
    1:06.78
    1:06.61
    1:06.99
    1:06.60
    1:06.79

I have also tried the upstream Linux 5.18; see the results below:

    1:12.82 (1m + 12.82s)
    1:07.68
    1:08.94
    1:09.65
    1:09.87
    1:10.32
    1:10.47
    1:10.34
    1:10.24
    1:10.34

As we can see, even with readahead disabled there is still an extra
2~3s of overhead compared with Linux 4.18.

BTW, the two reverted commits above are from the "Change readahead API"
series; see the following link:
https://lore.kernel.org/all/20200414150233.24495-11-willy@xxxxxxxxxxxxx/T/#m22d6de881c24057b776758ae8e7f5d54e2db8026

I would appreciate your comments and inputs.

Regards,
Xiongwei
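
P.S. In case it helps anyone retrace the result, the bisect session
looked roughly like the sketch below; the build/boot/test cycle on the
board is elided, and the good/bad endpoints simply follow from the
kernel versions reported above:

    # Bisect between the known-good and known-bad kernels.
    git bisect start
    git bisect bad v5.10
    git bisect good v4.18
    # At each step: build the kernel, boot the board, run the
    # find/xargs/cat timing loop shown earlier, then mark the result:
    git bisect good    # or: git bisect bad
    # Repeat until git reports the first bad commit.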
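
P.P.S. For kernels without 9eec1d897139, something like the following
should also suppress readahead at run time, by zeroing the readahead
window of the backing block device; mmcblk0 is only a placeholder for
whatever device backs the squashfs partitions on a given board:

    # Placeholder device name; adjust for your board.
    echo 0 > /sys/block/mmcblk0/queue/read_ahead_kb
    # Or, equivalently, via blockdev (the value is in 512-byte sectors):
    blockdev --setra 0 /dev/mmcblk0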