https://bugzilla.kernel.org/show_bug.cgi?id=217965

--- Comment #22 from Eyal Lebedinsky (bugzilla@xxxxxxxxxxxxxx) ---

I now ran a test on 6.4.15. I copied a segment of the tree I have issues with:
7.2GB in 77,972 files. It took just over 9 minutes for sync to drain the dirty
cache, and the array ran like this (I am watching the kB_wrtn/s column):

Device              tps  kB_read/s  kB_wrtn/s  kB_dscd/s  kB_read   kB_wrtn  kB_dscd
18:15:01  md127  423.00       4.00    7001.20       0.00       40     70012        0
18:15:11  md127  112.30      14.80    9689.60       0.00      148     96896        0
18:15:21  md127   28.70       3.60     964.80       0.00       36      9648        0
18:15:31  md127  321.90       4.00    1340.40       0.00       40     13404        0
18:15:41  md127   18.20       0.00     123.20       0.00        0      1232        0
18:15:51  md127   26.80      10.00    2062.00       0.00      100     20620        0
18:16:01  md127    2.60       0.00      56.40       0.00        0       564        0
18:16:11  md127    2.80       0.00      82.40       0.00        0       824        0
18:16:21  md127    6.10       0.00     150.80       0.00        0      1508        0
18:16:31  md127   86.00      16.80    4978.00       0.00      168     49780        0
18:16:41  md127   48.90      10.80    4505.60       0.00      108     45056        0
18:16:51  md127   36.10       6.80    1498.00       0.00       68     14980        0
18:17:01  md127    3.20       0.40      64.40       0.00        4       644        0
18:17:11  md127    3.40       0.80      84.80       0.00        8       848        0
18:17:21  md127    3.70       0.80     283.60       0.00        8      2836        0
18:17:31  md127   32.90       6.00    1950.00       0.00       60     19500        0
18:17:41  md127  907.50      24.00   15638.40       0.00      240    156384        0
18:17:51  md127  698.00       9.60   10476.40       0.00       96    104764        0
18:18:01  md127  146.80       0.40    4796.00       0.00        4     47960        0
18:18:11  md127  163.00       0.80    1328.80       0.00        8     13288        0
18:18:21  md127  149.50       1.20    1140.00       0.00       12     11400        0
18:18:31  md127   46.50       1.60     545.20       0.00       16      5452        0
18:18:41  md127  335.90      14.40    9770.00       0.00      144     97700        0
18:18:51  md127  119.90       4.40    3868.80       0.00       44     38688        0
18:19:01  md127  315.70      25.60   11422.80       0.00      256    114228        0
18:19:11  md127  285.80       1.20    4142.40       0.00       12     41424        0
18:19:21  md127  239.00       4.40    3888.00       0.00       44     38880        0
18:19:31  md127  116.30       1.20    1752.80       0.00       12     17528        0
18:19:41  md127  166.60       8.00    8886.00       0.00       80     88860        0
18:19:51  md127  357.20       5.60    6146.80       0.00       56     61468        0
18:20:01  md127  118.10       1.20    1670.40       0.00       12     16704        0
18:20:11  md127   66.00       6.40     802.80       0.00       64      8028        0
18:20:21  md127  139.40       4.40    1641.20       0.00       44     16412        0
18:20:31  md127   44.40       2.40     632.40       0.00       24      6324        0
18:20:41  md127  183.50      13.60    4048.40       0.00      136     40484        0
18:20:51  md127  114.40      28.40    8346.40       0.00      284     83464        0
18:21:01  md127  152.20      27.20   15576.00       0.00      272    155760        0
18:21:11  md127   75.30       1.60    1146.40       0.00       16     11464        0
18:21:21  md127  236.20      18.40   13491.20       0.00      184    134912        0
18:21:31  md127  226.70       3.60    3237.60       0.00       36     32376        0
18:21:41  md127  152.70       4.00    2048.00       0.00       40     20480        0
18:21:51  md127   92.10       7.60    1907.20       0.00       76     19072        0
18:22:01  md127  142.50       8.00    1921.60       0.00       80     19216        0
18:22:11  md127  137.30       5.60    2307.60       0.00       56     23076        0
18:22:21  md127  124.10       6.00    1511.20       0.00       60     15112        0
18:22:31  md127  105.90       4.00    1888.80       0.00       40     18888        0
18:22:41  md127  166.30       2.00    2073.20       0.00       20     20732        0
18:22:51  md127  649.90       6.80   10615.20       0.00       68    106152        0
18:23:01  md127 1253.70       8.80  175698.40       0.00       88   1756984        0
18:23:11  md127   88.40       2.00   66840.40       0.00       20    668404        0
18:23:21  md127    0.00       0.00       0.00       0.00        0         0        0

Is writing at a few MB/s reasonable for an array with disks that can top
200 MB/s? Note the burst at the end, when a significant chunk of the data was
finally written out.
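For anyone wanting to reproduce this, the test above amounts to roughly the
following (a sketch only; the source and destination paths and the exact
iostat invocation are my assumptions, not part of the original run -- the
timestamped per-device lines could equally come from sar -d):

    # copy a problem segment of the tree (~7.2GB, 77,972 files) onto the array
    cp -a /data/problem-tree /srv/md127/test-copy

    # force the dirty page cache out to the array and time how long that takes
    time sync

    # in another terminal, watch the array's write rate (kB_wrtn/s column),
    # one timestamped sample every 10 seconds
    iostat -d -t 10 /dev/md127

    # rough average for the 6.4.15 run: ~7.2GB in just over 9 minutes is
    # about 13 MB/s overall, well below what a single member disk can sustain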
Now running on 6.5.10 I see a very different story:

Device              tps  kB_read/s  kB_wrtn/s  kB_dscd/s  kB_read   kB_wrtn  kB_dscd
18:45:53  md127    2.60       0.00      10.40       0.00        0       104        0
18:46:03  md127    5.40       0.00      21.60       0.00        0       216        0
18:46:13  md127    5.40       0.00      21.60       0.00        0       216        0
18:46:23  md127    5.30       0.00      21.60       0.00        0       216        0
18:46:33  md127    5.20       0.00      20.80       0.00        0       208        0
18:46:43  md127    5.30       0.00      23.60       0.00        0       236        0
18:46:53  md127    5.10       0.00      44.00       0.00        0       440        0
18:47:03  md127    2.70       0.00      21.60       0.00        0       216        0
18:47:13  md127    5.30       0.00      26.80       0.00        0       268        0
18:47:23  md127    5.40       0.00      25.20       0.00        0       252        0
18:47:33  md127    5.20       0.00      21.60       0.00        0       216        0
18:47:43  md127    5.30       0.00      21.20       0.00        0       212        0
18:47:53  md127    5.30       0.00      22.40       0.00        0       224        0
18:48:03  md127    5.40       0.00      24.00       0.00        0       240        0
18:48:13  md127    2.60       0.00      11.20       0.00        0       112        0
18:48:23  md127    5.30       0.00      22.40       0.00        0       224        0
18:48:33  md127    5.30       0.00      23.20       0.00        0       232        0
18:48:43  md127    5.20       0.00      24.40       0.00        0       244        0
18:48:53  md127    5.30       0.00      22.00       0.00        0       220        0
18:49:03  md127    6.20       3.60      23.20       0.00       36       232        0
18:49:13  md127    3.40       0.00      20.80       0.00        0       208        0
18:49:23  md127    5.20       0.00      20.80       0.00        0       208        0
18:49:33  md127    5.30       0.00      22.00       0.00        0       220        0
18:49:43  md127    5.30       0.00      21.60       0.00        0       216        0
18:49:53  md127    5.20       0.00      33.20       0.00        0       332        0

I expect this run to take much longer to complete, possibly hours (only about
20% drained in the last 30 minutes).

In both cases the flusher thread was running at 100% CPU. I do have the full
logs, though.

HTH
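As a rough way to watch the drain independently of iostat, the outstanding
dirty data can be read straight from /proc/meminfo (a sketch; the 30-second
sample interval is arbitrary):

    # show how much dirty data is still waiting to be written back
    watch -n 30 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

    # or log it with timestamps for later comparison against the iostat output
    while sleep 30; do
        printf '%s ' "$(date +%T)"
        awk '/^Dirty:/ {print $2 " kB dirty"}' /proc/meminfo
    done

    # rough extrapolation from the numbers above: 20% drained in 30 minutes
    # suggests around 2.5 hours to flush the whole 7.2GB at this rate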