I just looked at my raid6 check start/end times (before is 5.15.10-200
(Fedora), after is 5.16.11-200 (Fedora)):

md14 (7 disks):
  before: 2h20m, 2h19m, 2h16m, 2h18m, 2h34m, 2h28m, 2h27m
  after:  5h6m, 4h50m

md15 (7 disks):
  before: 3h14m
  after:  7h24m, 6h6m, 7h8m

md17 (4 disks):
  before: 6h11m, 6h36m, 6h27m, 6h8m, 6h16m
  after:  8h10m, 7h, 5h33m

So the regression appears to have affected my 4-disk array significantly
less than my 7-disk arrays.

On Tue, Mar 8, 2022 at 3:50 PM Song Liu <song@xxxxxxxxxx> wrote:
>
> On Mon, Mar 7, 2022 at 10:21 AM Larkin Lowrey <llowrey@xxxxxxxxxxxxxxxxx> wrote:
> >
> > I am seeing a 'check' speed regression between kernels 5.15 and 5.16.
> > One host with a 20 drive array went from 170MB/s to 11MB/s. Another host
> > with a 15 drive array went from 180MB/s to 43MB/s. In both cases the
> > arrays are almost completely idle. I can flip between the two kernels
> > with no other changes and observe the performance changes.
> >
> > Is this a known issue?
>
> I am not aware of this issue. Could you please share
>
> mdadm --detail /dev/mdXXXX
>
> output of the array?
>
> Thanks,
> Song
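
For anyone wanting to gather the same numbers, here is a minimal sketch
(assuming an array named /dev/md14, as in the timings above) of how a check
can be started and its speed observed via the standard md sysfs interface;
the exact kernel log wording may vary between versions:

  # report array layout, chunk size, and member devices
  mdadm --detail /dev/md14

  # start a data check and watch the current speed (KiB/s) and progress
  echo check > /sys/block/md14/md/sync_action
  cat /sys/block/md14/md/sync_speed
  cat /proc/mdstat

  # pull the check start/finish messages from the kernel log
  journalctl -k | grep -i 'data-check'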