Hi,

I'm running a 2.6.27.35 kernel that oopsed, and a previously synced
software RAID array (md3) started to resync. The XFS filesystem that was
on md3 turned out to be corrupted, so I started xfs_repair while the
resync was still in its first few percent. After about 10 hours the
resync was only at 11%:

  [==>..................]  resync = 11.4% (97538752/855220032) finish=2637544.6min speed=4K/sec

xfs_repair was still running, but had only reached the beginning of
phase 2 (there are 7 phases total); at that point it was using 0% CPU
and 45% of RAM, and the process was in the S state. So 10 hours in total
and I was nowhere near the end.

I killed xfs_repair, rebooted the machine, and got:

  [>....................]  resync = 2.9% (24841088/855220032) finish=105.6min speed=130948K/sec

After the resync finished in roughly that time, I ran xfs_repair again;
this time it took 9 minutes to go through all 7 phases. Total time:
~115 minutes.

The question now is: why is software RAID so slow when the device is
accessed with O_DIRECT by xfs_repair? (That's hch's guess at what the
problem is.) Is this a bug, or expected behaviour?

# cat /proc/mdstat
Personalities : [raid10] [raid1]
md3 : active raid10 sda4[0] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1]
      855220032 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]

md1 : active raid10 sde2[0] sdb2[5] sda2[4] sdd2[3] sdf2[2] sdc2[1]
      6000000 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]

md0 : active raid1 sde1[0] sdb1[5] sda1[4] sdd1[3] sdf1[2] sdc1[1]
      497856 blocks [6/6] [UUUUUU]

md2 : active raid10 sde3[0] sdb3[5] sda3[4] sdd3[3] sdf3[2] sdc3[1]
      74991168 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]

unused devices: <none>

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/
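For context on the 4K/sec figure: md deliberately throttles a resync down
to a global floor whenever it sees other I/O on the array, which is
consistent with xfs_repair's O_DIRECT reads starving the resync. A
minimal sketch for inspecting those knobs, assuming the standard
/proc/sys/dev/raid sysctls (present in 2.6.27; the 50000 value below is
just an illustrative number, not a recommendation):

```shell
#!/bin/sh
# Inspect md's global resync speed limits (values are in KB/s).
# speed_limit_min is the floor md drops to while the array is seeing
# other I/O -- e.g. xfs_repair's O_DIRECT reads -- so a low floor can
# leave a resync crawling at a few KB/s for as long as the repair runs.
for f in /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max; do
    if [ -r "$f" ]; then
        printf '%s = %s\n' "$f" "$(cat "$f")"
    else
        printf '%s not available on this kernel\n' "$f"
    fi
done

# As root, one could raise the floor so the resync is not starved, e.g.:
#   echo 50000 > /proc/sys/dev/raid/speed_limit_min
status=ok
```

This only changes how the contention is split between resync and normal
I/O; it does not explain why O_DIRECT access interacts so badly with the
resync in the first place, which is the actual question.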