On May 27, joellilienkamp@xxxxxxxxxxxx wrote:
> We are observing a strange performance drop on resync, using the
> md/raid5 support in Linux 2.4.20. It is observable on at least two
> different disk controllers, so it doesn't appear to be a specific
> hardware limitation.
>
> To reproduce:
>
> Create a RAID5 device, /dev/md5, from four partitions, chunk-size
> 256k.
>
> Observe resync performance via 'cat /proc/mdstat'. We are seeing
> around 30MB/sec on one controller, and 50+MB/sec on another.
>
> Do a small read from the device, such as
> 'dd if=/dev/md5 of=/dev/null bs=1024 count=1'
>
> Re-observe resync performance via 'cat /proc/mdstat'. We are seeing
> it drop by 60%+, to 12MB/sec on one controller, and 15MB/sec on the
> other.

You will probably also see a message in the kernel logs like:

  raid5: switching cache buffer size, 4096 --> 1024

The raid5 stripe cache must match the request size used by any client.
It is PAGE_SIZE at start-up, but changes whenever it sees a request of
a different size. Reading from /dev/mdX uses a request size of 1K.
Most filesystems use a request size of 4K.

So, when you do the 'dd', the cache size changes and you get a
performance drop because of this. If you make a filesystem on the
array and then mount it, it will probably switch back to 4K requests
and resync should speed up.

NeilBrown

> The performance never recovers to its higher level, short of a reboot
> or of reconstructing the device. I do not observe similar issues with
> RAID1, though I have not tested it as thoroughly.
>
> I have scoured the various lists and postings to find anything about
> this, but haven't located anything. As always, any help is greatly
> appreciated.
>
> Joel Lilienkamp
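
Following the explanation above, a rough sketch of the workaround. The
commands below are illustrative, not taken from the thread; they assume
the same /dev/md5, a scratch mount point at /mnt, and ext2 as the
filesystem (a typical choice on 2.4 kernels):

  # Issue a single 4K-sized read; per the explanation above, the stripe
  # cache should switch back to a 4096-byte buffer size.
  dd if=/dev/md5 of=/dev/null bs=4096 count=1

  # Or take the suggested route: make a filesystem and mount it, so the
  # array sees 4K requests from then on.
  mke2fs /dev/md5        # assumes ext2; destroys any data on the array
  mount /dev/md5 /mnt

  # Re-check the resync speed.
  cat /proc/mdstat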