I figured I would follow proper protocol and put PATCH in the subject line. As stated in my previous email, this patch fixes my problem of a kernel oops with a large number of software RAID arrays (I have 27). Below is the text from the original mail.

-------------------------------------------

With the help of Martin Bligh, Kevin Fleming, and Randy Dunlap, it looks like this problem is caused by the output in /proc/mdstat growing large enough to overflow the 4 KB page boundary.

With Kevin's patch from last week and some help from Randy, I patched the md code in Red Hat 2.4.18-26.7.x to use the seq_file interface for mdstat. I've attached the patch. As with Kevin's patch, it touches almost everything in drivers/md, as well as adding the necessary methods to fs/seq_file.c and include/linux/seq_file.h.

I'm currently testing raid1 and raid0 and it seems to work well. No panics yet!!! :) I currently have 26 RAID1 arrays and a big RAID0 stripe across them, and I'm running some I/O tests on it now to make sure that it is stable. I haven't tested the raid5, linear, or multipath code, so someone might want to test that before using it in production. :)

As Kevin indicated in his mail, I can post the patch to a web site if attachments are a problem.

Thanks to everyone for their help.

Regards,
Andy.
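For anyone curious what the conversion involves: the point of seq_file is that the kernel iterates over the arrays one record at a time instead of formatting everything into a single 4 KB page, so the output can be arbitrarily long. A rough sketch of the generic pattern is below; the md_seq_* names and NR_ARRAYS are illustrative only, not the exact symbols in the attached patch, which you should read for the real details.

```c
/* Sketch of the generic seq_file pattern, NOT the actual patch.
 * The iterator hands one "array" at a time to ->show(), so the
 * output is never formatted into a single 4 KB page at once. */
#include <linux/seq_file.h>
#include <linux/proc_fs.h>
#include <linux/fs.h>

#define NR_ARRAYS 27  /* illustrative: number of md arrays */

/* Encode index+1 as the iterator cookie so it is never NULL. */
static void *md_seq_start(struct seq_file *seq, loff_t *pos)
{
	return (*pos < NR_ARRAYS) ? (void *)(long)(*pos + 1) : NULL;
}

static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	++*pos;
	return (*pos < NR_ARRAYS) ? (void *)(long)(*pos + 1) : NULL;
}

static void md_seq_stop(struct seq_file *seq, void *v)
{
	/* release any locks taken in ->start() */
}

static int md_seq_show(struct seq_file *seq, void *v)
{
	long i = (long)v - 1;

	/* one record per call; seq_file handles buffering/overflow */
	seq_printf(seq, "md%ld : (status would go here)\n", i);
	return 0;
}

static struct seq_operations md_seq_ops = {
	.start = md_seq_start,
	.next  = md_seq_next,
	.stop  = md_seq_stop,
	.show  = md_seq_show,
};

static int md_seq_open(struct inode *inode, struct file *file)
{
	return seq_open(file, &md_seq_ops);
}

static struct file_operations md_seq_fops = {
	.open    = md_seq_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = seq_release,
};
```

The proc entry's file_operations are then pointed at md_seq_fops instead of a one-shot read_proc handler, which is why the patch touches so many files.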
Attachment:
md-seq_file-2.4.18-26.7.x.patch