I am running around 6 md devices now, each with 16-18 drives, and after switching to devfs (with its huge long device names) I was getting segfault panics and other procs randomly blowing up at times. With all this configured, a single read of /proc/mdstat (cat /proc/mdstat) was returning just a few bytes over 4096.

A quick read of the /proc filesystem code (for the read path) suggests there is not much checking for buffer overruns. There seems to be some size limit of 3 * 1024 per read, but the code that actually performs the formatting for a read of /proc/mdstat lives in md.c, and I didn't see any checks there for running off the end of a page (where is the end?). That could clobber the next physical page in RAM, which could be backing pretty much any virtual page, in the kernel or in some random proc, and cause crashes later when that memory is referenced. All of this is very preliminary.

Also, while running a shell script in a loop doing a read of /proc/mdstat followed by sleep 1, I have noticed that a raidstop on an array will sometimes cause a (non-fatal) segfault: it just segfaults the cat reading /proc/mdstat as the array is shut down. Just FYI.

Again, this is all very preliminary, and I haven't had the time to work up smoking-gun proof or patches yet. For all I know, some of you may have found and fixed these by now.

--ghg
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html