> Maybe you could try removing the limit and see what actually happens
> when you set a ridiculously large size?

Actually, I tried (unintentionally) something similar. The storage I
have consists of 7 RAID-6 arrays, so I tried to increase
stripe_cache_size to 32768 on each array... of course, while writing to
the arrays.

The first 3 arrays have 10, 10 and 9 HDDs. Since the stripe cache takes
one 4KiB page per member disk per cached stripe, by the third array
about 3.6GiB had been allocated out of the 4GiB the PC has. At this
point the PC was completely unresponsive, but still working, i.e. it
was still writing to the array; even ssh did not answer in time.
Nevertheless, it was not dead or locked, just extremely slow: once the
writing finished, the PC worked again as before. I guess there was a
lot of swapping going on...

In any case, an upper limit seems necessary, but it should be
consistent with the available RAM. It does not seem to help to limit
each array independently; there should be a "global" limit, so that the
_sum_ of all the caches does not exceed it.

Hope this helps,
bye,

-- 
piergiorgio
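
P.S. To make the arithmetic and the proposed global limit concrete,
here is a minimal Python sketch (a userspace illustration only, not
kernel code). It assumes the commonly cited rule that the stripe cache
pins one page per member disk per cached stripe; the array names, the
helper names and the 25%-of-RAM budget are made up for the illustration
(256 is md's default stripe_cache_size).

# Rough sketch: estimate md stripe-cache memory and enforce the kind
# of "global" limit proposed above. Assumes
#   memory = stripe_cache_size * PAGE_SIZE * nr_disks.

PAGE_SIZE = 4096  # bytes; the usual page size on x86

def stripe_cache_bytes(stripe_cache_size, nr_disks):
    """Approximate RAM pinned by one array's stripe cache."""
    return stripe_cache_size * PAGE_SIZE * nr_disks

# The first three arrays from the test above: 10, 10 and 9 member
# disks, each with stripe_cache_size raised to 32768.
arrays = {"md0": 10, "md1": 10, "md2": 9}
total = sum(stripe_cache_bytes(32768, n) for n in arrays.values())
print("three arrays at 32768: %.2f GiB" % (total / 2**30))  # ~3.62 of 4 GiB

# A global limit would reject a per-array increase whenever the _sum_
# of all the caches would exceed some budget; the 25%-of-RAM figure
# here is made up purely for illustration.
RAM = 4 * 2**30
GLOBAL_LIMIT = RAM // 4

def try_set_cache(sizes, md, new_size, nr_disks):
    """Accept the new stripe_cache_size only if the global sum stays
    under GLOBAL_LIMIT; otherwise keep the old value."""
    proposed = sum(
        stripe_cache_bytes(new_size if name == md else s, nr_disks[name])
        for name, s in sizes.items()
    )
    if proposed > GLOBAL_LIMIT:
        return False  # would exceed the global budget
    sizes[md] = new_size
    return True

sizes = {name: 256 for name in arrays}  # 256 is md's default
print(try_set_cache(sizes, "md0", 32768, arrays))  # False: ~1.27 GiB > 1 GiB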