On 10/04/2007 05:11 PM, David Miller wrote:
> From: Chuck Ebbert <cebbert@xxxxxxxxxx>
> Date: Thu, 04 Oct 2007 17:02:17 -0400
>
>> How do you simulate reading 100TB of data spread across 3000 disks,
>> selecting 10% of it using some criterion, then sorting and
>> summarizing the result?
>
> You repeatedly read zeros from a smaller disk into the same amount of
> memory, and sort that as if it were real data instead.

You've just replaced 3000 concurrent streams of data with a single
stream. That won't do much to test the memory allocator's ability to
serve many concurrent users.
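
As a rough illustration of what I mean (a minimal sketch, not anything
from this thread -- the thread count, chunk size and iteration count
are made-up values), something along these lines keeps many readers
alive at once, each holding its own buffer, so the allocator actually
sees concurrent users instead of one stream:

/* Sketch: N concurrent readers of /dev/zero, each with its own buffer. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NSTREAMS 32            /* stand-in for "many disks", illustrative */
#define CHUNK    (1 << 20)     /* 1 MiB per read, illustrative */

static void *stream_reader(void *arg)
{
	long id = (long)arg;
	int fd = open("/dev/zero", O_RDONLY);
	char *buf = malloc(CHUNK);

	if (fd < 0 || !buf) {
		perror("stream setup");
		return NULL;
	}

	/* Keep this thread's allocation live while it reads, so the
	 * allocator is serving all NSTREAMS users at the same time. */
	for (int i = 0; i < 100; i++) {
		if (read(fd, buf, CHUNK) != CHUNK)
			break;
	}

	printf("stream %ld done\n", id);
	free(buf);
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t tid[NSTREAMS];

	for (long i = 0; i < NSTREAMS; i++)
		pthread_create(&tid[i], NULL, stream_reader, (void *)i);
	for (int i = 0; i < NSTREAMS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Reading zeros from one device in a single loop exercises none of that
concurrency, which is the point I was getting at.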