> From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On
> Behalf Of Matt Hayward
>
> It seems possible that doing this for 20000 files might create a
> problem with the length of the parameter...

It also seems to me that multiple servers would be in order. You'd need something pretty beefy to swing that many processes and open files. Which OS and file system are you using? The original email mentions posixaio; assuming Linux, I've had better throughput with Linux's libaio. Also beware of one process forking that many subprocesses. I'm not saying it can't be done, but I'd distribute that kind of load over at least 10 initiators, and maybe more.

Have you done any back-of-the-envelope calculations for where the bottlenecks might be? A 10 Gb connection at best moves only about 1.2 GB/s after overhead, which divided by 20k streams gives only about 60 kB/s per stream (have I got that right?), and again, that doesn't account for any per-stream overhead. Likewise, a single 3.5" rotating drive generally won't deliver more than about 150 random write IOPS with write cache enabled, and 1/2 to 2/3 of that with the cache disabled. Even with large sequential blocks at the user level, with that many streams into a file system the workload can take on the appearance of almost pure random access.

z!
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
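To make the "distribute over ~10 initiators with libaio" suggestion concrete, here is a rough sketch of a per-initiator fio job file. All values (directory, block size, iodepth, the 2000-files-per-client split) are hypothetical, chosen only to illustrate splitting 20000 files across 10 clients; `nrfiles` and `openfiles` are the standard fio options for spreading I/O over many files while capping simultaneously open descriptors:

```ini
; Hypothetical job file for ONE of ~10 initiators.
; Each client handles a 2000-file slice of the 20000 files.
[global]
ioengine=libaio      ; Linux native AIO, per the suggestion above
direct=1             ; bypass page cache so results reflect the device
rw=write
bs=1M
iodepth=4
nrfiles=2000         ; this client's share of the 20000 files
openfiles=128        ; cap concurrently open files per job

[slice]
directory=/mnt/test  ; hypothetical mount point
numjobs=1
```

Keeping `openfiles` well below the per-process file-descriptor limit avoids the open-file exhaustion concern raised above; each initiator only ever holds a bounded number of descriptors regardless of `nrfiles`.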
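The per-stream arithmetic above can be checked with a few lines of Python; the ~1.2 GB/s usable figure for a 10 Gb link is the same rough overhead assumption used in the text:

```python
# Back-of-the-envelope check of the per-stream bandwidth claim above.
# Assumes ~1.2 GB/s usable on a 10 Gb/s link after framing/TCP overhead.
link_bytes_per_sec = 1.2e9
streams = 20_000

per_stream = link_bytes_per_sec / streams  # bytes/s per stream
print(f"{per_stream / 1e3:.0f} kB/s per stream")  # -> 60 kB/s
```

So the 60 kB/s figure holds, and it is an optimistic ceiling: any per-stream protocol or file-system overhead pushes the real number lower.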