There are many things that everybody "knows" about optimizing PHP code. One of them is that one of the most expensive parts of the process is loading code off of disk and compiling it, which is why opcode caches are such a big performance boost. The corollary, of course, is that more files = more IO, and therefore more of a performance hit.

But... this is 'effin 2010. It's almost bloody 2011. Operating systems are smart. They already have 14 levels of caching built into them, from the hard drive micro-controller to RAM to CPU cache to the OS itself. I've heard from other people (who should know) that the IO cost of a file_exists() or other stat call is almost non-existent because a modern OS caches that, and that with OS-level file caching even reading small files off disk (the size of most PHP source files) is not as slow as we think.

Personally, I don't know. I am not an OS engineer, I haven't benchmarked such things, and I'm not really competent to do so. But it makes a huge impact on how one structures a large PHP program, because the trade-off between huge files full of unused code (which still has to be compiled) and lots of separate files read from disk (more IO) depends entirely on the actual speed of that IO and of compilation.

So... does anyone have any actual, hard data here? I don't mean "I think" or "in my experience". I am looking for hard benchmarks, profiling, or writeups of how OS file caching (Linux specifically, if it matters) works in 2010, not in 1998. Modernizing what "everyone knows" is important for the general community, and for the quality of our code.

--Larry Garfield
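
P.S. For anyone inclined to actually gather numbers, here is a rough micro-benchmark sketch of the two costs in question: repeated stat calls versus reading lots of small files. The directory path and iteration count are placeholders, not part of anything above; point it at a directory of real PHP source files and adjust as needed before trusting any results.

<?php
// Rough sketch: cost of stat calls vs. reading small files.
// $dir and $iterations are placeholders -- adjust to taste.
$dir        = __DIR__;   // directory of small .php files (placeholder)
$iterations = 10000;

$files = glob($dir . '/*.php');
if (!$files) {
    die("No PHP files found in $dir\n");
}

// 1. Repeated file_exists() on the same files. clearstatcache() forces
//    PHP to re-ask the OS each pass, so we measure the OS, not PHP's
//    own stat cache.
clearstatcache();
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    foreach ($files as $file) {
        file_exists($file);
    }
    clearstatcache();
}
$statTime = microtime(true) - $start;

// 2. Reading the same small files off disk (or out of the OS page cache).
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    foreach ($files as $file) {
        file_get_contents($file);
    }
}
$readTime = microtime(true) - $start;

printf("stat calls: %.4f sec, reads: %.4f sec (%d files x %d passes)\n",
       $statTime, $readTime, count($files), $iterations);

This says nothing about compilation cost, of course; that side would need a comparison of include() on one big file versus many small ones, with and without an opcode cache.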