On 1/27/14, 1:39 PM, Theodore Ts'o wrote:
>> It will depend on the length of the filenames.  But by my calculations,
>> for average 28-char filenames, it's closer to 30 million.
>
> Note that there will be some very significant performance problems
> well before a directory gets that big.  For example, just simply doing
> a readdir + stat on all of the files in that directory (or a readdir +
> unlink, etc.) will very likely result in extremely unacceptable
> performance.

Yep, that's the max possible, not the max useable. ;)

(Although, I'm not sure in practice what max useable looks like, TBH).

-Eric

> So if you can find some other way of avoiding allowing the file system
> that big (i.e., using a real database instead of trying to use a file
> system as a database, etc.), I'd strongly suggest that you consider
> those alternatives.
>
> Regards,
>
> - Ted

--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
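[As a rough illustration of the "readdir + stat" pattern Ted is warning about — this is a hypothetical sketch, not code from the thread: one readdir pass followed by a stat() per entry means O(n) syscalls, and on an ext4 htree directory the names come back in hash order rather than inode order, so the per-entry stats turn into scattered seeks on a cold cache.]

```python
import os

def readdir_plus_stat(path):
    """Scan a directory the way 'ls -l' does: one readdir pass,
    then one stat() per entry.  The cost is O(n) syscalls; on a
    multi-million-entry directory the stat()s also hit inodes in
    effectively random order, which is where the pain comes from."""
    count = 0
    total_bytes = 0
    with os.scandir(path) as it:
        for entry in it:
            st = os.stat(entry.path, follow_symlinks=False)
            count += 1
            total_bytes += st.st_size
    return count, total_bytes
```

On a directory with tens of millions of entries, even this trivial loop can take hours on spinning media, which is the practical "max useable" limit the thread is circling around.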