Eric-san, Ted-san; Thank you very much. I am happy now.

On 2014/01/28 3:02, Eric Sandeen wrote:
>
> It will depend on the length of the filenames. But by my calculations,
> for average 28-char filenames, it's closer to 30 million.
>
> There are (4096-32)/8 indices per block, or 508.
> There are 2 levels, so 508*508=258064 leaf blocks.
> The length of each record for 28-char names would be 32 bytes.
> So you can fit 4096/32 = 128 entries per leaf block.
> 258064 leaf blocks * 128 entries/block is 33,032,192 entries.

I understand.

> I recently made a spreadsheet to calculate this.
> I'm not sure if I am doing google docs sharing and protection
> correctly, but this might work:
>
> https://docs.google.com/spreadsheet/ccc?key=0AtdHTZsZ8XoYdE1IUXlDb1RXQkdPM3F4YWpfNGhMbFE&usp=sharing#gid=0

Great! It is useful for us.

On 2014/01/28 4:39, Theodore Ts'o wrote:
>
> Note that there will be some very significant performance problems
> well before a directory gets that big. For example, just simply doing
> a readdir + stat on all of the files in that directory (or a readdir +
> unlink, etc.) will very likely result in extremely unacceptable
> performance.

Of course, I am aware of that issue. But we already have such a directory:

$ \ls -f | wc
1933497 1933497 14968002

This directory is a mail archive. :-(

On 2014/01/28 4:48, Eric Sandeen wrote:
>
> Yep, that's the max possible, not the max useable. ;)

Yes, I wanted to know the limitation.

Again, thank you very much.

Best Regards,
--
Masato minmin Minda <minmin@xxxxxxxxxx>
Japan Registry Services Co., Ltd. (JPRS)
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
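For reference, the capacity arithmetic Eric walks through above can be reproduced as a short calculation. This is only a sketch of the figures quoted in the thread: it assumes a 4096-byte block size, a 2-level htree, 32 bytes of index-block overhead with 8-byte index entries, and the 32-byte directory record length Eric gives for the 28-char-name case.

```python
# Sketch of the htree capacity estimate from the thread
# (assumptions: 4 KiB blocks, 2-level htree, figures as quoted above).

BLOCK_SIZE = 4096

# (4096 - 32) / 8 index entries per index block -> 508
indices_per_block = (BLOCK_SIZE - 32) // 8

# Two levels of index blocks -> 508 * 508 leaf blocks
leaf_blocks = indices_per_block ** 2

# Per the thread, each record for this name length takes 32 bytes,
# so 4096 / 32 = 128 entries fit in one leaf block.
record_len = 32
entries_per_leaf = BLOCK_SIZE // record_len

max_entries = leaf_blocks * entries_per_leaf
print(indices_per_block, leaf_blocks, entries_per_leaf, max_entries)
# -> 508 258064 128 33032192
```

Varying `record_len` for other filename lengths reproduces the rest of Eric's spreadsheet.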