Hi,

With filesystems like ext4, xfs and btrfs, what are the limits on directory capacity, and how well are they indexed?

The reason I ask is that inside of cachefiles, I insert fanout directories inside index directories to divide up the space, because ext2 had limits on directory sizes and (IIRC) did linear searches. For some applications, I need to be able to cache over 1M entries (render farm), and even a kernel tree has over 100k.

What I'd like to do is remove the fanout directories, so that for each logical "volume"[*] I have a single directory with all the files in it. But that means sticking massive numbers of entries into a single directory and hoping it (a) isn't too slow and (b) doesn't hit a capacity limit.

David

[*] What that means is netfs-dependent. For AFS it would be a single volume within a cell; for NFS it would be a particular FSID on a server, for example. It roughly corresponds to a thing that gets its own superblock on the client.
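
For illustration, the fanout scheme being referred to looks roughly like the sketch below: hash the object key and use the result to pick one of N fanout subdirectories inside the index directory, so no single directory's entry count grows unbounded. The names, layout and hash here are hypothetical, chosen only to show the idea; this is not the actual cachefiles code.

    /*
     * Minimal sketch of the fanout idea (hypothetical names, not the
     * real cachefiles on-disk layout): hash the object key and use the
     * low bits to pick one of FANOUT subdirectories inside the index
     * directory, bounding any one directory's entry count.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define FANOUT 256              /* number of fanout subdirectories */

    static uint32_t key_hash(const char *key)
    {
            uint32_t h = 2166136261u;        /* FNV-1a, purely illustrative */

            while (*key)
                    h = (h ^ (uint8_t)*key++) * 16777619u;
            return h;
    }

    /* Build "<index>/<fanout>/<key>" instead of "<index>/<key>". */
    static void fanout_path(char *buf, size_t len,
                            const char *index_dir, const char *key)
    {
            snprintf(buf, len, "%s/%02x/%s", index_dir,
                     (unsigned int)(key_hash(key) % FANOUT), key);
    }

    int main(void)
    {
            char path[256];

            fanout_path(path, sizeof(path),
                        "/var/cache/fscache/I0", "Erenderfarm0001");
            printf("%s\n", path);    /* e.g. /var/cache/fscache/I0/a7/Erenderfarm0001 */
            return 0;
    }

Removing the fanout level would mean dropping the "%02x" component and putting every key directly in the index directory, which is where the directory capacity and indexing questions above come in.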