You are right, Dave.
It is better with XFS, and all inodes+attrs are stored at the start of
the drive with inode32. ~40K IOPS with ext4 and only 10K IOPS with XFS.
Good.

2016-02-16 4:35 GMT+01:00 Dave Chinner <dchinner@xxxxxxxxxx>:
> On Mon, Feb 15, 2016 at 04:18:28PM +0100, David Casier wrote:
>> Hi Dave,
>> 1TB is very wide for SSD.
>
> It fills from the bottom, so you don't need 1TB to make it work
> in a similar manner to the ext4 hack being described.
>
>> Example with only 10GiB:
>> https://www.aevoo.fr/2016/02/14/ceph-ext4-optimisation-for-filestore/
>
> It's a nice toy, but it's not something that is going to scale reliably
> for production. That caveat at the end:
>
> "With this model, filestore rearranges the tree very
> frequently: +40 I/O every 32 objects link/unlink."
>
> Indicates how bad the IO patterns will be when modifying the
> directory structure, and says to me that it's not a useful
> optimisation at all when you might be creating several thousand
> files/s on a filesystem. That will end up IO bound, SSD or not.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> dchinner@xxxxxxxxxx

--
________________________________________________________

Best regards,

David CASIER
3B Rue Taylor, CS20004
75481 PARIS Cedex 10
Paris

Direct line: 01 75 98 53 85
Email: david.casier@xxxxxxxx
________________________________________________________
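
For a sense of scale on the caveat Dave quotes above, here is a minimal
back-of-the-envelope sketch in Python that turns "+40 I/O every 32 objects
link/unlink" into extra IOPS; the create rates below are purely hypothetical
figures for illustration, not measurements from the thread.

    # Rough estimate of the extra IOPS implied by the filestore split cost
    # quoted above: +40 I/O every 32 objects linked/unlinked.
    # The create rates are assumptions for illustration only.

    EXTRA_IO_PER_SPLIT = 40   # extra I/Os per tree rearrangement (from the caveat)
    OBJECTS_PER_SPLIT = 32    # tree is rearranged every 32 link/unlink operations

    def extra_iops(creates_per_second: float) -> float:
        """Extra directory-maintenance IOPS at a given object create rate."""
        return creates_per_second / OBJECTS_PER_SPLIT * EXTRA_IO_PER_SPLIT

    if __name__ == "__main__":
        for rate in (1_000, 5_000, 10_000):   # hypothetical creates/s
            print(f"{rate:>6} creates/s -> ~{extra_iops(rate):,.0f} extra IOPS "
                  "for tree rearrangement")

At several thousand creates per second this already adds thousands of IOPS
just for rearranging the directory tree, which matches Dave's expectation
that such a workload ends up IO bound, SSD or not.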