Re: Deleting files with extended attributes is dead slow

On 08/17/2011 07:02 PM, Christoph Hellwig wrote:
On Wed, Aug 17, 2011 at 03:05:28PM +0200, Bernd Schubert wrote:
(squeeze-x86_64)fslab2:~# xfs_bmap -a /mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n
/mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n:
        0: [0..7]: 92304..92311

(Sorry, I have no idea what "0: [0..7]: 92304..92311" is supposed to
tell me.)

It means that you have an extent spanning 8 blocks for xattr
storage, mapping to physical blocks 92304 to 92311 in the filesystem.

It sounds to me like your workload has a lot more than 256 bytes of
xattrs, or the underlying code is doing something rather stupid.

Well, the workload I described here is a controlled bonnie test, so there cannot be more than 256 bytes of xattr data per file (unless there is a bug in the code; I will double-check later on).
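
A quick way to double-check would be to dump the attributes of one of the bonnie-created files, e.g. for the file from the xfs_bmap output above (getfattr is from the attr package):

# show names and values of all xattrs the tool can read
getfattr -d -m - /mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n
# and the attr fork layout of the same file, as above
xfs_bmap -a /mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n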


Looking at 'top' and 'iostat -x' output, I noticed we are actually
not limited by I/O to disk, but CPU bound. In case you are
interested, I have attached 'perf record -g' and 'perf report -g'
output of the bonnie file create (create() + fsetxattr()) phase.

It's mostly spending a lot of time on copying things into the CIL
buffers, which is expected and intentional, as that allows for additional
parallelism.  If you'd switch the workload to multiple instances doing
the create in parallel, you should be able to scale to better numbers.

I just tried two bonnies in parallel and that didn't improve anything. The FhGFS code has several threads doing this anyway. But it would be good if the underlying file system didn't take all the CPU time...
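
"Two bonnies in parallel" just means two instances in separate directories, roughly like the following sketch (the directories are examples, the -n values match the 100:256:256/10 line below, and note that stock bonnie++ does not do the fsetxattr() part by itself):

# two instances in separate directories; -s 0 skips the sequential
# I/O tests, -u root is needed when running as root
bonnie++ -u root -d /mnt/xfs/b1 -s 0 -n 100:256:256:10 &
bonnie++ -u root -d /mnt/xfs/b2 -s 0 -n 100:256:256:10 &
wait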


     100:256:256/10 37026  91 +++++ +++ 43691  93 35960  92 +++++ +++ 40708  92
Latency              4328us     765us    2960us     527us     440us    1075us
1.96,1.96,fslab2,1,1313594619,,,,,,,,,,,,,,100,256,256,,10,37026,91,+++++,+++,43691,93,35960,92,+++++,+++,40708,92,,,,,,,4328us,765us,2960us,527us,440us,1075us


xfs:
mkfs.xfs -f -i size=512 -i maxpct=90  -l lazy-count=1 -n size=64k /dev/sdd

Do 64k dir blocks actually help you with the workload?  They also tend
to do a lot of useless memcpys in their current form, although these
didn't show up in your profile.  Did you try using a larger inode size
as suggested in my previous mail?

Also just tested: with or without 64k directory blocks, it doesn't improve anything.
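
For the larger inode size test below, the mkfs line is roughly the one above with -i size bumped, e.g.:

mkfs.xfs -f -i size=2048 -i maxpct=90 -l lazy-count=1 -n size=64k /dev/sdd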

I just tried, and now that I understand the xfs_bmap output it is interesting to see that an xattr of up to 128 bytes does not need an extent plus blocks, while 256 bytes already takes one extent and 8 blocks, even with an inode size of 2K. xfs_info tells me that isize=2048 was accepted. I haven't tested any sizes between 128 and 256 bytes yet. And while I can set the data/xattr size for the bonnie test to less than 256 bytes, that is not so easy with our real target FhGFS ;)
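
A quick way to narrow down the threshold would be something like this (mount point and attribute name are just examples):

# probe xattr sizes between 128 and 256 bytes to find where the
# attribute stops fitting into the 2K inode
for size in 128 160 192 224 256; do
    f=/mnt/xfs/probe-$size
    touch $f
    setfattr -n user.test -v "$(head -c $size /dev/zero | tr '\0' 'a')" $f
    echo "=== $size bytes ==="
    xfs_bmap -a $f
done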


Btw, the create() + fsetxattr() rate with 128-byte xattr data is between 13000 and 17000, so thanks to inlining much better than with 256-byte xattrs.


Thanks,
Bernd

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

