Re: Deleting files with extended attributes is dead slow

On Wed, Aug 17, 2011 at 07:39:01PM +0200, Bernd Schubert wrote:
> On 08/17/2011 07:02 PM, Christoph Hellwig wrote:
> >On Wed, Aug 17, 2011 at 03:05:28PM +0200, Bernd Schubert wrote:
> >>>(squeeze-x86_64)fslab2:~# xfs_bmap -a /mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n
> >>>/mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n:
> >>>        0: [0..7]: 92304..92311
> >>
> >>(Sorry, I have no idea what "0: [0..7]: 92304..92311" is supposed to
> >>tell me).
> >
> >It means that you have an extent spanning 8 blocks for xattr
> >storage, which maps to physical blocks 92304 to 92311 in the filesystem.
> >
> >It sounds to me like your workload has a lot more than 256 bytes of
> >xattrs, or the underlying code is doing something rather stupid.
> 
> Well, the workload I described here is a controlled bonnie test, so
> there cannot be more than 256 bytes (unless there is a bug in the
> code, will double check later on).
> 
> >
> >>Looking at 'top' and 'iostat -x' output, I noticed we are actually
> >>not limited by io to disk, but CPU bound. If you should be
> >>interested, I have attached 'perf record -g' and 'perf report -g'
> >>output of the bonnie file create (create + fsetfattr() ) phase.
> >
> >It's mostly spending a lot of time on copying things into the CIL
> >buffers, which is expected and intentional as that allows for additional
> >parallelism.  If you'd switch the workload to multiple instances doing
> >the create in parallel you should be able to scale to better numbers.
> 
> I just tried two bonnies in parallel and that didn't improve
> anything. The FhGFS code has several threads anyway. But it would be
> good if the underlying file system didn't take all the CPU
> time...

XFS directory algorithms are significantly more complex than ext4.
They trade off CPU usage for significantly better layout and
scalability at large sizes. i.e. CPU costs less than IO so we burn
more CPU to reduce IO. You don't see the benefits of that until
directories start to get large (e.g. > 100k entries) and you are
doing cold cache lookups.
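
If you want to see that effect directly, a rough sketch like the
following (directory path and file count are made up, and the cache
drop needs root) compares warm and cold cache lookups in one large
directory:

    mkdir /mnt/xfs/bigdir
    for i in $(seq 1 200000); do touch /mnt/xfs/bigdir/f.$i; done
    sync
    time ls -l /mnt/xfs/bigdir/f.123456    # warm cache lookup
    echo 3 > /proc/sys/vm/drop_caches      # drop dentry/inode/page caches
    time ls -l /mnt/xfs/bigdir/f.123456    # cold cache lookup reads the dir btree from disk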

> >>xfs:
> >>mkfs.xfs -f -i size=512 -i maxpct=90  -l lazy-count=1 -n size=64k /dev/sdd

What is the output of this command?
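
Just re-running mkfs and pasting what it prints is enough. Alternatively,
on the already-mounted filesystem (mount point here is only an example),
something like this should dump the same geometry, including isize and
the attr version:

    xfs_info /mnt/xfs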

> >Do 64k dir blocks actually help you with the workload?  They also tend
> 
> Also just tested; with or without, it doesn't improve anything.

Right, 64k directory blocks make a difference on cold cache
traversals and lookups by flattening the btrees. They also make a
difference in create/unlink performance once you get over a few
million files in the one directory (once again due to reduced IO).

> >to do a lot of useless memcpys in their current form, although these
> >didn't show up on your profile.  Did you try using a larger inode size
> >as suggested in my previous mail?
> 
> I just tried it, and now that I understand the xfs_bmap output, it is
> interesting to see that an xattr size of up to 128 bytes does not need
> an extent + blocks, but 256 bytes gets one extent and 8 blocks
> even with an inode size of 2K. xfs_info tells me that isize=2048 was
> accepted. I didn't test any sizes between 128 and 256 bytes yet.
> Now while I can set the data/xattr size for the bonnie test to less
> than 256 bytes, that is not so easy with our real target FhGFS ;)

That tells me your filesystem is either not using dynamic attribute
fork offsets or that code is broken. The output of the above mkfs
command will tell us what attribute fork behaviour is expected, and
hence which of the two cases you are seeing.
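
For reference, here's a quick, untested sketch of how to reproduce the
in-inode vs. out-of-line attr behaviour on a scratch filesystem (device,
mount point and xattr name below are just examples):

    mkfs.xfs -f -i size=512 /dev/sdd
    mount /dev/sdd /mnt/scratch
    touch /mnt/scratch/small /mnt/scratch/large
    setfattr -n user.test -v "$(head -c 128 /dev/zero | tr '\0' a)" /mnt/scratch/small
    setfattr -n user.test -v "$(head -c 256 /dev/zero | tr '\0' a)" /mnt/scratch/large
    xfs_bmap -a /mnt/scratch/small    # "no extents" -> the attr stayed in the inode
    xfs_bmap -a /mnt/scratch/large    # an extent here -> the attr fork went out of line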

Also, what kernel are you testing on?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

