Re: getdents - ext4 vs btrfs performance

2012/3/4 Jacek Luczak <difrost.kernel@xxxxxxxxx>:
> 2012/3/3 Jacek Luczak <difrost.kernel@xxxxxxxxx>:
>> 2012/3/2 Chris Mason <chris.mason@xxxxxxxxxx>:
>>> On Fri, Mar 02, 2012 at 03:16:12PM +0100, Jacek Luczak wrote:
>>>> 2012/3/2 Chris Mason <chris.mason@xxxxxxxxxx>:
>>>> > On Fri, Mar 02, 2012 at 11:05:56AM +0100, Jacek Luczak wrote:
>>>> >>
>>>> >> I've put both to the test. The subject is acp and spd_readdir used with
>>>> >> tar, all on ext4:
>>>> >> 1) acp: http://91.234.146.107/~difrost/seekwatcher/acp_ext4.png
>>>> >> 2) spd_readdir: http://91.234.146.107/~difrost/seekwatcher/tar_ext4_readir.png
>>>> >> 3) both: http://91.234.146.107/~difrost/seekwatcher/acp_vs_spd_ext4.png
>>>> >>
>>>> >> The acp looks much better than spd_readdir but directory copy with
>>>> >> spd_readdir decreased to 52m 39sec (30 min less).
>>>> >
>>>> > Do you have stats on how big these files are, and how fragmented they
>>>> > are?  For acp and spd to give us this, I think something has gone wrong
>>>> > at writeback time (creating individual fragmented files).
>>>>
>>>> How big? Which files?
>>>
>>> All the files you're reading ;)
>>>
>>> filefrag will tell you how many extents each file has, any file with
>>> more than one extent is interesting.  (The ext4 crowd may have better
>>> suggestions on measuring fragmentation).
>>>
>>> Since you mention this is a compile farm, I'm guessing there are a bunch
>>> of .o files created by parallel builds.  There are a lot of chances for
>>> delalloc and the kernel writeback code to do the wrong thing here.
>>>
>>
> [Most of the files are byte- or kilobyte-sized]
>>
>> All files scanned: 1978149
>> Files fragmented: 313 (0.015%) where 11 have 3+ extents
>> Total size of fragmented files: 7GB (~13% of dir size)
>
> BTRFS: None of the files, according to filefrag, are fragmented - all
> fit into one extent.
>
>> tar cf on fragmented files:
>> 1) time: 7sec
>> 2) sw graph: http://91.234.146.107/~difrost/seekwatcher/tar_fragmented.png
>> 3) sw graph with spd_readdir:
>> http://91.234.146.107/~difrost/seekwatcher/tar_fragmented_spd.png
>> 4) both on one:
>> http://91.234.146.107/~difrost/seekwatcher/tar_fragmented_pure_spd.png
>
> BTRFS: tar on ext4 fragmented files
> 1) time: 6sec
> 2) sw graph: http://91.234.146.107/~difrost/seekwatcher/tar_fragmented_btrfs.png
>
>> tar cf of fragmented files disturbed with [40,50) K files (in total
>> 4373 files). K files before fragmented M files:
>> 1) size: 7.2GB
>> 2) time: 1m 14sec
>> 3) sw graph: http://91.234.146.107/~difrost/seekwatcher/tar_disturbed.png
>> 4) sw graph with spd_readdir:
>> http://91.234.146.107/~difrost/seekwatcher/tar_disturbed_spd.png
>> 5) both on one:
>> http://91.234.146.107/~difrost/seekwatcher/tar_disturbed_pure_spd.png
>
> BTRFS: tar on [40,50) K and ext4 fragmented
> 1) time: 56sec
> 2) sw graph: http://91.234.146.107/~difrost/seekwatcher/tar_disturbed_btrfs.png
>
> New test I've included - randomly selected files:
> - size 240MB
> 1) ext4 (time: 34sec) sw graph:
> http://91.234.146.107/~difrost/seekwatcher/tar_random_ext4.png
> 2) btrfs (time: 55sec) sw graph:
> http://91.234.146.107/~difrost/seekwatcher/tar_random_btrfs.png
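
For what it's worth, the per-file extent counts quoted above can be gathered
directly with the FIEMAP ioctl, which is what filefrag uses under the hood.
A minimal sketch, assuming Python 3 on Linux and a filesystem that implements
FIEMAP; the constant and struct layout follow linux/fiemap.h:

```python
# Count a file's extents via FS_IOC_FIEMAP (the mechanism behind filefrag).
# Sketch only: assumes Linux and a filesystem implementing FIEMAP;
# tmpfs, for instance, rejects the ioctl with EOPNOTSUPP.
import fcntl
import os
import struct

FS_IOC_FIEMAP = 0xC020660B    # _IOWR('f', 11, struct fiemap)
FIEMAP_FLAG_SYNC = 0x0001     # flush delalloc first, so counts are stable

# struct fiemap header: u64 fm_start, u64 fm_length, u32 fm_flags,
# u32 fm_mapped_extents, u32 fm_extent_count, u32 fm_reserved
_FIEMAP_HDR = "=QQIIII"

def extent_count(path):
    """Return the number of extents backing `path`."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # fm_extent_count = 0 asks the kernel only for the extent count,
        # which it reports back in fm_mapped_extents.
        hdr = bytearray(struct.pack(_FIEMAP_HDR, 0, 2**64 - 1,
                                    FIEMAP_FLAG_SYNC, 0, 0, 0))
        fcntl.ioctl(fd, FS_IOC_FIEMAP, hdr)
        return struct.unpack(_FIEMAP_HDR, hdr)[3]
    finally:
        os.close(fd)

if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:
        n = extent_count(p)
        print("%s: %d extent%s" % (p, n, "" if n == 1 else "s"))
```

Any file reporting more than one extent counts as fragmented, matching the
filefrag-based numbers quoted above.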

Yet another test. The original issue is in the directory data
handling. In my case, a lot of directories are introduced by the extra
.svn subdirectories. Let's see, then, what tar on those directories
looks like.

Number of .svn directories: 61605
1) Ext4:
 - tar time: 10m 53sec
 - sw tar graph: http://91.234.146.107/~difrost/seekwatcher/svn_dir_ext4.png
 - sw tar graph with spd_readdir:
http://91.234.146.107/~difrost/seekwatcher/svn_dir_spd_ext4.png
2) Btrfs:
 - tar time: 4m 35sec
 - sw tar graph: http://91.234.146.107/~difrost/seekwatcher/svn_dir_btrfs.png
 - sw tar graph with ext4:
http://91.234.146.107/~difrost/seekwatcher/svn_dir_btrfs_ext4.png

IMO this is not a writeback issue (well, it could be, but then it
would mean writeback is broken in general), and it's not
fragmentation. Sorting the entries returned by readdir helps a bit,
but ext4 is still far behind btrfs.
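
For reference, the trick behind spd_readdir (and acp) mentioned above can be
sketched in a few lines: slurp the whole directory first, sort the entries by
inode number, and only then open the files. A minimal sketch, assuming
Python 3; spd_readdir itself does the same thing in C behind LD_PRELOAD:

```python
# Sketch of the spd_readdir/acp trick: ext4's readdir returns names in
# htree hash order, which scatters reads across the disk; sorting by
# inode number approximates on-disk order and makes the I/O mostly
# sequential.
import os

def scandir_by_inode(path):
    """Return directory entries sorted by inode number."""
    # DirEntry.inode() exposes d_ino from the dirent, no extra stat() needed
    entries = list(os.scandir(path))
    entries.sort(key=lambda e: e.inode())
    return entries

if __name__ == "__main__":
    for entry in scandir_by_inode("."):
        print(entry.inode(), entry.name)
```

A tar or cp driven by this ordering reads files roughly in the order they sit
on disk, which is what cuts the seek counts in the spd_readdir seekwatcher
graphs above.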

Any ideas? Is this an issue, or are things just the way they are and
one needs to live with it?

-Jacek
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

