Re: Btrfs defragmentation

Hi,

On 05/06/15 20:07, Timofey Titovets wrote:
> 2015-05-06 20:51 GMT+03:00 Lionel Bouton <lionel+ceph@xxxxxxxxxxx>:
>> Is there something that would explain why initially Btrfs creates the
>> 4MB files with 128k extents (32 extents / file) ? Is it a bad thing for
>> performance ?
> This kind of behaviour is why I asked you about compression:
> "You can use filefrag to locate heavily fragmented files (may not work
> correctly with compression)."
> https://btrfs.wiki.kernel.org/index.php/Gotchas
>
> filefrag shows each compressed chunk as a separate extent, but the
> chunks can still be laid out linearly on disk. This is a limitation of
> filefrag =\

Hmm, I see. This could explain why we rarely see the number of extents
go down: only when data is replaced with incompressible data does Btrfs
stop compressing it, which lets the reported extent count actually
decrease.

This should not have much impact on the defragmentation process or on
performance: we check whether extents are written sequentially next to
each other and don't count such layouts as a cost for file access. This
is why these files aren't defragmented even when we ask for it, and why
our tool reports a low overhead for them.
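The heuristic described above can be sketched roughly as follows. This is
only an illustration of the idea, not code from our actual tool: the
function name and the (physical_start, length) extent representation are
assumptions, chosen to mirror what `filefrag -v` reports.

```python
# Sketch: count only the extent boundaries where the next extent is NOT
# physically contiguous with the previous one. Extents are modelled as
# (physical_start, length) pairs in filesystem blocks.

def seek_cost(extents):
    """Number of non-contiguous transitions between consecutive extents."""
    cost = 0
    for (prev_start, prev_len), (next_start, _) in zip(extents, extents[1:]):
        if prev_start + prev_len != next_start:
            cost += 1  # a real seek: extents are not adjacent on disk
    return cost

# A compressed 4MB file split into 32 x 128K extents that happen to be
# physically adjacent: filefrag reports 32 extents, but the seek cost is 0.
contiguous = [(1000 + i * 32, 32) for i in range(32)]  # 128K = 32 x 4K blocks
print(seek_cost(contiguous))   # 0 -> no point defragmenting

scattered = [(1000, 32), (5000, 32), (9000, 32)]
print(seek_cost(scattered))    # 2 -> genuinely fragmented
```

Under this model, the 4MB files with 32 compressed extents laid out back to
back get a cost of zero, which matches the low overhead our tool reports.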

Best regards,

Lionel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
