Re: pg scrub and auto repair in hammer

> On 28.06.2016 at 09:43, Lionel Bouton <lionel-subscription@xxxxxxxxxxx> wrote:
> 
> Hi,
> 
> On 28/06/2016 at 08:34, Stefan Priebe - Profihost AG wrote:
>> [...]
>> Yes, but at least BTRFS is still not working for Ceph due to
>> fragmentation. I even tested a 4.6 kernel a few weeks ago, but it
>> doubles its I/O after a few days.
> 
> BTRFS autodefrag does not work over the long term. That said, BTRFS
> itself works far better than XFS on our cluster (noticeably better
> latencies). Since not having checksums wasn't an option, we wrote and
> are using this:
> 
> https://github.com/jtek/ceph-utils/blob/master/btrfs-defrag-scheduler.rb
> 
> This actually saved us from two faulty disk controllers that were
> infrequently corrupting data in our cluster.
> 
> Also mandatory for performance:
> filestore btrfs snap = false
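
[Editor's note: a minimal sketch of how such a setting might sit in ceph.conf; the [osd] section placement is an assumption based on common filestore tuning and is not confirmed by the thread.]

```ini
[osd]
# Disable btrfs snapshot-based journaling in the filestore
# (reported above as mandatory for performance on btrfs OSDs)
filestore btrfs snap = false
```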

This sounds interesting. How long have you been using this method? What kind of workload do you have? How did you measure the performance and latency? Which kernel do you use with btrfs?

Greets,
Stefan
> 
> Lionel

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



