Thanks Lionel. We are using btrfs compression, and it has also been stable in our cluster.
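For reference, a minimal sketch of how compression is typically passed to the OSD btrfs mounts through ceph.conf; the compress=lzo value is only an example, not necessarily what any given cluster should use:

    [osd]
    # illustrative only: mount options applied when the OSD mounts its btrfs filestore
    osd mount options btrfs = rw,noatime,compress=lzo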
Currently another minor problem with btrfs fragmentation is that we sometimes see the btrfs-transaction kernel thread (btrfs-transacti in process listings) pause I/O on the whole OSD node for several seconds, impacting all OSDs on the server, especially when doing recovery / backfill.
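As an illustration only (values we have not validated, and whether this helps with the btrfs-transaction pauses is something we still need to confirm), recovery / backfill impact can usually be reduced by throttling it with the standard OSD options, either in ceph.conf or injected at runtime with ceph tell osd.* injectargs:

    [osd]
    # illustrative values: lower per-OSD backfill / recovery concurrency
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1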
However, I wonder whether an OSD restart taking 30 minutes may become a problem for maintenance.
I will share our results once we have tested the different settings.
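For reference, a sketch of the first change we would try, following your suggestion to start with snap (the other two options left at their btrfs defaults, shown commented out):

    [osd]
    # test with the snapshot-based sync disabled first, as suggested
    filestore btrfs snap = false
    # defaults for btrfs backends, listed here only for completeness
    # filestore btrfs clone range = true
    # filestore journal parallel = true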
BR,
Luke
From: Lionel Bouton [lionel-subscription@xxxxxxxxxxx]
Sent: Saturday, January 31, 2015 2:29 AM
To: Luke Kao; ceph-users@xxxxxxxx
Subject: Re: btrfs backend with autodefrag mount option

On 01/30/15 14:24, Luke Kao wrote:
We used autodefrag but it didn't help: performance degrades over time. One possibility raised in previous discussions here is that BTRFS's autodefrag isn't smart enough when snapshots are heavily used, as is the case with Ceph OSDs by default.

There are some tunings available that we have yet to test:
    filestore btrfs snap
    filestore btrfs clone range
    filestore journal parallel
All are enabled by default for BTRFS backends. snap is probably the first you might want to disable, then check how autodefrag and defrag behave. It might be possible to use snap and defrag; BTRFS was quite stable for us (but all our OSDs are on systems with at least 72GB RAM and enough CPU power, so memory wasn't much of an issue).

Best regards,

Lionel Bouton