oops, mangled the first part of that reply a bit. Need my morning
coffee. :)
On 01/30/2015 07:56 AM, Mark Nelson wrote:
About a year ago I was talking to j
On 01/30/2015 07:24 AM, Luke Kao wrote:
Dear ceph users,
Has anyone tried adding the autodefrag mount option when using btrfs as
the OSD storage?
Sort of. About a year ago I was looking into it, but Josef told me not
to use either defrag or autodefrag (especially when lots of snapshots
are used). There is/was a bug that can make the box go OOM and keel over.
I think fixing it was on the roadmap, but I haven't heard whether anything
ever made it in.
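For reference, if you do decide to experiment with it anyway, the option just
goes wherever your btrfs OSD filesystems get mounted. A rough sketch, assuming
a FileStore OSD with the usual data path (device names and paths here are only
illustrative, adjust for your deployment):

  # /etc/ceph/ceph.conf: have ceph pass autodefrag when it mounts btrfs OSDs
  [osd]
      osd mount options btrfs = rw,noatime,autodefrag

  # or, if the OSD is mounted via /etc/fstab, add it to the options there:
  /dev/sdb1  /var/lib/ceph/osd/ceph-0  btrfs  rw,noatime,autodefrag  0 0

Either way the OSD filesystem has to be remounted (and the OSD restarted)
before the new options take effect.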
In a previous discussion it was mentioned that btrfs OSD startup becomes very
slow after the OSD has been in use for some time, so we are thinking that
adding autodefrag may help. We will add it on our test cluster first to see
if there is any difference.
Please kindly share your experience if you have any, thanks.
With OSDs on BTRFS, we saw better performance across the board vs XFS
initially on a fresh deploy. After ~30 minutes of small random writes
to RBD volumes, everything got incredibly fragmented and sequential
reads degraded by about 200%. Presumably this is due to COW. Even if
defrag were safe, there'd be a lot of data to clean up...
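If anyone wants to put a number on the fragmentation on their own OSDs, one
rough way (sketch only; the FileStore data path below is an assumption, adjust
for your osd id and deployment) is to look at extent counts on the object files:

  # list the most fragmented object files under a FileStore OSD's data dir
  find /var/lib/ceph/osd/ceph-0/current -type f -exec filefrag {} + \
      | sort -t: -k2 -n | tail

  # btrfs also has a manual recursive defrag, though per Josef's warning
  # above it may not be safe to run with lots of snapshots around:
  #   btrfs filesystem defragment -r /var/lib/ceph/osd/ceph-0

A long tail of files with very high extent counts after the random-write
workload is a pretty good sign you're seeing the same COW fragmentation.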
Luke Kao
MYCOM OSI
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com