FWIW, from a purely performance perspective Ceph usually looks pretty
fantastic on a fresh BTRFS filesystem. In fact it will probably
continue to look great until you do small random writes to large objects
(like, say, to blocks in an RBD volume). Then COW starts fragmenting the
objects into oblivion. I've seen sequential read performance drop
roughly 3x after 5 minutes of 4K random writes to the same RBD blocks.
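If you want to watch the mechanism happen outside of Ceph, here's a rough
Python sketch (the path and sizes below are just placeholders, not numbers
from any of our test rigs): lay a 4MB file down sequentially on a BTRFS
mount the way filestore would, then overwrite random 4K blocks and compare
the extent counts filefrag reports before and after.

    import os, random, re, subprocess

    PATH = "/mnt/btrfs-test/fake-rbd-object"   # hypothetical btrfs mount point
    SIZE = 4 * 1024 * 1024                     # 4MB, the default RBD object size

    def extent_count(path):
        # filefrag prints a summary line like "<path>: 57 extents found"
        out = subprocess.run(["filefrag", path],
                             capture_output=True, text=True).stdout
        return int(re.search(r"(\d+) extents? found", out).group(1))

    # lay the object down once, sequentially
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))
    os.sync()
    print("extents after sequential write:", extent_count(PATH))

    # now do a few thousand 4K random overwrites, RBD-style
    with open(PATH, "r+b") as f:
        for _ in range(5000):
            f.seek(random.randrange(SIZE // 4096) * 4096)
            f.write(os.urandom(4096))
            f.flush()
            os.fsync(f.fileno())
    print("extents after 4K random overwrites:", extent_count(PATH))

Every fsync'd overwrite gets COWed into a new extent, so the extent count
balloons and what used to be a sequential read turns into a pile of seeks.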
Autodefrag might help. A long time ago I recall Josef telling me it was
dangerous to use (I think it could run the node out of memory and
corrupt the FS), but it may be safer now; a quick sanity check for whether
it's actually enabled on your OSD mounts is sketched below. In any event,
we don't really do a lot of testing with BTRFS these days, as bluestore is
indeed the next-gen OSD backend. If you do decide to give either BTRFS
or ZFS a go with filestore, let us know how it goes. ;)
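If you do try autodefrag, something like this will tell you whether the
option actually stuck on the mounts (assuming the default /var/lib/ceph/osd
data path, adjust for your layout):

    # scan /proc/mounts for btrfs OSD mounts and report the autodefrag flag
    with open("/proc/mounts") as f:
        for line in f:
            dev, mnt, fstype, opts = line.split()[:4]
            if fstype == "btrfs" and mnt.startswith("/var/lib/ceph/osd"):
                state = "autodefrag" if "autodefrag" in opts.split(",") else "no autodefrag"
                print(mnt, "->", state)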
Mark
On 03/18/2016 02:42 PM, Heath Albritton wrote:
Neither of these file systems is recommended for production use underlying an OSD. The general direction for Ceph is to move away from having a file system at all.
That effort is called "bluestore" and is supposed to show up in the Jewel release.
-H
On Mar 18, 2016, at 11:15, Schlacta, Christ <aarcane@xxxxxxxxxxx> wrote:
Insofar as I've been able to tell, both BTRFS and ZFS provide similar
capabilities back to Ceph, and both are sufficiently stable for the
basic Ceph use case (single disk -> single mount point), so the
question becomes this: which actually provides better performance?
Which has the more highly optimized write path for Ceph? Does
anybody have a handful of side-by-side benchmarks? I'm more
interested in higher IOPS, since you can always scale out throughput,
but throughput is also important.
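In case it helps anyone generate numbers, here's the kind of quick-and-dirty
comparison I have in mind (the pool names are made up; each pool would need a
CRUSH rule pinning it to OSDs backed by one filesystem):

    import re, subprocess

    # hypothetical pool names, one per backing filesystem
    POOLS = {"btrfs": "bench-btrfs", "zfs": "bench-zfs"}

    def write_iops(pool, seconds=60):
        # 4KB writes, 16 in flight; --no-cleanup leaves objects for a later read pass
        out = subprocess.run(
            ["rados", "bench", "-p", pool, str(seconds), "write",
             "-b", "4096", "-t", "16", "--no-cleanup"],
            capture_output=True, text=True).stdout
        writes = int(re.search(r"Total writes made:\s+(\d+)", out).group(1))
        elapsed = float(re.search(r"Total time run:\s+([\d.]+)", out).group(1))
        return writes / elapsed

    for fs, pool in POOLS.items():
        print(f"{fs}: ~{write_iops(pool):.0f} write IOPS")

(This measures small-object write IOPS rather than small overwrites to big
objects; fio against an actual RBD image would be a closer match to a VM
workload.)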
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com