Re: Ceph RBD, VMs, btrfs, COW, OSD journals, f2fs, SSDs

Hello,

Right now, none of the filesystems whose CoW features Ceph can use
(btrfs today, zfs in the near future) are recommended for production,
and CoW makes sense only for the filestore mount point, not for the
journal. I doubt f2fs can offer any performance advantage for the
journal over a raw device target, but it would be worth comparing it
to ext4. Where f2fs should really shine is pure SSD-based storage and
the upcoming kv filestore, compared to the existing best practice of
XFS.
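
For reference, the two journal placements being compared look roughly
like this in ceph.conf (a sketch only -- the device path, mount point,
and size are illustrative placeholders, not recommendations):

```ini
; Journal on a raw SSD partition: no filesystem in the I/O path at all.
[osd.0]
osd journal = /dev/disk/by-partlabel/osd0-journal

; Journal as a plain file on a filesystem-backed path (e.g. an
; f2fs- or ext4-formatted SSD partition mounted underneath it).
[osd.1]
osd journal = /var/lib/ceph/osd/ceph-1/journal
osd journal size = 10240   ; size in MB
```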

F2fs will probably reduce the wear-out factor compared to issuing
regular discard(), but a proper comparison would take a really long
time; I'd be happy if someone were able to run one.

So, if you want to get rid of the locking behavior when operating on
huge snapshots, you may try btrfs, but it barely fits any
near-production environment. XFS is far more stable, but an ill-timed
deletion of a very large snapshot can tear down pool I/O latencies
for a very long period.
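
If someone does want to experiment along those lines, the setup is
roughly the following (a sketch only: the device name is a
placeholder, and the discard mount option is an assumption to test
against ext4 rather than a tuned recommendation):

```shell
# Format a spare SSD partition with f2fs (destroys data on the device!)
mkfs.f2fs /dev/sdX1

# Mount it where the OSD expects its journal file; 'discard' enables
# online TRIM, so wear behaviour can be compared against ext4 + discard.
mkdir -p /var/lib/ceph/osd/ceph-0
mount -o discard /dev/sdX1 /var/lib/ceph/osd/ceph-0
```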


On Mon, Mar 3, 2014 at 1:19 AM, Joshua Dotson <josh@xxxxxxxxx> wrote:
> Hello,
>
> If I'm storing large VM images on Ceph RBD, and I have OSD journals on SSD,
> should I _not_ be using a copy on write file system on the OSDs?  I read
> that large VM images don't play well with COW (e.g. btrfs) [1].  Does Ceph
> improve this situation? Would btrfs outperform non-cow filesystems in this
> setting?
>
> Also, I'm considering placing my OSD journals on f2fs-formatted partitions
> on my Samsung SSDs for hardware resiliency (Samsung created both my SSDs and
> f2fs) [2].  F2FS uses copy on write [3].  Has anyone ever tried this?
> Thoughts?
>
> [1] https://wiki.archlinux.org/index.php/Btrfs#Copy-On-Write_.28CoW.29
> [2] https://www.usenix.org/legacy/event/fast12/tech/full_papers/Min.pdf
> [3] http://www.dslreports.com/forum/r27846667-
>
> Thanks,
> Joshua
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
