Re: OCFS2 or GFS2 for cluster filesystem?


 



Tom Verdaat writes:

>    3. Is anybody doing this already and willing to share their experience?

Relatively, yes. Before Ceph I used drbd+ocfs2 (with the o2cb stack); now both of
those servers run inside VMs with the same OCFS2 setup. It behaves much the same
as it did over DRBD, but just remember to turn the rbd cache off (RTFM).
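Something like this in ceph.conf on the client side should do it (just a sketch,
assuming qemu attaches the disks through librbd and reads your ceph.conf):

    [client]
        rbd cache = false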

One problem I have not solved yet: the VMs reboot when Ceph is working hard (one
of the nodes restarting, or just recovering; even with size=3 on 3 nodes the data
itself stays available). IMHO it is a problem with OCFS2's internal heartbeat
(heartbeat=local), but that is just my opinion. I will run some tests on this
soon.
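One knob I want to look at is the o2cb heartbeat dead threshold; as a sketch
(the file location varies per distro, and the value is just an example, not a
tested recommendation), in /etc/sysconfig/o2cb:

    # roughly, the number of 2-second heartbeat iterations
    # before a node is declared dead and self-fences
    O2CB_HEARTBEAT_THRESHOLD=61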

But even in case of a reboot or of killing qemu, OCFS2 stays clean (usually fsck
finds nothing to fix), so data integrity is good.
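To verify without changing anything, a read-only check can be run inside the VM
(the device name here is just an example):

    fsck.ocfs2 -n /dev/vdb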

I have not tried GFS2, but it is generally considered slower than OCFS2. Also,
GFS2 is "too RedHat-centric": it is absent from some distros, and the userspace
tools are hard to install yourself because of the large number of dependencies.

OCFS2 also comes in two flavors: the O2CB stack (internal) and the user stack.
O2CB is good and lives on the kernel side, but it has no byte-range locking. The
user stack does support byte-range locking through its userspace layer, but it
also requires a lot of userspace components (packaged at least in Oracle Linux
and SuSE; I use Gentoo with simple heartbeat, no corosync, so I don't want to do
that much extra work).

So, if you don't need byte-range locking, I suggest using OCFS2 with the simple
O2CB stack.
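
With the O2CB stack the cluster config is one static file, /etc/ocfs2/cluster.conf,
identical on every node. A minimal two-node sketch (the names, IPs and cluster
name are just examples; node names must match the hostnames):

    cluster:
        node_count = 2
        name = ocfs2

    node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = vm1
        cluster = ocfs2

    node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = vm2
        cluster = ocfs2

Then bring the stack online with the o2cb init script ("/etc/init.d/o2cb online"
or your distro's equivalent) before mounting.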

-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/




