Re: Ceph fs stability

On 02.08.2012 19:10, Niko! wrote:
> Hi!
> 
> we are using ceph 0.48 on three nodes to provide rbd images for four further kvm
> nodes (not kernel mapped) with no big issues, and we would like to mount the ceph
> fs on the kvm nodes just to store the xml virtual machine definitions, so that
> they are immediately available in case a host crashes (we are running a multi-mds
> configuration). Since ceph fs is not production ready, what are the possible
> problems? Could the fs corrupt the rbd pool, or would the damage be limited to the
> data/metadata pools (acceptable for us)? Could the fs hang the entire cluster or
> the kvm client nodes?
> 



Don't use it for now; it is too buggy - you sometimes cannot delete or rename
files, or the fuse client crashes.


What we do instead is use a mapped rbd device with ocfs2 - that works stably for us.
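
For reference, a rough sketch of that setup (the image name, size, slot count and
mount point below are just examples, and the ocfs2 cluster stack - o2cb plus
/etc/ocfs2/cluster.conf - has to be configured on every kvm node first):

  # once: create the shared rbd image
  rbd create vmconfig --size 1024

  # on every kvm node: map it with the kernel rbd client
  rbd map vmconfig

  # once, from a single node: format with one slot per kvm node
  mkfs.ocfs2 -N 4 -L vmconfig /dev/rbd0

  # on every kvm node: mount it where the xml definitions should live
  mount -t ocfs2 /dev/rbd0 /srv/vm-xml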

-- 

Kind regards,

Florian Wiessner

Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila

fon.: +49 9282 9638 200
fax.: +49 9282 9638 205
24/7: +49 900 144 000 00 - 0,99 EUR/Min*
http://www.smart-weblications.de

--
Registered office: Naila
Managing director: Florian Wiessner
Commercial register: HRB 3840, Amtsgericht Hof
*from German landlines; prices from mobile networks may differ
--

