Re: How are you using Ceph?

On 18.09.2012 04:32, Sage Weil wrote:
> On Mon, 17 Sep 2012, Tren Blackburn wrote:
>> On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
>> Wiessner <f.wiessner@xxxxxxxxxxxxxxxxxxxxx> wrote:
>>>
>>> Hi,
>>>
>>> I use Ceph to provide storage via rbd for our virtualization cluster,
>>> delivering KVM-based high-availability virtual machines to my customers. I
>>> also use an rbd device with ocfs2 on top of it as shared storage for a
>>> 4-node webserver cluster - I do this because, unfortunately, cephfs is not
>>> ready yet ;)
>>>
>> Hi Florian;
>>
>> When you say "cephfs is not ready yet", what parts about it are not
>> ready? There are vague rumblings about that in general, but I'd love
>> to see specific issues. I understand multiple *active* mds's are not
>> supported, but what other issues are you aware of?
> 
> Inktank is not yet supporting it because we do not have the QA and general 
> hardening in place that would make us feel comfortable recommending it to 
> customers.  That said, it works pretty well for most workloads.  In 
> particular, if you stay away from snapshots and multi-MDS, you should 
> be quite stable.
> 
> The engineering team here is about to do a bit of a pivot and refocus on 
> the file system now that the object store and RBD are in pretty good 
> shape.  That will mean both core fs/mds stability and features as well as 
> integration efforts (NFS/CIFS/Hadoop).
> 
> 'Ready' is in the eye of the beholder.  There are a few people using the 
> fs successfully in production, but not too many.
> 

I tried it with multiple MDSs, because without them there is no redundancy and
the single MDS is a SPOF. I ran into things like empty directories that could
not be deleted: the delete failed with "directory not empty" even though the
directory was empty. I also saw kernel panics on 3.2 kernels with the kernel
ceph client, and crashes with ceph-fuse. It was unstable enough that I always
had to reboot a node after a while of use, for various reasons (ceph-fuse
crashing and leaving fuse in a broken state, kernel panics with the kernel
client, undeletable files/dirs, and no fsck to fix things up).

The last time I tried it, I simply untarred a kernel tree into a freshly
created cephfs mountpoint - and after 10 minutes there were errors again, like
being unable to delete directories. Since I do not know how to reset/reformat
only the cephfs part (without touching rbd!), I have stopped testing for now.
In that last test I also lost data - the data was not important and I had
backups, but it left me feeling uncomfortable about using cephfs...
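
If I understand the pool layout correctly, cephfs keeps its contents in the
'data' and 'metadata' rados pools, while rbd images live in the separate 'rbd'
pool, so something like this should wipe only the filesystem (syntax from
memory, untested - stop all MDS daemons first; pool ids from 'ceph osd dump'):

    rados rmpool data
    rados rmpool metadata
    rados mkpool data
    rados mkpool metadata
    ceph mds newfs <metadata-pool-id> <data-pool-id>

The 'rbd' pool and its images should be untouched by this.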

I also have not tried btrfs with ceph again since 11/2011, because of losing
data after reboots: when btrfs died, the filesystem was unmountable, and since
there was no fsck, all I could do was reformat and wait for ceph to rebuild.
After a power failure, not one btrfs partition survived and I lost all my test
data :/


So I think the first things to be done for cephfs would be to integrate some
sort of fsck, and the ability to format only cephfs without losing other rbd
images or rados data...
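
Until then, a quick way to convince yourself that the rbd data really lives
apart from the cephfs pools (all standard commands):

    rados lspools     # the default pools include: data, metadata, rbd
    rbd ls            # images in the 'rbd' pool
    rados df          # per-pool object counts and usage

If 'rbd ls' still shows the images after resetting the 'data'/'metadata'
pools, nothing on the rbd side was touched.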



-- 

Kind regards,

Florian Wiessner

Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila

fon.: +49 9282 9638 200
fax.: +49 9282 9638 205
24/7: +49 900 144 000 00 - 0,99 EUR/Min*
http://www.smart-weblications.de

--
Registered office: Naila
Managing Director: Florian Wiessner
Commercial register no.: HRB 3840, Amtsgericht Hof
*from German landlines; prices from mobile networks may differ

