Re: How are you using Ceph?

On Tue, 18 Sep 2012, Tren Blackburn wrote:
> On Mon, Sep 17, 2012 at 7:32 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> > On Mon, 17 Sep 2012, Tren Blackburn wrote:
> >> On Mon, Sep 17, 2012 at 5:05 PM, Smart Weblications GmbH - Florian
> >> Wiessner <f.wiessner@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I use Ceph to provide storage via rbd for our virtualization cluster,
> >> > delivering KVM-based high-availability virtual machines to my customers.
> >> > I also use an rbd device with ocfs2 on top of it as shared storage for
> >> > a 4-node webserver cluster - I do this because, unfortunately, cephfs
> >> > is not ready yet ;)
> >> >
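For anyone curious, a minimal sketch of that second setup might look like
this (the image name and size are made up, it assumes the ocfs2 cluster
stack is already configured on all four nodes, and exact rbd syntax may
vary between versions):

    # create a shared rbd image (in the default pool) and map it on each node
    rbd create webdata --size 102400
    rbd map webdata

    # format once with ocfs2, then mount it on every node in the cluster
    mkfs.ocfs2 /dev/rbd0
    mount -t ocfs2 /dev/rbd0 /srv/www
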
> >> Hi Florian;
> >>
> >> When you say "cephfs is not ready yet", what parts of it are not ready?
> >> There are vague rumblings about that in general, but I'd love to see
> >> specific issues. I understand multiple *active* MDSs are not supported,
> >> but what other issues are you aware of?
> >
> > Inktank is not yet supporting it because we do not yet have the QA and
> > general hardening in place that would make us feel comfortable
> > recommending it to customers.  That said, it works pretty well for most
> > workloads.  In particular, if you stay away from snapshots and multi-mds,
> > you should be quite stable.
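(To be concrete about the snapshot part: cephfs snapshots are taken by
creating a directory under the special .snap directory, so the thing to
avoid for now is simply something like

    # creates a cephfs snapshot of somedir -- skip this for the time being
    mkdir /mnt/ceph/somedir/.snap/mysnap

where the mount point and names are of course just examples.)
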
> With regard to multi-mds, do you mean multiple *active* MDSs? I have 3
> MDSs built, and only 1 active. The others are there in case there's a
> failure. Does that scenario work?

Correct.  One active and one (or more) standby is the default (and 
recommended) behavior.  You need to explicitly tell the monitor to make 
multiple MDSs active... don't do that (yet!).
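
For reference, the knob involved is max_mds; a sketch of what to check and
what to avoid (command syntax may differ slightly between versions):

    # check mds state: should show one active with the rest in standby
    ceph mds stat

    # this is the command NOT to run for now -- it asks the monitors to
    # promote standbys until two MDSs are active
    ceph mds set_max_mds 2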

> >
> > The engineering team here is about to do a bit of a pivot and refocus on
> > the file system now that the object store and RBD are in pretty good
> > shape.  That will mean both core fs/mds stability and features as well as
> > integration efforts (NFS/CIFS/Hadoop).
> That's awesome news. The file system component is very important to me.
> 
> >
> > 'Ready' is in the eye of the beholder.  There are a few people using the
> > fs successfully in production, but not too many.
> I'll keep you up to date as our testing progresses! I'm in the process
> of building out a new test cluster.

Great!

sage


[Index of Archives]     [CEPH Users]     [Ceph Large]     [Information on CEPH]     [Linux BTRFS]     [Linux USB Devel]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]
  Powered by Linux