CephFS roadmap (was Re: NAS on RBD)

Hi Sage,

Thanks for weighing in on this directly and allaying some concerns.

It would be good to get a better understanding of where the rough
edges are - if deployers have some knowledge of those then they can
be worked around to some extent. E.g., for our use-case it may be
that, whilst Inktank/Red Hat won't provide support for CephFS, we are
better off using it in a tightly controlled fashion (e.g., no
snapshots, a restricted set of native clients acting as the
presentation layer with everyone else coming in via Samba & Ganesha,
no dynamic metadata tree/s, ???) where we're less likely to run into
issues.
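
For concreteness, the sort of lock-down I'm picturing is roughly the
below - purely a sketch that drives the ceph CLI from Python, assumes
a filesystem named "cephfs", and uses the current "ceph fs set" form
(older releases spell these as "ceph mds set ..."), so treat the
details as illustrative rather than definitive:

    #!/usr/bin/env python
    """Sketch: lock a CephFS filesystem down to a conservative config."""
    import subprocess

    FS_NAME = "cephfs"  # hypothetical filesystem name for illustration

    def ceph(*args):
        """Run a ceph CLI command and return its output."""
        return subprocess.check_output(["ceph"] + list(args))

    # Single active MDS only, i.e. no dynamic metadata tree
    # partitioning across multiple ranks.
    ceph("fs", "set", FS_NAME, "max_mds", "1")

    # Refuse creation of new snapshots.
    ceph("fs", "set", FS_NAME, "allow_new_snaps", "false")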

Relatedly, given there is no fsck, how would one go about backing up
the metadata in order to facilitate DR? Is there even a way for that
to make sense given the decoupling of the data & metadata pools...?
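
The best I can come up with myself is just dumping the metadata pool
and the MDS journal wholesale, roughly as below - again only a sketch
driving the CLI from Python, assuming the metadata pool is literally
named "metadata" and that a raw rados export is even restorable in a
form the MDS would accept, which is exactly the part I'm unsure about:

    #!/usr/bin/env python
    """Sketch: naive CephFS metadata backup via pool and journal dumps."""
    import subprocess
    import time

    METADATA_POOL = "metadata"  # hypothetical: the fs's metadata pool name
    STAMP = time.strftime("%Y%m%d-%H%M%S")

    def run(*cmd):
        """Run a command, raising CalledProcessError on failure."""
        subprocess.check_call(list(cmd))

    # Dump every object in the metadata pool to a file that
    # 'rados import' can read back.
    run("rados", "-p", METADATA_POOL, "export",
        "metadata-pool-%s.bin" % STAMP)

    # Also keep a copy of the MDS journal via cephfs-journal-tool.
    run("cephfs-journal-tool", "journal", "export",
        "mds-journal-%s.bin" % STAMP)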

Cheers,

On 10 September 2014 03:47, Sage Weil <sweil at redhat.com> wrote:
> On Tue, 9 Sep 2014, Blair Bethwaite wrote:
>> > Personally, I think you're very brave to consider running 2PB of ZoL
>> > on RBD. If I were you I would seriously evaluate the CephFS option. It
>> > used to be on the roadmap for ICE 2.0 coming out this fall, though I
>> > noticed it's not there anymore (??!!!).
>>
>> Yeah, it's very disappointing that this was silently removed. And it's
>> particularly concerning that this happened after the Red Hat
>> acquisition. I'm an ICE customer and sure would have liked some input
>> there, for exactly the reason we're discussing.
>
> A couple quick comments:
>
> 1) We have more developers actively working on CephFS today than we have
> ever had before.  It is a huge priority for me and the engineering team to
> get it into a state where it is ready for general purpose production
> workloads.
>
> 2) As a scrappy startup, we at Inktank were very fast and loose about
> what went into the product roadmap and what claims we made.  Red Hat is
> much more cautious about forward-looking statements in their enterprise
> products.  Do not read too much into the presence or absence of CephFS
> in the ICE roadmap.  Also note that Red Hat Storage today is shipping a
> fully production-ready and stable distributed file system (GlusterFS).
>
> 3) We've recently moved to CephFS in the sepia QA lab for archiving all
> of our test results.  This dogfooding exercise has helped us identify
> several general usability issues and rough edges that have resulted in
> changes for Giant.  We identified and fixed two kernel client bugs that
> went into 3.16 or thereabouts.  The biggest problem we had, once we
> finally tracked it down, turned out to be an old bug in an old kernel
> client that we had forgotten was still mounting the cluster.  Overall,
> I'm pretty pleased.  CephFS in Giant is going to be pretty good.  We are
> still lacking fsck, so be careful, and there are several performance
> issues we need to address, but I encourage anyone who is interested to
> give Giant CephFS a go in any environment where you can tolerate the
> risk.  We are *very* keen to get feedback on performance, stability,
> robustness, and usability.
>
> Thanks!
> sage



-- 
Cheers,
~Blairo

