CephFS roadmap (was Re: NAS on RBD)

On Tue, 9 Sep 2014, Blair Bethwaite wrote:
> > Personally, I think you're very brave to consider running 2PB of ZoL 
> > on RBD. If I were you I would seriously evaluate the CephFS option. It 
> > used to be on the roadmap for ICE 2.0 coming out this fall, though I 
> > noticed it's not there anymore (??!!!).
> 
> Yeah, it's very disappointing that this was silently removed. And it's
> particularly concerning that this happened post RedHat acquisition.
> I'm an ICE customer and sure would have liked some input there for
> exactly the reason we're discussing.

A couple quick comments:

1) We have more developers actively working on CephFS today than we have 
ever had before.  It is a huge priority for me and the engineering team to 
get it into a state where it is ready for general-purpose production 
workloads.

2) As a scrappy startup, we at Inktank were very fast and loose about 
what went into the product roadmap and what claims we made.  Red Hat is 
much more cautious about forward-looking statements in their enterprise 
products.  Do not read too much into the presence or absence of 
CephFS in the ICE roadmap.  Also note that Red Hat Storage today is 
shipping a fully production-ready and stable distributed file system 
(GlusterFS).

3) We've recently moved to CephFS in the sepia QA lab for archiving all of 
our test results.  This dogfooding exercise has helped us identify several 
general usability issues and rough edges that have resulted in changes for Giant.  
We identified and fixed two kernel client bugs that went into 3.16 or 
thereabouts.  The biggest problem we had, which we finally tracked down, 
turned out to be an old bug in an outdated kernel client that we had 
forgotten was still mounting the cluster.  Overall, I'm pretty pleased.  CephFS in Giant is 
going to be pretty good.  We are still lacking fsck, so be careful, and 
there are several performance issues we need to address, but I encourage 
anyone who is interested to give Giant CephFS a go in any environment you 
have where you can tolerate the risk.  We are *very* keen to get feedback 
on performance, stability, robustness, and usability.
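
If you want to give it a try, a minimal sketch of mounting CephFS 
(assuming a hypothetical monitor host "mon1" and the default cephx admin 
key under /etc/ceph; adjust names and paths for your cluster) looks 
something like:

    # kernel client (use a reasonably recent kernel)
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # or the FUSE client
    ceph-fuse -m mon1:6789 /mnt/cephfs

The FUSE client is easier to keep current on older distributions, since 
it is updated with the Ceph packages rather than the kernel.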

Thanks!
sage

