Re: the state of cephfs in giant

On Tue, 14 Oct 2014, Amon Ott wrote:
> Am 13.10.2014 20:16, schrieb Sage Weil:
> > We've been doing a lot of work on CephFS over the past few months. This
> > is an update on the current state of things as of Giant.
> ...
> > * Both the kernel client (kernel 3.17 or later) and the userspace
> >   clients (ceph-fuse or libcephfs) are in good working order.
> 
> Thanks for all the work, and especially for concentrating on CephFS! We
> have been watching and testing for years now and really hope to move
> our clusters to CephFS soon.
> 
> For kernel maintenance reasons, we only want to run longterm stable
> kernels. For performance reasons, and because of severe known problems,
> we want to avoid FUSE. How good are our chances of a stable system with
> the kernel client in the latest longterm kernel, 3.14? Will there be
> further bug fixes or feature backports?

There are important bug fixes missing from 3.14.  IIRC, the EC, cache 
tiering, and firefly CRUSH changes aren't there either (they landed in 
3.15), and backporting feature changes like those is not appropriate for 
a stable series.

The bug fixes can be backported, but there's no commitment on that yet :)
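
If it helps anyone weighing the two clients in the meantime, here is a 
minimal sketch of mounting CephFS both ways; the monitor address 
mon1.example.com, the mount point /mnt/cephfs, and the secret file path 
are placeholders for your own setup:

    # kernel client (wants a recent kernel, 3.17+ as noted above)
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # userspace client via ceph-fuse (independent of kernel version)
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs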

sage
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



