Re: v0.86 released (Giant release candidate)

On 10/10/2014 11:26 AM, Florian Haas wrote:
> Hi Sage,
> 
> On Tue, Oct 7, 2014 at 9:20 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
>> This is a release candidate for Giant, which will hopefully be out in
>> another week or two (this is v0.86).  We did a feature freeze about a
>> month ago and since then have been doing only stabilization and bug
>> fixing (and a handful of low-risk enhancements).  A fair bit of new
>> functionality went into the final sprint, but it has been baking for
>> quite a while now and we're feeling pretty good about it.
>>
>> Major items include:
>>
>> * librados locking refactor to improve scaling and client performance
>> * local recovery code (LRC) erasure code plugin to trade some
>>   additional storage overhead for improved recovery performance
>> * LTTNG tracing framework, with initial tracepoints in librados,
>>   librbd, and the OSD FileStore backend
>> * separate monitor audit log for all administrative commands
>> * asynchronous monitor transaction commits to reduce the impact on
>>   monitor read requests while processing updates
>> * low-level tool for working with individual OSD data stores for
>>   debugging, recovery, and testing
>> * many MDS improvements (bug fixes, health reporting)
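
(Side note for anyone who wants to try the LRC plugin mentioned above:
something like the sketch below, using the Python rados bindings and
mon_command(), should create an LRC erasure-code profile. The profile
name and the k/m/l values are only examples, and the exact JSON argument
names are my assumption based on the current mon command definitions.)

    import json
    import rados

    # Connect using the local ceph.conf and default keyring (adjust as needed).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Roughly equivalent CLI: ceph osd erasure-code-profile set lrc-demo \
        #     plugin=lrc k=4 m=2 l=3 ruleset-failure-domain=host
        ret, outbuf, outs = cluster.mon_command(json.dumps({
            'prefix': 'osd erasure-code-profile set',
            'name': 'lrc-demo',          # example profile name
            'profile': ['plugin=lrc', 'k=4', 'm=2', 'l=3',
                        'ruleset-failure-domain=host'],
        }), b'')
        if ret != 0:
            raise RuntimeError('profile creation failed: %s' % outs)

        # Read the profile back to see what the monitors actually stored.
        ret, outbuf, outs = cluster.mon_command(json.dumps({
            'prefix': 'osd erasure-code-profile get',
            'name': 'lrc-demo',
            'format': 'json',
        }), b'')
        print(outbuf.decode('utf-8', 'replace'))
    finally:
        cluster.shutdown()

(The profile can then be referenced when creating an erasure-coded pool,
just like with the default jerasure plugin.)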
>>
>> There are still a handful of known bugs in this release, but nothing
>> severe enough to prevent a release.  By and large we are pretty
>> pleased with the stability and expect the final Giant release to be
>> quite reliable.
>>
>> Please try this out on your non-production clusters for a preview.
> 
> Thanks for the summary! Since you mentioned MDS improvements, and just
> so it doesn't get lost: as you hinted at in off-list email, please do
> provide a write-up of CephFS features expected to work in Giant at the
> time of the release (broken down by kernel client vs. ceph-fuse, if
> necessary). Not in the sense that anyone is offering commercial
> support, but in the sense of "if you use this limited feature set, we
> are confident that it at least won't eat your data." I think that
> would be beneficial to a large portion of the user base, and clear up
> a lot of the present confusion about the maturity and stability of the
> filesystem.
> 

Agreed, that would be very useful. In some deployments people want to use
CephFS (with all the risks that come with it), but the limitations
should be known(ish).

It's probably safe to say that you should stay away from snapshots and
multiple active MDSes, but there is probably more!
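
For what it's worth, a rough way to check the multi-MDS part from a
script is to ask the monitors for the MDS map via the Python rados
bindings; a minimal sketch is below. I'm assuming the 'mds dump' JSON
field names ('max_mds', 'in') as they appear on Giant-era clusters; they
may differ in other releases.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Ask the monitors for the MDS map ('ceph mds dump -f json').
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'mds dump', 'format': 'json'}), b'')
        if ret != 0:
            raise RuntimeError('mds dump failed: %s' % outs)
        mdsmap = json.loads(outbuf.decode('utf-8'))

        # Field names are an assumption; check the JSON on your own cluster.
        max_mds = mdsmap.get('max_mds')
        in_ranks = mdsmap.get('in', [])
        if max_mds != 1 or len(in_ranks) > 1:
            print('warning: more than one active MDS configured '
                  '(max_mds=%s, in=%s)' % (max_mds, in_ranks))
        else:
            print('single active MDS, as recommended')
    finally:
        cluster.shutdown()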

> Cheers,
> Florian
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



