I can confirm this; we used to run all our nodes on btrfs, and I cannot recommend that anyone do that at this time. We had problems with deadlocks, very slow performance, and even corruption over time, all the way up to kernel 3.13. I haven't tried 3.14, but there are some patches mentioning performance and deadlock fixes. But again, I would not recommend it. And even if you like to live dangerously, try it thoroughly in a test environment for a couple of months first.

On Wed, May 28, 2014 at 8:58 PM, Mark Nelson <mark.nelson at inktank.com> wrote:

> On 05/28/2014 09:19 AM, Cedric Lemarchand wrote:
>>
>> On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote:
>>
>>> On 28.05.2014 16:13, Wido den Hollander wrote:
>>>
>>>> On 05/28/2014 04:11 PM, VELARTIS Philipp Dörhammer wrote:
>>>>
>>>>> Is someone using btrfs in production?
>>>>> I know people say it's still not stable. But do we use so many features
>>>>> with ceph? And facebook uses it also in production. Would be a big
>>>>> speed gain.
>>>>>
>>>> As far as I know, the main problem is still performance degradation over
>>>> time. On an SSD-only cluster this would be less of a problem, since seek
>>>> times on SSDs aren't really a big problem, but on spinning disks they
>>>> are.
>>>>
>>>> I haven't seen btrfs in production on any Ceph cluster I have
>>>> encountered.
>>>>
>>> It heavily fragments over time.
>>>
>> I would just add that this is inherent to *all* COW-based file systems,
>> and not specific to BTRFS ;-)
>>
>
> I think the big issue is whether the BTRFS defragmentation tools are made
> safe for when lots of snapshots are used. BTRFS tends to be very fast with
> Ceph on fresh filesystems, but the fragmentation, especially with small
> writes to RBD objects, can just kill it.
>
>
>> Cheers
>>
>> Cédric
>>
>>> Also no kernel backports are available
>>> to stable kernels. So which one would you choose?
>>>
>>> Stefan
>>>
>>>>>
>>>>> _______________________________________________
>>>>> ceph-users mailing list
>>>>> ceph-users at lists.ceph.com
>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com