Hi Max,

I do use CephFS (Giant) in a production environment. It works really well, but I keep backups ready to use, just in case.

As Wido said, the kernel version is not really relevant if you use ceph-fuse (which I recommend over the CephFS kernel client, for stability and ease-of-upgrade reasons).

However, I found ceph-mds memory usage hard to predict, and I had some problems with that. At first it was undersized (16 GB for ~8M files/dirs and 1M cached inodes). It worked well until a server crash from which the MDS did not recover (the mds rejoin/rebuild failed for lack of memory). So I gave it 24 GB of memory plus 24 GB of swap, and there has been no problem since.

--
Thomas Lemarchand
Cloud Solutions SAS - Head of Information Systems

On Sun, 2014-12-28 at 14:12 +0100, Max Power wrote:
> Hi, my cluster setup would be much easier if I used CephFS on it (instead of a
> block device with OCFS2 or something else). But it is said everywhere that it is
> not ready for production use at this time. I wonder what this is all about?
>
> Does it mean that a few features are missing, or is the code full of bugs?
> I want to use the CephFS FUSE driver (v0.90) with a 3.18.1 kernel. Can I trust my
> data to such a configuration without destroying it? Or will there be tears and
> scrambled files? I heard that the FUSE driver is more stable than the kernel
> driver?
>
> It looks like we are getting closer to a v1.00 - will CephFS be production ready
> at that point? Is it possible to say "production ready" at any time anyway?
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
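
For readers sizing their own MDS: in Giant-era releases the MDS cache is bounded by an inode count rather than a byte limit, so resident memory per cached inode varies with workload and is hard to predict, as Thomas describes. A minimal ceph.conf sketch along those lines (the value shown matches the ~1M cached inodes mentioned above and is illustrative, not a recommendation):

```ini
# Hypothetical ceph.conf fragment -- Giant-era MDS cache tuning.
# "mds cache size" counts inodes kept in the MDS cache (default 100000);
# actual memory use per inode depends on the workload, so leave headroom.
[mds]
    mds cache size = 1000000
```

Raising the cache count without also raising available RAM (or swap, as in the setup above) can reproduce the rejoin/rebuild out-of-memory failure described in this thread.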