On Fri, Jul 22, 2016 at 9:11 AM, Ira Cooper <ira@xxxxxxxxxxx> wrote:
> On Fri, Jul 22, 2016 at 08:29:39AM +1100, Blair Bethwaite wrote:
>> Ken, Ira, John -
>>
>> Thanks a lot for the replies. Our initial setup is simply running
>> Samba atop a CephFS kernel mount, and initial cursory checks seem to
>> show the basics are working as expected (even clustered with CTDB -
>> what are your concerns here, Ira?), though we've yet to try any of
>> our planned test scenarios/datasets.
>>
>> From the conversation here I'm thinking we'd be better off using
>> Xenial for our CephFS+Samba test nodes at the moment... though I only
>> see packages for Precise and Trusty on gitbuilder.ceph.com right now.
>> We were planning to compare Samba on a kernel mount versus the Ceph
>> VFS module.
>>
>> Can someone clarify what state the CephFS kernel client in RHEL 7.x
>> will be in when RHCS 2.0 is released, i.e. will fixes be back-ported,
>> or are RHEL users expected to use FUSE? (I'm happy to ask support
>> directly, but I suspect this is useful information for others too.)

We actively back-port CephFS fixes to the RHEL 7.x kernel. When RHCS 2.0
is released, the RHEL kernel should contain fixes up to the 4.7 upstream
kernel.

Regards
Yan, Zheng

>
> Please note: icooper@xxxxxxxxxx is not on this list, so I can't
> reply from there ;).
>
> I recommend you test your setup THOROUGHLY. CTDB is really in
> place to handle failures, so test node failures. I recommend
> hard power-offs for this. The real world is rarely as kind as a
> "nice" power-off, and a clean shutdown might trick you into
> thinking more is working than actually is.
>
> If you get it right, all the nodes should end up banned, if I
> remember right. It's been a bit since I tested with stock
> settings.
>
> For my setup I use:
>
> FUSE for the mount, so CTDB works.
> vfs_ceph for the Samba datapath.
>
> That's what I'd recommend based on talking to people.
>
> No idea on RHCS/kernel.
>
> Cheers,
>
> -Ira
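
A rough sketch of the layout Ira recommends above (a ceph-fuse mount for
CTDB's recovery lock, vfs_ceph for the Samba data path). The paths, the
cephx user and the share name below are assumptions, not tested config,
and the CTDB config file location varies by distro:

    # /etc/sysconfig/ctdb (RHEL) or /etc/default/ctdb (Debian/Ubuntu)
    # The recovery lock sits on a ceph-fuse mount so CTDB's fcntl
    # locking behaves correctly.
    CTDB_RECOVERY_LOCK=/mnt/cephfs-fuse/ctdb/.reclock

    # smb.conf
    [global]
        clustering = yes

    [cephfs]
        # path inside CephFS, not a local mount point
        path = /samba
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        # cephx user the Samba nodes authenticate as (client.samba)
        ceph:user_id = samba
        read only = no
        # kernel share modes don't apply to libcephfs I/O
        kernel share modes = no

With vfs_ceph, Samba talks to the cluster through libcephfs directly, so
the share path does not need to be mounted on the node; only the
recovery-lock path needs the FUSE mount.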
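
And for anyone reproducing the failure test Ira describes, a minimal
sketch run from a surviving node after hard-powering-off a peer. The
cluster address, share name and user are placeholders, and the exact
flags reported depend on your CTDB version and settings:

    # Before the test: every node should report flags OK
    ctdb status

    # Hard power-off one node (e.g. via IPMI), then watch recovery
    watch -n 2 ctdb status   # the dead node should show DISCONNECTED/UNHEALTHY/INACTIVE

    # Confirm clients are still served by the remaining nodes
    smbclient //<cluster-address>/<share> -U <user> -c 'ls'

    # Once the node is powered back on, check whether anything stayed banned
    ctdb status
    ctdb listnodes

If nodes end up BANNED after the test, work out why before running
'ctdb unban'; the ctdb man pages describe the node flags.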