Oh that's good. I thought the kernel clients only supported block
devices. I guess that has changed since I last looked.

Cheers,
Kingsley.

On Tue, 2017-01-17 at 12:29 -0500, Alex Evonosky wrote:
> Example: each server's mount looks like this:
>
> /bin/mount -t ceph -o name=admin,secret=<cephx secret> \
>     10.10.10.138,10.10.10.252,10.10.10.103:/ /media/network-storage
>
> All of them point to the monitor servers.
>
> On Tue, Jan 17, 2017 at 12:27 PM, Alex Evonosky
> <alex.evonosky@xxxxxxxxx> wrote:
> > Yes they are. I created one volume, shared by all the
> > webservers, so essentially it is acting like a NAS using NFS.
> > All servers see the same data.
> >
> > On Tue, Jan 17, 2017 at 12:26 PM, Kingsley Tart
> > <ceph@xxxxxxxxxxx> wrote:
> > > Hi,
> > >
> > > Are these all sharing the same volume?
> > >
> > > Cheers,
> > > Kingsley.
> > >
> > > On Tue, 2017-01-17 at 12:19 -0500, Alex Evonosky wrote:
> > > > For what it's worth, I have been using CephFS shared between
> > > > six servers (all kernel mounted) with no issues, running three
> > > > monitors and two metadata servers (one as a standby). This has
> > > > been running great.
> > > >
> > > > On Tue, Jan 17, 2017 at 12:14 PM, Kingsley Tart
> > > > <ceph@xxxxxxxxxxx> wrote:
> > > > > On Tue, 2017-01-17 at 13:49 +0100, Loris Cuoghi wrote:
> > > > > > I think you're confusing the CephFS kernel client and the
> > > > > > RBD kernel client.
> > > > > >
> > > > > > The Linux kernel contains both:
> > > > > >
> > > > > > * a module ceph.ko for accessing a CephFS
> > > > > > * a module rbd.ko for accessing an RBD (RADOS Block Device)
> > > > > >
> > > > > > You can mount a CephFS using the kernel driver [0], or
> > > > > > using a userspace helper for FUSE [1].
> > > > > >
> > > > > > [0] http://docs.ceph.com/docs/master/cephfs/kernel/
> > > > > > [1] http://docs.ceph.com/docs/master/cephfs/fuse/
> > > > >
> > > > > Hi,
> > > > >
> > > > > Thanks for your reply.
> > > > >
> > > > > I specifically didn't want a block device because I would
> > > > > like to mount the same volume on multiple machines to share
> > > > > the files, like you would with NFS. This is why I thought
> > > > > ceph-fuse would be what I needed.
> > > > >
> > > > > --
> > > > > Cheers,
> > > > > Kingsley.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
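
For reference, the ceph-fuse mount Kingsley was originally considering
would look roughly like this. This is a sketch reusing the monitor
addresses and mount point from Alex's example above; the client id and
keyring path are illustrative, not taken from the thread:

    # FUSE-based CephFS mount (userspace client), same cluster as above
    ceph-fuse -m 10.10.10.138:6789,10.10.10.252:6789,10.10.10.103:6789 \
        --id admin -k /etc/ceph/ceph.client.admin.keyring \
        /media/network-storage

To make the kernel mount from Alex's example persist across reboots, an
/etc/fstab entry along these lines should work. Using secretfile= (read
by the mount.ceph helper) keeps the cephx key out of the visible mount
options; /etc/ceph/admin.secret is an assumed path:

    # monitors:path  mountpoint  type  options  dump pass
    10.10.10.138,10.10.10.252,10.10.10.103:/  /media/network-storage  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0

The _netdev option tells the init system to wait for networking before
attempting the mount, which matters for network filesystems like CephFS.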