Hi Marco and Jeff,

On Fri, 27 Mar 2020 08:04:56 -0400, Jeff Layton wrote:
> > I'm running a 3-node Ceph cluster with collocated MONs and MDSes,
> > currently hosting 3 filesystems, at home since Mimic. I'm planning to
> > consolidate down to one FS and use RBD in the future, but that is
> > another story. I'm using the cluster as cold storage on spindles with
> > EC pools for archival purposes. The cluster usually does not run 24/7.
> > I managed to upgrade to Octopus without problems yesterday. So first
> > of all: great job with the release.
> >
> > Now I have a small problem and a general question.
> >
> > I have tried to share the CephFS via Samba and the ceph-vfs module,
> > but I could not get write access to the share (read access is not a
> > problem), even with the admin key. When I instead share the mounted
> > path (kernel module or FUSE mount) as usual, there are no problems at
> > all. Is ceph-vfs generally read-only and I missed this point?
>
> No. I haven't tested it in some time, but it does allow clients to
> write. When you say you can't get write access, what are you doing to
> test this, and what error are you getting back?

Is write access granted via a supplementary group ID? If so, this might be
https://bugzilla.samba.org/show_bug.cgi?id=14053 .

Fixing the libcephfs supplementary group ID fallback behaviour was
discussed earlier via
https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/PCIOZRE5FJCQ2LZXLZCN5O2AA5AYU4KF/

Cheers, David
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
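[Editor's note: for readers hitting the same problem, below is a minimal smb.conf share stanza for the Samba ceph VFS module as a starting point. The share name, path, and ceph:user_id are illustrative and must be adapted to the local cluster; the ceph:config_file and ceph:user_id options come from the vfs_ceph(8) man page.]

```ini
; Hypothetical share definition for a CephFS-backed Samba share.
; Requires a CephX client key for the user named in ceph:user_id
; (i.e. client.samba here) with rw caps on the filesystem.
[cephfs-archive]
    ; Path is relative to the CephFS root, not a local mount point.
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no
    ; Kernel share modes do not work on a filesystem Samba does not
    ; itself have mounted, so disable them for vfs_ceph shares.
    kernel share modes = no
```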