On Fri, 2020-03-27 at 12:00 +0100, Marco Savoca wrote:
> Hi all,
>
> I'm running a 3-node Ceph cluster at home with co-located MONs and
> MDSs, currently serving 3 filesystems, and it has been in place since
> Mimic. I'm planning to consolidate down to one FS and use RBD in the
> future, but that's another story. I'm using the cluster as cold
> storage on spindles with EC pools for archival purposes. The cluster
> usually does not run 24/7. I managed to upgrade to Octopus without
> problems yesterday, so first of all: great job with the release.
>
> Now I have a little problem and a general question to address.
>
> I have tried to share the CephFS via Samba and the vfs_ceph module,
> but I could not get write access to the share (read access is not a
> problem), even with the admin key. When I instead share the mounted
> path (kernel mount or FUSE mount) as usual, there are no problems at
> all. Is vfs_ceph generally read-only and I missed this point?

No. I haven't tested it in some time, but it does allow clients to
write. When you say you can't get write access, what are you doing to
test this, and what error are you getting back?

> Furthermore, I suppose there is no possibility to choose between the
> different MDS namespaces, right?
>

Yeah, doesn't look like anyone has added that. That would probably be
pretty easy to add, though it would take a little while to trickle out
to the distros.

> Now the general question. Since the cluster does not run 24/7 as
> stated and is turned on perhaps once a week for a couple of hours on
> demand, what are reasonable settings for the scrubbing intervals? As
> I said, the storage is cold and there is mostly read I/O. The
> archiving process adds new data amounting to roughly 0.5% of the
> cluster's total storage capacity.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
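
For reference, a write-enabled vfs_ceph share looks roughly like the
sketch below. This is untested here, not a verified config: the share
name [archive], the cephx client "samba" and the path are placeholders
to adjust for your setup.

    # /etc/samba/smb.conf -- hypothetical [archive] share exported
    # directly from CephFS via the vfs_ceph module
    [archive]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        kernel share modes = no

    # cephx caps for that client, created beforehand with e.g.:
    #   ceph fs authorize <fsname> client.samba / rw

If reads work but writes don't, the caps of the configured user_id and
the usual Samba write controls (read only, valid users, force user) are
the first things to check.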
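
On the scrub-interval question, the knobs involved are the OSD scrub
intervals. A rough sketch for a cluster that is only powered on
occasionally; the values are illustrative examples, not
recommendations:

    # Stretch the scrub windows so the occasional power-on sessions
    # can keep up. Intervals are in seconds.
    ceph config set osd osd_scrub_min_interval  604800   # shallow-scrub a PG at most once a week
    ceph config set osd osd_scrub_max_interval  2419200  # but force one after 4 weeks regardless of load
    ceph config set osd osd_deep_scrub_interval 2419200  # deep-scrub roughly every 4 weeks

Scrubs only make progress while the OSDs are actually running, so
longer intervals here mostly determine how soon the "pgs not
(deep-)scrubbed in time" health warnings show up after a stretch of
downtime.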