Hi all,

I'm running a 3-node Ceph cluster at home with co-located MONs and MDS daemons, currently serving three filesystems; it has been in service since Mimic. (I'm planning to consolidate down to one FS and use RBD in the future, but that's another story.) I use the cluster as cold storage on spindles with EC pools for archive purposes, and it usually does not run 24/7. Yesterday I managed to upgrade to Octopus without problems, so first of all: great job with the release.

Now I have a small problem and a general question to address.

I have tried to share the CephFS via Samba and the vfs_ceph module, but I could not get write access to the share (read access is not a problem), even with the admin key. When I instead share the mounted path (kernel mount or FUSE mount) as usual, there are no problems at all. Is vfs_ceph generally read-only and I missed this point? Furthermore, I suppose there is no way to choose between the different MDS namespaces (i.e. the separate filesystems), right? A sketch of the share definition I'm referring to is below my signature (P.S.).

Now the general question. Since the cluster does not run 24/7, as stated, and is turned on perhaps once a week for a couple of hours on demand, what are reasonable settings for the scrubbing intervals? As I said, the storage is cold and there is mostly read I/O; the archiving process adds roughly 0.5% of the cluster's total storage capacity in new data each time. My current thinking is sketched in the P.P.S. below.

Stay healthy and regards,

Marco Savoca
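P.S. For reference, here is roughly the share definition I tried. The share name "tank" and the cephx user "samba" are placeholders of mine, not meaningful names:

    [tank]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        kernel share modes = no

As far as I understand, the key for that user needs rw caps on the filesystem, so something along these lines (the FS name "cephfs" is again a placeholder):

    ceph fs authorize cephfs client.samba / rw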
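P.P.S. For the scrub intervals, this is what I'm currently considering, for what it's worth; the values (in seconds) are pure guesses on my part, so I'd be glad to hear better numbers:

    # Stretch scrub scheduling, since the cluster is only up a few hours a week.
    ceph config set osd osd_scrub_min_interval 604800      # 7 days
    ceph config set osd osd_scrub_max_interval 2592000     # 30 days
    ceph config set osd osd_deep_scrub_interval 2592000    # 30 days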