Re: samba ceph-vfs and scrubbing interval

Hi Jeff,

I have dug a bit deeper and was able to get write access on the ceph-vfs share. But the file and directory permissions are a mess: it seems that ceph-vfs does not evaluate the secondary (supplementary) group permissions.
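
For reference, this is roughly the share definition I am testing against (a minimal sketch; the share name, path, ceph user and option values are just placeholders from my setup, not a recommendation):

[archive]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        kernel share modes = no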


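The failing operations are from a plain smbclient test against that share, along the lines of (server, share and user names are placeholders):

        smbclient //server/archive -U marco -c 'mkdir testdir'
        smbclient //server/archive -U marco -c 'del somefile'

These are the commands that return the NT_STATUS_ACCESS_DENIED errors quoted below.
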
> On 27.03.2020 at 18:14, Dr. Marco Savoca <quaternionma@xxxxxxxxx> wrote:
> 
> 
> „No. I haven't tested it in some time, but it does allow clients to
> write. When you say you can't get write access, what are you doing to
> test this, and what error are you getting back?“
>  
> For example: after a successful connection via smbclient it says „NT_STATUS_ACCESS_DENIED making remote directory“ or
>  
> „NT_STATUS_ACCESS_DENIED deleting remote file“ when I try to make a directory or to delete a file. If the same Samba user connects to a share with a mounted path, everything works as expected, so there should not be any ACL errors.
>  
>  
>  
>  
>  
> From: Jeff Layton
> Sent: Friday, 27 March 2020, 13:05
> To: Marco Savoca; ceph-users@xxxxxxx
> Subject: Re:  samba ceph-vfs and scrubbing interval
>  
> On Fri, 2020-03-27 at 12:00 +0100, Marco Savoca wrote:
> > Hi all,
> >
> > I'm running a 3-node Ceph cluster with co-located MONs and MDS,
> > currently serving 3 filesystems, at home since Mimic. I'm planning to
> > reduce this to one FS and use RBD in the future, but that is another
> > story. I'm using the cluster as cold storage on spindles with EC pools
> > for archive purposes. The cluster usually does not run 24/7. I
> > managed to upgrade to Octopus without problems yesterday. So
> > first of all: great job with the release.
> >
> > Now I have a little problem and a general question to address.
> >
> > I have tried to share the CephFS via Samba and the ceph-vfs module, but
> > I could not manage to get write access (read access is not a problem)
> > to the share (even with the admin key). When I instead share the mounted
> > path (kernel module or FUSE mount), as usual there are no problems
> > at all.  Is ceph-vfs generally read-only and have I missed this point?
>  
> No. I haven't tested it in some time, but it does allow clients to
> write. When you say you can't get write access, what are you doing to
> test this, and what error are you getting back?
>  
> > Furthermore, I suppose there is no possibility to choose between
> > the different MDS namespaces, right?
> >
>  
> Yeah, doesn't look like anyone has added that. That would probably be
> pretty easy to add, though it would take a little while to trickle out
> to the distros.
>  
> > Now the general question. Since the cluster, as stated, does not run
> > 24/7 and is turned on perhaps once a week for a couple of hours on
> > demand, what are reasonable settings for the scrubbing intervals? As I
> > said, the storage is cold and there is mostly read I/O. The archiving
> > process adds new data amounting to approximately 0.5 % of the cluster's
> > total storage capacity.
>  
> --
> Jeff Layton <jlayton@xxxxxxxxxx>
>  
>  
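
PS: Regarding my scrubbing question quoted above, these are the interval settings I would look at tuning; just a sketch, the values are placeholders in seconds and not a recommendation for a cluster that is powered off most of the time:

        ceph config set osd osd_scrub_min_interval 86400
        ceph config set osd osd_scrub_max_interval 604800
        ceph config set osd osd_deep_scrub_interval 1209600

I would still be interested in what values make sense for a cluster that only runs a few hours per week.
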
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



