Re: SMB Service in Squid


 



I was able to get it to work - while the Samba vfs_ceph documentation shows the format /non-mounted/cephfs/path, it is really much simpler: the module already picks up the file system, so the path is just the CephFS path - or in my case, since I want the root, "/".

While I still need to configure the permissions and determine how to set it up for a cluster, this is a big first step.
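For reference, the share block that ended up working looked like this - the same options as in my original message below, with only the path changed to the CephFS path:

```json
"shares": {
  "ceph": {
    "options": {
      "path": "/",
      "kernel share modes": "no",
      "vfs objects": "ceph",
      "valid users": "rob"
    }
  }
}
```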

Thanks
Rob

-----Original Message-----
From: Robert W. Eckert <rob@xxxxxxxxxxxxxxx> 
Sent: Tuesday, September 3, 2024 7:49 PM
To: John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>; ceph-users@xxxxxxx
Subject:  Re: SMB Service in Squid

Thanks- I have the .smb pool, and the container is picking up the config.

After fixing a few errors in my config.json (I had an underscore between vfs and objects), I can connect to the SMB server, but I am not able to get to the share. I am not sure whether I have misconfigured the permissions or the share configuration.
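The underscore mistake is easy to make, since (as far as I can tell) sambacc passes the option names straight through to Samba, and Samba option names contain spaces ("vfs objects", "kernel share modes"). A throwaway check like this (my own quick sketch, not part of sambacc) would have flagged it:

```python
import json

def find_suspect_options(config):
    """Return (share, option) pairs whose option name contains an
    underscore - Samba option names use spaces, e.g. "vfs objects"."""
    suspect = []
    for share, body in config.get("shares", {}).items():
        for opt in body.get("options", {}):
            if "_" in opt:
                suspect.append((share, opt))
    return suspect

# A config fragment with the same mistake I made: "vfs_objects".
cfg = json.loads("""
{
  "shares": {
    "ceph": {
      "options": {
        "path": "/",
        "kernel share modes": "no",
        "vfs_objects": "ceph"
      }
    }
  }
}
""")

print(find_suspect_options(cfg))  # flags the misspelled "vfs_objects"
```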

My share looks like: 
},
    "shares": {
      "ceph": {
        "options": {            
            "path": "/non-mounted/cephfs/home/",
            "kernel share modes": "no",
            "vfs objects": "ceph",
            "valid users": "rob"
        }
      }
    },

- The filesystem is "home". I tried a few different variations of the path: /non-mounted/home/, /non-mounted/cephfs, ...
I made sure the user has access to the file system as well.

But I keep getting this error:

print_impersonation_info: Impersonated user: uid=(1000,1000), gid=(0,1000), cwd=[/] setting sec ctx (0, 0) - sec_ctx_stack_ndx = 0 Security token: (NULL) UNIX token of user 0 Primary group is 0 and contains 0 supplementary groups
change_to_root_user: now uid=(0,0) gid=(0,0)
make_connection_snum: '/non-mounted/cephfs/home' does not exist or permission denied when connecting to [ceph] Error was No such file or directory
dbwrap_lock_order_lock: check lock order 1 for /var/lib/samba/lock/smbXsrv_tcon_global.tdb
dbwrap_lock_order_unlock: release lock order 1 for /var/lib/samba/lock/smbXsrv_tcon_global.tdb
smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_BAD_NETWORK_NAME] || at ../../source3/smbd/smb2_tcon.c:151
signed SMB2 message (sign_algo_id=2)

Thanks,
Rob

-----Original Message-----
From: John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>
Sent: Tuesday, September 3, 2024 5:19 PM
To: ceph-users@xxxxxxx
Cc: Robert W. Eckert <rob@xxxxxxxxxxxxxxx>
Subject: Re:  SMB Service in Squid

On Tuesday, September 3, 2024 5:00:20 PM EDT Robert W. Eckert wrote:
> When I try to create the .smb pool, I get an error message:
> 
> # ceph osd pool create .smb
> pool names beginning with . are not allowed

Ah, I was writing my reply from memory and forgot that creating a pool with a leading dot needs an extra option:
  ceph osd pool create .smb --yes-i-really-mean-it
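So, assuming default pool settings are acceptable, the full sequence - including the application tag I mention in my earlier message below - would be something like:

```
ceph osd pool create .smb --yes-i-really-mean-it
ceph osd pool application enable .smb smb
```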

That said...
> 
> I assume I can just change to using a pool without the leading period.
> 

Yes it should work OK with a different pool name.


> When I do the shares, how do I format the share path?  Does the ceph 
> file system get mounted in a specific location?

Currently, the system doesn't support kernel-mounted CephFS. It only uses the Samba VFS plugins `vfs_ceph` or `vfs_ceph_new` [1] [2]. These do not require mount points to work. Eventually I'd like to add kernel mounts as an option, but it has not been a priority for us so far.
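For what it's worth, the share options in the sambacc JSON map more or less directly onto smb.conf settings, so with the ceph VFS module loaded the path is interpreted as a location inside CephFS rather than in the container's filesystem - roughly equivalent to a hand-written section like this (a sketch, not the literal generated config):

```
[ceph]
    path = /
    vfs objects = ceph
    kernel share modes = no
```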

[1] - https://www.samba.org/samba/docs/current/man-html/vfs_ceph.8.html
[2] - https://git.samba.org/cs/?p=samba.git;a=blob;f=docs-xml/manpages/vfs_ceph_new.8.xml;h=b0640a591a51d110622475d278bf03a372b4c073;hb=1c7d4b5b388ae2647732ed54834d5547a8c1357a
(unfortunately there is no easy-to-read form on the web yet... it was just released today!)


> 
> -Rob
> -----Original Message-----
> From: John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>
> Sent: Tuesday, September 3, 2024 4:08 PM
> To: ceph-users@xxxxxxx
> Cc: Robert W. Eckert <rob@xxxxxxxxxxxxxxx>
> Subject: Re:  SMB Service in Squid
> 
> On Tuesday, September 3, 2024 3:42:29 PM EDT Robert W. Eckert wrote:
> > I have upgraded  my home cluster to 19.1.0  and wanted to try out 
> > the SMB orchestration features to improve my hacked SMB shared using 
> > CTDB and SMB services on each host.
> 
> Hi there, thanks for trying out the new SMB stuff!
> 
> > My smb.yaml file looks like
> > 
> > service_type: smb
> > service_id: home
> > 
> > placement:
> >   hosts:
> >     - HOST1
> >     - HOST2
> >     - HOST3
> >     - HOST4
> > 
> > spec:
> >   cluster_id: home
> >   features:
> >     - domain
> >     #clustered: true
> >   config_uri: rados://.smb/home/scc.toml
> >   custom_dns:
> >     - "<DNS SERVERS>"
> >   join_sources:
> >     - "rados:mon-config-key:smb/config/home/join1.json"
> >   #cluster_meta_uri: rados://.smb/home/meta
> >   #cluster_lock_uri: rados://.smb/home/lock
> >   include_ceph_users:
> >     - client.smb.fs.cluster.home
> >   #cluster_public_addrs:
> >   #  address: "192.168.2.175"
> >   #  destination: "192.168.2.0/24"
> > 
> > When I first ran ceph orch apply -i smb.yaml, it didn't like the
> > sections related to clustering that I have since commented out; this
> > may be because I formatted them wrong?
> This is probably because you are using squid. Those fields are only in 
> main and we do not plan on backporting them (yet, see below).
> >   I would get errors like:
> > Error EINVAL: ServiceSpec: __init__() got an unexpected keyword 
> > argument 'cluster_meta_uri'
> > 
> > After commenting out the clustering (for now), I successfully 
> > applied this YAML, however the .smb pool was never created so I 
> > cannot go on to the next task of fiddling around with the config files and config json.
> > 
> > Is there a way to create the .smb pool manually?
> 
> Yes, you need to create the pool manually with `ceph osd pool create` [1].
> Also assign it an application (for example `smb`) [2]:
> ceph osd pool application enable .smb smb
> 
> 
> [1]
> https://docs.ceph.com/en/latest/rados/operations/pools/#creating-a-pool
> [2]
> https://docs.ceph.com/en/latest/rados/operations/pools/#associating-a-pool-with-an-application
> 
> Once you have a JSON configuration file you can upload it using the 
> `rados` cli tool. Something like `rados --pool=.smb --namespace=<ns> 
> put <objname> <filename>`
> > Also, are there any good basic examples of a config json?  I am not
> > connecting to Active Directory (on Windows 365 accounts, so no local AD).
> 
> The configuration json is defined by the sambacc project [3]. It's a 
> JSON wrapper around samba's configuration plus some container setup magic.
> 
> [3] https://github.com/samba-in-kubernetes/sambacc/blob/master/docs/
> configuration.md
> 
> > I will eventually write a script to pull the user details and map
> > them to the local hosts, but want to get basic services up first.
> 
> 
> That sounds pretty cool. I'd love to see what you come up with.
> 
> 
> One word of caution about the smb service, especially as it appears in 
> Squid. While it should be entirely usable on its own, it will be much
> more work to configure manually when you use it directly. We've
> developed a new smb manager module that allows you to manage clusters 
> and shares without having to know as many of the lower level details 
> needed to use the service spec directly. That said, I will support the 
> service if you find bugs, etc. It's just not the intended interface 
> for most users of smb on Ceph. This is similar to Ceph's NFS module and NFS service (IMO).
> 
> It happens that the smb service spec got added to ceph main before 
> squid got branched and so it is available on squid but almost 
> everything else SMB related is only on the Ceph main branch today. The 
> smb module and related features should be usable by the wider 
> community with the Ceph Tentacle release. Adam King and I have 
> discussed the possibility of feature backports to Squid but I wanted 
> to get the overall suite of smb things more mature on ceph main first.



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


