Re: does your samba work with 4.1.x (centos 7.5)

Hi,

Please download the logs from:

https://www.dropbox.com/s/4k0zvmn4izhjtg7/samba-logs.tar.bz2?dl=0

These options had to be set in the [global] section:
kernel change notify = no
kernel oplocks = no

I also set log level = 10.
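
For reference, the relevant part of the [global] section now reads roughly like this (everything else is unchanged from the smb.conf I linked earlier):

[global]
   # disable kernel-level change notification and kernel oplocks
   kernel change notify = no
   kernel oplocks = no
   # verbose logging for this test run
   log level = 10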

I renamed the file to Test-Project.rvt for simplicity. I opened Revit
and, from Revit, opened this file. At around 22:01:30 I started
attempting to save it as a central model. The save then got stuck, and
at around 22:05 it finally failed, saying that two .dat files did not
exist.

Diego
On Tue, Nov 13, 2018 at 8:46 AM Anoop C S <anoopcs@xxxxxxxxxxxxx> wrote:
>
> On Tue, 2018-11-13 at 07:50 -0500, Diego Remolina wrote:
> > >
> > > Thanks for explaining the issue.
> > >
> > > I understand that you are experiencing a hang while doing some operations on files/directories
> > > in a GlusterFS volume share from a Windows client. For simplicity, can you attach the output of
> > > the following commands:
> > >
> > > # gluster volume info <volume>
> > > # testparm -s --section-name global
> >
> > gluster v status export
> > Status of volume: export
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 10.0.1.7:/bricks/hdds/brick           49153     0          Y       2540
> > Brick 10.0.1.6:/bricks/hdds/brick           49153     0          Y       2800
> > Self-heal Daemon on localhost               N/A       N/A        Y       2912
> > Self-heal Daemon on 10.0.1.6                N/A       N/A        Y       3107
> > Self-heal Daemon on 10.0.1.5                N/A       N/A        Y       5877
> >
> > Task Status of Volume export
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > # gluster volume info export
> >
> > Volume Name: export
> > Type: Replicate
> > Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.0.1.7:/bricks/hdds/brick
> > Brick2: 10.0.1.6:/bricks/hdds/brick
> > Options Reconfigured:
> > diagnostics.brick-log-level: INFO
> > diagnostics.client-log-level: INFO
> > performance.cache-max-file-size: 256MB
> > client.event-threads: 5
> > server.event-threads: 5
> > cluster.readdir-optimize: on
> > cluster.lookup-optimize: on
> > performance.io-cache: on
> > performance.io-thread-count: 64
> > nfs.disable: on
> > cluster.server-quorum-type: server
> > performance.cache-size: 10GB
> > server.allow-insecure: on
> > transport.address-family: inet
> > performance.cache-samba-metadata: on
> > features.cache-invalidation-timeout: 600
> > performance.md-cache-timeout: 600
> > features.cache-invalidation: on
> > performance.cache-invalidation: on
> > network.inode-lru-limit: 65536
> > performance.cache-min-file-size: 0
> > performance.stat-prefetch: on
> > cluster.server-quorum-ratio: 51%
> >
> > I had already sent you the full smb.conf, so there is no need to run testparm -s
> > --section-name global. Please see:
> > http://termbin.com/y4j0
>
> Fine.
>
> > >
> > > > This is the test Samba share exported using vfs objects = glusterfs:
> > > >
> > > > [vfsgluster]
> > > >    path = /vfsgluster
> > > >    browseable = yes
> > > >    create mask = 660
> > > >    directory mask = 770
> > > >    write list = @Staff
> > > >    kernel share modes = No
> > > >    vfs objects = glusterfs
> > > >    glusterfs:loglevel = 7
> > > >    glusterfs:logfile = /var/log/samba/glusterfs-vfsgluster.log
> > > >    glusterfs:volume = export
> > >
> > > Since you have mentioned the path as /vfsgluster, I hope you are sharing a subdirectory under
> > > the root of the volume.
> >
> > Yes, vfsgluster is a directory at the root of the export volume.
>
> Thanks for the confirmation.
>
> > The volume is also currently mounted at /export so that the rest of the files can be
> > exported via Samba over a FUSE mount:
> >
> > # mount | grep export
> > 10.0.1.7:/export on /export type fuse.glusterfs
> > (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
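> >
> > For reference, that is just the standard GlusterFS FUSE mount, roughly equivalent to running:
> >
> > # mount -t glusterfs 10.0.1.7:/export /export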
> >
> > # ls -ld /export/vfsgluster
> > drwxrws---. 4 dijuremo Staff 4096 Nov 12 20:24 /export/vfsgluster
> >
> > >
> > > > Full smb.conf
> > > > http://termbin.com/y4j0
> > >
> > > I see the "clustering" parameter set to 'yes'. How many nodes are there in the cluster? Out of
> > > those, how many are running as Samba and/or Gluster nodes?
> > >
> >
> > There are a total of 3 gluster peers, but only two have bricks. The third is just a peer and
> > is not even configured as an arbiter. The two nodes with bricks run CTDB and Samba.
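> >
> > If it helps, assuming CTDB uses the same addresses as the bricks, the /etc/ctdb/nodes file on
> > both nodes would look something like:
> >
> > 10.0.1.6
> > 10.0.1.7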
>
> OK. So basically a two-node Samba-CTDB cluster.
>
> > > > /var/log/samba/glusterfs-vfsgluster.log
> > > > http://termbin.com/5hdr
> > > >
> > > > Please let me know if there is any other information I can provide.
> > >
> > > Are there any errors in /var/log/samba/log.<IP/hostname>, where IP/hostname is that of the Windows client machine?
> > >
> >
> > I do not currently have the log file directive enabled in smb.conf; I would have to enable it.
> > Do you need me to repeat the process with it enabled?
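> >
> > That would mean adding something like the following to [global], so that each client machine
> > gets its own log:
> >
> > log file = /var/log/samba/log.%m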
>
> Yes, preferably after adding the following parameters to the [vfsgluster] share section (and, of
> course, a restart):
>
> kernel change notify = no
> kernel oplocks = no
> posix locking = no
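>
> The share section would then look something like this (everything else unchanged):
>
> [vfsgluster]
>    path = /vfsgluster
>    browseable = yes
>    create mask = 660
>    directory mask = 770
>    write list = @Staff
>    kernel share modes = No
>    kernel change notify = no
>    kernel oplocks = no
>    posix locking = no
>    vfs objects = glusterfs
>    glusterfs:loglevel = 7
>    glusterfs:logfile = /var/log/samba/glusterfs-vfsgluster.log
>    glusterfs:volume = export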
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


