Gluster 3.4 Samba VFS writes slow in Win 7 clients

Hi Kane,

1. Which version of samba are you running?

2. Can you re-run the test after adding the following lines to the
global section of smb.conf and tell us whether it helps?
kernel oplocks = no
stat cache = no
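For reference, a minimal sketch of applying and verifying those two options (assuming the stock config path /etc/samba/smb.conf; adjust for your install):

```shell
# Sketch only: after adding to the [global] section of smb.conf:
#
#   kernel oplocks = no
#   stat cache = no
#
# validate the edited file and reload the running smbd without restarting it.
testparm -s /etc/samba/smb.conf    # parses the config and reports errors
smbcontrol smbd reload-config      # asks smbd to re-read smb.conf
```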

Thanks,
Raghavendra Talur


On Wed, Aug 21, 2013 at 3:48 PM, kane <stef_9k at 163.com> wrote:

> Hi Lala, thank you for replying to this issue.
>
> this is our smb.conf:
> --------
> [global]
>         workgroup = MYGROUP
>         server string = DCS Samba Server
>         log file = /var/log/samba/log.vfs
>         max log size = 500000
> #       log level = 10
> #       max xmit = 65535
> #       getwd cache = yes
> #       use sendfile = yes
> #       strict sync = no
> #       sync always = no
> #       large readwrite = yes
>         aio read size = 262144
>         aio write size = 262144
>         aio write behind = true
> #       min receivefile size = 262144
>         write cache size = 268435456
> #      oplocks = yes
>         security = user
>         passdb backend = tdbsam
>         load printers = yes
>         cups options = raw
>         read raw = yes
>         write raw = yes
>         max xmit = 262144
>         read size = 262144
>         socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144
> SO_SNDBUF=262144
>         max protocol = SMB2
>
> [homes]
>         comment = Home Directories
>         browseable = no
>         writable = yes
>
>
> [printers]
>         comment = All Printers
>         path = /var/spool/samba
>         browseable = no
>         guest ok = no
>         writable = no
>         printable = yes
>
> [cifs]
>         path = /mnt/fuse
>         guest ok = yes
>         writable = yes
>
> [raw]
>         path = /dcsdata/d0
>         guest ok = yes
>         writable = yes
>
> [gvol]
>         comment = For samba export of volume  test
>         vfs objects = glusterfs
>         glusterfs:volfile_server = localhost
>         glusterfs:volume = soul
>         path = /
>         read only = no
>         guest ok = yes
> --------
>
> our win 7 client hardware:
> Intel(R) Xeon(R) E31230 @ 3.20GHz
> 8GB RAM
>
> linux client hardware:
> Intel(R) Xeon(R) CPU           X3430  @ 2.40GHz
> 16GB RAM
>
> Many thanks,
>
> -kane
>
> On 2013-8-21, at 4:53 PM, Lalatendu Mohanty <lmohanty at redhat.com> wrote:
>
>  On 08/21/2013 01:32 PM, kane wrote:
>
> Hello,
>
>  We have used glusterfs 3.4 with the latest samba-glusterfs-vfs library to
> test Samba performance from a Windows client.
>
>  Two glusterfs server nodes export a share named "gvol".
> hardware:
>  each brick uses a RAID 5 logical disk with 8 * 2TB SATA HDDs
>  10G network connection
>
>  One Linux client mounts "gvol" with the command:
> [root at localhost current]#  mount.cifs //192.168.100.133/gvol /mnt/vfs -o
> user=kane,pass=123456
>
>  Then I use iozone to test write performance in the mount dir "/mnt/vfs":
>  [root at localhost current]# ./iozone -s 10G -r 128k -i0 -t 4
> ...
>  File size set to 10485760 KB
>  Record Size 128 KB
>  Command line used: ./iozone -s 10G -r 128k -i0 -t 4
>  Output is in Kbytes/sec
>  Time Resolution = 0.000001 seconds.
>  Processor cache size set to 1024 Kbytes.
>  Processor cache line size set to 32 bytes.
>  File stride size set to 17 * record size.
>  Throughput test with 4 processes
>  Each process writes a 10485760 Kbyte file in 128 Kbyte records
>
>  Children see throughput for  4 initial writers =  487376.67 KB/sec
>  Parent sees throughput for  4 initial writers =  486184.67 KB/sec
>  Min throughput per process =  121699.91 KB/sec
>  Max throughput per process =  122005.73 KB/sec
>  Avg throughput per process =  121844.17 KB/sec
>  Min xfer = 10459520.00 KB
>
>  Children see throughput for  4 rewriters =  491416.41 KB/sec
>  Parent sees throughput for  4 rewriters =  490298.11 KB/sec
>  Min throughput per process =  122808.87 KB/sec
>  Max throughput per process =  122937.74 KB/sec
>  Avg throughput per process =  122854.10 KB/sec
>  Min xfer = 10474880.00 KB
>
>  With the cifs mount, the Linux client's write performance reaches about
> 480 MB/s per client;
>
>  but when I mount "gvol" from a Win7 client with the command:
> net use Z: \\192.168.100.133\gvol 123456 /user:kane
>
>  then run the same iozone test in drive Z:, even with a 1 MB write record:
>          File size set to 10485760 KB
>         Record Size 1024 KB
>         Command line used: iozone -s 10G -r 1m -i0 -t 4
>         Output is in Kbytes/sec
>         Time Resolution = -0.000000 seconds.
>         Processor cache size set to 1024 Kbytes.
>         Processor cache line size set to 32 bytes.
>         File stride size set to 17 * record size.
>         Throughput test with 4 processes
>         Each process writes a 10485760 Kbyte file in 1024 Kbyte records
>
>          Children see throughput for  4 initial writers  =  148164.82
> KB/sec
>         Parent sees throughput for  4 initial writers   =  148015.48 KB/sec
>         Min throughput per process                      =   37039.91 KB/sec
>         Max throughput per process                      =   37044.45 KB/sec
>         Avg throughput per process                      =   37041.21 KB/sec
>         Min xfer                                        = 10484736.00 KB
>
>          Children see throughput for  4 rewriters        =  147642.12
> KB/sec
>         Parent sees throughput for  4 rewriters         =  147472.16 KB/sec
>         Min throughput per process                      =   36909.13 KB/sec
>         Max throughput per process                      =   36913.29 KB/sec
>         Avg throughput per process                      =   36910.53 KB/sec
>         Min xfer                                        = 10484736.00 KB
>
>  iozone test complete.
>
>  it only reaches about 140 MB/s.
>
>  So, has anyone else met this problem? Is there a Win7 client setting to
> reconfigure for better performance?
>
>  Thanks!
>
>  kane
> ----------------------------------------------------------------
> Email:  kai.zhou at soulinfo.com
> Tel:    0510-85385788-616
>
>
>
> Hi kane,
>
> I do run I/O using a win7 client with glusterfs 3.4, but I have never
> compared the performance with a Linux cifs mount. I don't think we need any
> special configuration on the Windows side. I hope your Linux and Windows
> clients have similar configurations, i.e. RAM, cache, disk type, etc. However,
> I am curious to know whether your setup uses the vfs plug-in correctly. We can
> confirm that by looking at the smb.conf entry for the gluster volume, which
> should have been created automatically by the "gluster volume start" command.
>
> e.g., the smb.conf entry for one of my volumes, "smbvol", looks like this:
>
> [gluster-smbvol]
> comment = For samba share of volume smbvol
> vfs objects = glusterfs
> glusterfs:volume = smbvol
> path = /
> read only = no
> guest ok = yes
>
> Kindly paste the smb.conf entries for your gluster volume into this email.
> -Lala
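Following Lala's suggestion, a hedged sketch of checks for whether the glusterfs VFS module is actually in play (the module path below is typical for 64-bit RHEL/Fedora packages and is an assumption, not a guaranteed location):

```shell
# Sketch only: confirm the module file is installed and that the share's
# effective configuration actually references it.
ls /usr/lib64/samba/vfs/glusterfs.so               # module present?
testparm -s 2>/dev/null | grep -A1 'vfs objects'   # share uses it?
```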
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
>



-- 
Raghavendra Talur

