Re: cephfs and samba

Hi Felix,

The last time we looked at Samba was with Nautilus, accessed via a Scientific Linux 7.4 server and client. We didn't use clustering or the VFS module.

Performance was disappointing, though our use case is a small number of clients with large scientific data sets.
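For context: without the VFS module, the gateway simply exports a locally mounted CephFS path as an ordinary share. A minimal sketch of that arrangement is below; the monitor address, CephX user, keyring path and share name are illustrative, not our actual configuration:

# Kernel-mount CephFS on the Samba gateway (placeholder monitor and keyring)
mount -t ceph 10.1.3.1:6789:/ /cephfs -o name=samba,secretfile=/etc/ceph/samba.secret

# /etc/samba/smb.conf -- plain path share, no vfs_ceph
[gollum]
    path = /cephfs/gollum
    read only = no
    valid users = gollum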


------------------------------------------------------------
Connecting to the Ceph cluster via Samba:

------------------------------------------------------------
r=7508,w=2488 IOPS

[root@fmb110 ~]# mount -t cifs //cephfs/gollum/ /tmp/gollum -o uid=1084,gid=1084,username=gollum,password=XXXXXXXXX,vers=3.0,sec=ntlmv2,cache=loose --verbose
mount.cifs kernel mount operations: ip=10.1.3.29,unc=\\cephfs\gollum,vers=3.0,sec=ntlmv2,cache=loose,uid=1084,gid=1084,user=gollum,pass=********

-bash-4.2$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=29.3MiB/s,w=9952KiB/s][r=7508,w=2488 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1989567: Fri Jan 29 10:49:35 2021
   read: IOPS=7528, BW=29.4MiB/s (30.8MB/s)(3070MiB/104390msec)
   bw (  KiB/s): min=15592, max=33416, per=99.98%, avg=30107.59, stdev=2755.56, samples=208
   iops        : min= 3898, max= 8354, avg=7526.88, stdev=688.89, samples=208
  write: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(1026MiB/104390msec)
   bw (  KiB/s): min= 5320, max=11552, per=99.99%, avg=10062.51, stdev=975.01, samples=208
   iops        : min= 1330, max= 2888, avg=2515.61, stdev=243.75, samples=208
  cpu          : usr=2.53%, sys=14.27%, ctx=1048710, majf=0, minf=596
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=3070MiB (3219MB), run=104390-104390msec
  WRITE: bw=9.83MiB/s (10.3MB/s), 9.83MiB/s-9.83MiB/s (10.3MB/s-10.3MB/s), io=1026MiB (1076MB), run=104390-104390msec

------------------------------------------------------------
For comparison, here we connect to a Windows 10 server with hardware RAID:

------------------------------------------------------------
[r=28.9k,w=9460 IOPS]

fmb100_jog> cd /kate/Jake_tests/
fmb100_jog> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=113MiB/s,w=36.0MiB/s][r=28.9k,w=9460 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2469879: Fri Jan 29 10:19:43 2021
   read: IOPS=22.3k, BW=86.0MiB/s (91.2MB/s)(3070MiB/35295msec)
   bw (  KiB/s): min=  360, max=131192, per=100.00%, avg=92755.55, stdev=29910.00, samples=67
   iops        : min=   90, max=32798, avg=23188.88, stdev=7477.50, samples=67
  write: IOPS=7441, BW=29.1MiB/s (30.5MB/s)(1026MiB/35295msec)
   bw (  KiB/s): min=  104, max=43568, per=100.00%, avg=30538.91, stdev=10557.51, samples=68
   iops        : min=   26, max=10892, avg=7634.71, stdev=2639.37, samples=68
  cpu          : usr=6.70%, sys=37.12%, ctx=683686, majf=0, minf=889
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=86.0MiB/s (91.2MB/s), 86.0MiB/s-86.0MiB/s (91.2MB/s-91.2MB/s), io=3070MiB (3219MB), run=35295-35295msec
  WRITE: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=1026MiB (1076MB), run=35295-35295msec

------------------------------------------------------------
We have also done some testing with the native Windows CephFS driver:

https://cloudbase.it/ceph-for-windows/

Performance was fantastic on a Windows client:
ATTO shows up to 600MB/s write, 527MB/s read, with a 1GB file size.

However, it doesn't support mandatory file locks, and it maps only a single UID/GID per mount, so it's not suitable for general use.
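For anyone who still wants to try it, mapping a drive with the driver is roughly as below. This is a sketch only: the exact flags depend on the Ceph for Windows release, the drive letter is arbitrary, and it assumes ceph.conf plus a client keyring are already in place under C:\ProgramData\ceph (the installer default):

:: Map CephFS to drive X: (credentials come from ceph.conf / keyring)
ceph-dokan.exe -l X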
------------------------------------------------------------

You might also test out MoSMB, though this comes at a cost.


Our plan going forward is to see whether the combination of the newer Ceph kernel driver in AlmaLinux 8.6, a recent version of Samba, and Quincy improves performance...
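If we do revisit the VFS module as part of that comparison, the share stanza would look roughly like the following. This is only a sketch, assuming Samba is built with vfs_ceph and a CephX user called "samba"; the share name and paths are illustrative:

# /etc/samba/smb.conf -- vfs_ceph talks to the cluster directly, no local mount needed
[gollum]
    vfs objects = ceph
    path = /gollum
    kernel share modes = no
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba

With vfs_ceph the path is interpreted relative to the CephFS root rather than the local filesystem.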

best regards

Jake


--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


