Re: 16.2.10 Cephfs with CTDB, Samba running on Ubuntu

Hi Marco,

Sure thing, I will include our Samba configuration file and our CTDB
configuration file below. There is something in the CTDB configuration file
you might find interesting, given that you are running Ceph in containers.

We have looked into comparing kernel CephFS mounts and vfs_ceph too. What we
have found is that for a single client transfer, i.e. maximizing the speed of
a single operation, the kernel mount is better, but the vfs_ceph module
performs better at scale in aggregate throughput.

Since the vfs_ceph module creates an individual CephFS mount for each Samba
client, you get the nice benefit of each client having its own mount, and
therefore better aggregate throughput when you have, say, hundreds of
clients. The downside is that for an individual client, the userspace library
sets the performance ceiling lower than a kernel mount would.

What we've been doing is setting it up so each share is its own CephFS
kernel mount, i.e. /mnt/cephfs/share1 and /mnt/cephfs/share2 are two
individual kernel mounts pointing directly at the paths within CephFS that we
want to share out.
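
As a rough sketch, a per-share kernel mount would be set up with something
like the commands below (the monitor address, client name, and secret file
path are placeholders, not our actual values):

  mount -t ceph mon1.example.com:6789:/share1 /mnt/cephfs/share1 -o name=samba,secretfile=/etc/ceph/samba.secret
  mount -t ceph mon1.example.com:6789:/share2 /mnt/cephfs/share2 -o name=samba,secretfile=/etc/ceph/samba.secret

Each Samba share then just points at the corresponding mount point, e.g.
path = /mnt/cephfs/share1.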

The good thing about the kernel mounts is that you would need quite a large
number of clients hitting them at the same time before the throughput of a
single CephFS mount became an issue, but if you do have a use case where
thousands of clients may access a single SMB share, it may be worth looking
into the vfs_ceph module (see the sketch below).
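
For completeness, a vfs_ceph share looks roughly like this in smb.conf (the
share name, CephFS path, and ceph user here are illustrative, not taken from
our setup):

  [share1]
    path = /share1
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    kernel share modes = no

With vfs_ceph the path is interpreted within the CephFS namespace rather than
as a local mount point.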

To put it more simply: kernel CephFS mounts with Samba perform better when
you are maximizing performance for a single client, while the vfs_ceph module
performs better when you are maximizing aggregate throughput across many
clients.

Here are the configuration files:

samba conf:

[global]
  #Domain Member
  security = ads
  #CTDB Node
  clustering = yes

  netbios name = SUPPORT-CEPH
  server string = Samba %v

  idmap config * : backend = tdb2
  idmap config * : range = 10000-99999
  idmap config DOMAIN : backend = rid
  idmap config DOMAIN : range = 100000-999999

  realm = DOMAIN.LOCAL
  workgroup = DOMAIN

  winbind enum groups = True
  winbind enum users = True
  winbind use default domain = True
  winbind refresh tickets = yes
  winbind offline logon = yes
  template shell = /bin/bash

  ea support = yes
  map acl inherit = yes
  store dos attributes = yes
  vfs objects = acl_xattr

  log level = 0
  registry shares = yes
  include = registry
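
Because the shares themselves live in the registry (registry shares = yes and
include = registry above), the individual shares are added with net conf
rather than in smb.conf. A hypothetical share on one of the kernel mounts
would be created along these lines (name and path are examples only):

  net conf addshare share1 /mnt/cephfs/share1 writeable=y guest_ok=n "CephFS share"
  net conf setparm share1 "vfs objects" "acl_xattr"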

ctdb conf:

[legacy]
    # realtime scheduling = true will cause ctdb to fail when docker containers are running
    realtime scheduling = false

[cluster]
    CTDB_SET_DeterministicIPs=1
    recovery lock = !/usr/lib/x86_64-linux-gnu/ctdb/ctdb_mutex_ceph_rados_helper ceph client.samba cephfs_metadata ctdb_lock
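
The recovery lock line uses ctdb_mutex_ceph_rados_helper, which takes the
Ceph cluster name, the cephx user, the RADOS pool, and the lock object name.
The client.samba user just needs access to that pool for the lock object; as
an illustration (the caps here are an assumption, adjust them for your
environment), the key could be created with something like:

  ceph auth get-or-create client.samba mon 'allow r' osd 'allow rwx pool=cephfs_metadata' -o /etc/ceph/ceph.client.samba.keyring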

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


