Re: Balanced MDS, all as active, and recommended client settings.

2018-02-21 19:26 GMT+01:00 Daniel Carrasco <d.carrasco@xxxxxxxxx>:
Hello,

I've created a Ceph cluster with 3 nodes to serve files to a high-traffic webpage. I've configured two MDS as active and one as standby, but after putting the new system into production I've noticed that the MDS daemons are not balanced and one server gets most of the client requests (one MDS handles about 700 or fewer, versus 4,000 or more on the other).

Is it possible to better distribute the MDS load across both nodes?
Is it possible to set all nodes as active without problems?
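
(For the load distribution, I've read that manual subtree pinning might help; if I understand right it would be something like the line below, where /mnt/cephfs/html is just an example directory from my tree, so please correct me if that's not the recommended way:)

  # pin this directory subtree to MDS rank 1 via the ceph.dir.pin xattr
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/html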

I know it's possible to set max_mds to 3 so that all of them become active, but I'd like to know what happens if one node goes down, for example, or whether there are other side effects. See the sketch below.
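
(For reference, I assume the steps would be something like the ones below, with "cephfs" being just my filesystem's name, and with allow_multimds needed on Luminous as far as I know:)

  # allow more than one active MDS (required on Luminous before raising max_mds)
  ceph fs set cephfs allow_multimds true --yes-i-really-mean-it
  # raise the number of active ranks to 3; with only 3 MDS this leaves no standby
  ceph fs set cephfs max_mds 3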


My last question is whether someone can recommend a good client configuration (cache size, for example), and maybe something to lower the load on the metadata servers.


Thanks!!

I forgot to mention my configuration xD.

I have a three-node, all-in-one (AIO) cluster:
  • 3 Monitors
  • 3 OSDs
  • 3 MDS (2 active and 1 standby)
  • 3 MGR (1 active)
The data has 3 copies, so every node holds a full copy (the default CRUSH rule places one replica per host).
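
(To double-check the replica count I assume something like this works, where cephfs_data is just my guess at the data pool's name:)

  ceph osd pool get cephfs_data size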

My configuration file is:
[global]
fsid = BlahBlahBlah
mon_initial_members = fs-01, fs-02, fs-03
mon_host = 192.168.4.199,192.168.4.200,192.168.4.201
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 192.168.4.0/24
osd pool default size = 3


##
### OSD
##
[osd]
  osd_pool_default_pg_num = 128
  osd_pool_default_pgp_num = 128
  osd_pool_default_size = 3
  osd_pool_default_min_size = 2

  osd_mon_heartbeat_interval = 5
  osd_mon_report_interval_max = 10
  osd_heartbeat_grace = 15
  osd_fast_fail_on_connection_refused = True


##
### MON
##
[mon]
  mon_osd_min_down_reporters = 2

##
### MDS
##
[mds]
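  # note: mds_cache_size is the old inode-count limit; since Luminous it is
  # superseded by mds_cache_memory_limit (in bytes; 792723456 = 756 MiB)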
  mds_cache_size = 250000
  mds_cache_memory_limit = 792723456

##
### Client
##
[client]
  client_cache_size = 32768
  client_mount_timeout = 30
  client_oc_max_objects = 2000
  client_oc_size = 629145600
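  # note: the rbd_* options below only affect librbd clients, not CephFS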
  rbd_cache = true
  rbd_cache_size = 671088640
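
(A note in case it matters: as far as I know the [client] cache options are read by ceph-fuse / libcephfs and not by the kernel mount, so they only apply to a FUSE mount like the one below, with the monitor address taken from my mon_host list and /mnt/cephfs being just my mount point:)

  # mount CephFS through the FUSE client so the [client] section is honored
  ceph-fuse -m 192.168.4.199:6789 /mnt/cephfs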


Thanks!!!

--
_________________________________________

      Daniel Carrasco Marín
      Ingeniería para la Innovación i2TIC, S.L.
      Tlf:  +34 911 12 32 84 Ext: 223
      www.i2tic.com
_________________________________________
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
