Re: Cephadm cluster with multiple MDS containers per server


Hi,

We operate a Ceph Nautilus (15.2.13) cluster with ~400 OSDs and 4

which one is it, Nautilus (14.2.x) or Octopus (15.2.x)?

A few months back a similar question was asked about running multiple RGW daemons per host [1], and there was no real answer, only that the docs [2] are ahead of the implementation. I don't think there's a solution for multiple MDS daemons per node with cephadm yet.

[1] https://www.mail-archive.com/ceph-users@xxxxxxx/msg09833.html
[2] https://docs.ceph.com/en/latest/cephadm/services/rgw/
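For what it's worth, the docs [2] describe a "count_per_host" placement option in the cephadm service spec, which is what you'd want here if it works in your release (as noted, the docs may be ahead of the implementation, so treat this as an untested sketch; the service_id and hostnames below are placeholders):

```yaml
# mds-spec.yaml -- hypothetical spec for 2 MDS daemons on each of 4 hosts.
# Apply with: ceph orch apply -i mds-spec.yaml
service_type: mds
service_id: cephfs          # name of your CephFS filesystem
placement:
  hosts:
    - mds1                  # replace with your actual MDS hostnames
    - mds2
    - mds3
    - mds4
  count_per_host: 2         # may not be honored on older releases
```

Note that deploying more daemons only gives you more standbys; to actually use them you'd still raise the active count with `ceph fs set cephfs max_mds <N>`.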


Quoting "McLennan, Kali A." <kali_ann@xxxxxx>:

We operate a Ceph Nautilus (15.2.13) cluster with ~400 OSDs and 4 dedicated MDS servers. Currently we are running 2 active, 2 standby MDS, but would like to scale the MDS containers horizontally on each of the physical MDS servers.

The MDS servers in question have dual 16-core Xeon processors with 128GB of physical RAM. Active resource utilization is around the expected 1 core and 8GB of RAM based on the MDS cache settings. Ceph documentation from before cephadm suggests scaling the number of MDS daemons per node, but our attempts to work out how to do this on a containerized cluster have not been productive. Has anyone worked out how to run multiple MDS containers on the same physical server?

Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





