Re: First 6 nodes cluster with Octopus

On 3/31/21 2:52 PM, mabi wrote:
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, March 31, 2021 9:01 AM, Stefan Kooman <stefan@xxxxxx> wrote:

For best performance you want to give the MONs their own disk,
preferably flash. Ceph MONs start to use disk space when the cluster is
in an unhealthy state (to keep track of all PG changes). So it
depends as well: if you know you can fix any kind of disk / hardware
problem within a certain time frame, you don't need such big drives.
But if the MONs run out of disk space it's a show stopper.

You might run into deadlocks when trying to use CephFS on the MDS
nodes themselves, so try to avoid that.

Thanks Stefan for your answer.

That totally makes sense, so on my nodes 1, 2 and 3, which will have MON+MGR+MDS, I will add an additional SSD disk just for the MON.

Now, because I am planning to use cephadm, which deploys everything in containers, what would be the best mount point for that dedicated SSD disk? Would you suggest simply mounting that disk under /var/lib/docker or even /var/lib/docker/volumes, so that all the containers (MGR+MDS+MON) use the dedicated disk? Or is there a ceph.conf config parameter where I can specify which disk I want to use for the MON? Or any other best-practice suggestions in this regard?


The ceph.conf file is hardly used anymore nowadays. It's only there to tell the daemons where the monitors can be found, and if you use DNS-based configuration you don't need it at all. You can still use it, however, but you do not use it to configure disk layout and such.
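
For illustration, a minimal ceph.conf these days carries little more than the cluster fsid and the monitor addresses (the values below are placeholders):

  [global]
  # placeholder cluster id and monitor IPs
  fsid = 00000000-0000-0000-0000-000000000000
  mon_host = 10.0.0.1,10.0.0.2,10.0.0.3

With DNS-based configuration you would instead publish SRV records for the monitors (by default under _ceph-mon._tcp) and could drop mon_host entirely.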

cephadm uses podman. And from the documentation [1] you should be able to specify a directory for the monitors to use:

"Daemon containers deployed with cephadm, however, do not need /etc/ceph at all. Use the --output-dir *<directory>* option to put them in a different directory (for example, .). This may help avoid conflicts with an existing Ceph configuration (cephadm or otherwise) on the same host."

So yeah, it looks like you can just mount the SSD in a suitable place and tell cephadm what it is. For the OSDs you can use a "spec" file (YAML) to specify how and which drives you want to use, or add them one by one: ceph orch daemon add osd <host>:<device-path>
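
A rough sketch of such a spec file, with placeholder device selectors (the exact layout can differ a bit between releases, so check the docs for your version), would be:

  # osd_spec.yml
  service_type: osd
  service_id: default_drive_group
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

which you would then apply with something like "ceph orch apply -i osd_spec.yml".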

Maybe that also works for the monitor disk? Not sure, as I have not (yet) deployed with cephadm. In our Docker setup I mount the SSD before the Ceph services (Docker containers) are started (via a systemd mount dependency).
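
Roughly, that dependency looks like this; the device, mount path and service name below are placeholders for whatever you use on your hosts:

  # /etc/systemd/system/var-lib-ceph.mount
  # (the unit name must match the mount path, here /var/lib/ceph)
  [Unit]
  Description=SSD for Ceph MON data

  [Mount]
  # placeholder device and filesystem
  What=/dev/disk/by-id/your-mon-ssd
  Where=/var/lib/ceph
  Type=xfs

  [Install]
  WantedBy=local-fs.target

  # plus a drop-in for whatever unit starts the Ceph MON container,
  # e.g. /etc/systemd/system/<your-ceph-mon>.service.d/mount.conf:
  [Unit]
  RequiresMountsFor=/var/lib/ceph

With RequiresMountsFor= systemd makes sure the SSD is mounted before the container is started.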

Gr. Stefan

[1]: https://docs.ceph.com/en/latest/cephadm/install/#further-information-about-cephadm-bootstrap
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



