Re: First 6 nodes cluster with Octopus

On 3/30/21 9:02 PM, mabi wrote:
Hello,

I am planning to set up a small Ceph cluster for testing purposes with 6 Ubuntu nodes and have a few questions, mostly regarding planning of the infrastructure.

1) The OS requirements in the documentation mention Ubuntu 18.04 LTS. Is it OK to use Ubuntu 20.04 instead, or should I stick with 18.04?

20.04 is also an LTS, and perfectly fine to use. On a test cluster we use it with ceph-ansible and a Docker-based deployment.


2) The documentation recommends using Cephadm for new deployments, so I will use that, but I read that with Cephadm everything runs in containers. Is this the new way to go, or is Ceph in containers still somewhat experimental?

It should just work. There is an ongoing effort to improve the documentation for it. It will get even better in future releases (Pacific and beyond).
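
If it helps, bootstrapping with cephadm on Octopus is only a handful of commands. A rough sketch; the hostnames and the monitor IP below are just placeholders for your own:

# on node1: fetch the standalone cephadm script (Octopus branch) and bootstrap
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
./cephadm bootstrap --mon-ip 192.168.1.11

# copy the cluster SSH key to the other hosts, then add them to the orchestrator
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3

# from inside 'cephadm shell' on node1:
ceph orch host add node2
ceph orch host add node3

Every daemon ends up as a systemd-managed container on its host; 'ceph orch ps' lists them, and so does 'podman ps' / 'docker ps'.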


3) As I will be needing CephFS I will also need MDS servers, so with a total of 6 nodes I am planning the following layout:

Node 1: MGR+MON+MDS
Node 2: MGR+MON+MDS
Node 3: MGR+MON+MDS
Node 4: OSD
Node 5: OSD
Node 6: OSD

Does this make sense? I am mostly interested in stability and HA with this setup.

It depends on the specifications of the systems. If you want the least amount of surprises, isolate each daemon on separate hardware. That's not a requirement, though, just my opinion from an ops point of view; for a test setup it's probably fine. Do note that MDS daemons can use a lot of memory (if you allow them to), so it depends on the workloads you want to test. The MDS and MGR don't use any local disks to store state. But if you want to enable debug logging at some point in time, you need loads of disk space, as the logs can easily grow by gigabytes per minute.
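
For what it's worth, with cephadm you can pin the daemons to exactly that layout via placement specs, and you can cap the MDS cache so it doesn't starve the MON/MGR on the shared nodes. A sketch, assuming the hostnames from your layout, a filesystem named 'cephfs' and a 4 GiB cache limit purely as examples:

# from inside 'cephadm shell':
ceph orch apply mon --placement="node1 node2 node3"
ceph orch apply mgr --placement="node1 node2 node3"

# create the CephFS pools + filesystem and put the MDS daemons on the same nodes
ceph fs volume create cephfs
ceph orch apply mds cephfs --placement="node1 node2 node3"

# bound the MDS cache (value in bytes, 4 GiB here)
ceph config set mds mds_cache_memory_limit 4294967296

# OSDs on the remaining hosts, one per data disk
ceph orch daemon add osd node4:/dev/sdb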


4) Is there any special kind of demand in terms of disks on the MGR+MON+MDS nodes? Or can I just use my OS disks on these nodes? As far as I understand, the MDS will create a metadata pool on the OSDs.

For best performance you want to give the MONs their own disk, preferably flash. Ceph MONs start to use disk space when the cluster is in an unhealthy state (to keep track of all PG changes), so it depends as well. If you know you can fix any kind of disk or hardware problem within a certain time frame, you don't need drives *that* big. But if the MONs run out of disk space, it's a showstopper.
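
If you want to keep an eye on that, the monitor store lives under /var/lib/ceph with a cephadm deployment, and the size at which Ceph starts warning about it is tunable. A small sketch; the wildcard in the path (the cluster fsid) and the 20 GiB threshold are just examples:

# size of the mon store on a cephadm (containerized) deployment
du -sh /var/lib/ceph/*/mon.*/store.db

# Ceph raises a health warning once a mon store grows past mon_data_size_warn
# (default 15 GiB); raise it if you deliberately provision bigger mon disks
ceph config set mon mon_data_size_warn 21474836480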

You *might* run into deadlocks when trying to mount CephFS on the MDS nodes themselves, so try to avoid that.
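
If you do want to mount CephFS for testing, doing it from a machine that runs no MDS or OSD sidesteps that. For example with the kernel client (the monitor address, user name and secret file path are placeholders):

# on a separate client node
mkdir -p /mnt/cephfs
mount -t ceph 192.168.1.11:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret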

Gr. Stefan