Re: First 6 nodes cluster with Octopus

On Tue, Mar 30, 2021 at 2:03 PM mabi <mabi@xxxxxxxxxxxxx> wrote:
>
> Hello,
>
> I am planning to set up a small Ceph cluster for testing purposes with 6 Ubuntu nodes, and I have a few questions, mostly about planning the infrastructure.
>
> 1) The OS requirements in the documentation mention Ubuntu 18.04 LTS. Is it OK to use Ubuntu 20.04 instead, or should I stick with 18.04?

All of our clusters run 20.04.2 + the HWE kernel, and they work wonderfully.
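For reference, enabling the HWE kernel on a stock 20.04 install is just a package away (a minimal sketch; double-check the package name against Ubuntu's HWE docs for your point release):

    # Install the Ubuntu 20.04 hardware-enablement (HWE) kernel and reboot into it
    sudo apt update
    sudo apt install --install-recommends linux-generic-hwe-20.04
    sudo reboot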

> 2) The documentation recommends cephadm for new deployments, so I will use that. I read that with cephadm everything runs in containers, so is this the way to go now, or is Ceph in containers still somewhat experimental?

We use cephadm + podman for our production clusters and have had a
great experience. You just need to be comfortable operating
containers, so do some reading on how they work first. We're running
Octopus 15.2.10 (we started with an earlier 15.2.x and have upgraded
since), and we will move to Pacific in the future.
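For a rough idea of what getting started looks like (the MON IP below is a placeholder; see the cephadm docs for the full procedure):

    # Fetch the standalone cephadm script for the Octopus branch
    curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    chmod +x cephadm

    # Bootstrap the first node; this starts a MON and MGR in containers
    sudo ./cephadm bootstrap --mon-ip 10.0.0.1

    # Everything runs under podman, so the usual container tooling applies
    sudo podman ps
    sudo ./cephadm shell -- ceph -s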

> 3) As I will need CephFS, I will also need MDS servers, so with a total of 6 nodes I am planning the following layout:
>
> Node 1: MGR+MON+MDS
> Node 2: MGR+MON+MDS
> Node 3: MGR+MON+MDS
> Node 4: OSD
> Node 5: OSD
> Node 6: OSD
>
> Does this make sense? I am mostly interested in stability and HA with this setup.

We don't use CephFS, so I can't help here.
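That said, the placement side of it is generic cephadm. A rough sketch of how that layout could be expressed (hostnames and the device path are placeholders, and the filesystem name "cephfs" is just an example):

    # Pin the monitor and manager daemons to the first three hosts
    ceph orch apply mon --placement="node1 node2 node3"
    ceph orch apply mgr --placement="node1 node2 node3"

    # Create the filesystem and co-locate its MDS daemons on the same hosts
    ceph fs volume create cephfs --placement="node1 node2 node3"

    # Add OSDs on the remaining hosts once their disks are attached
    ceph orch daemon add osd node4:/dev/sdb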

> 4) Are there any special disk requirements for the MGR+MON+MDS nodes, or can I just use the OS disks there? As far as I understand, the MDS will create a metadata pool on the OSDs.

Same, no MDS experience.


> Thanks for the hints.
>
> Best,
> Mabi
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


