Re: First 6-node cluster with Octopus

Mabi;

We're running Nautilus, and I am not wholly convinced of the "everything in containers" view of the world, so take this with a small grain of salt...

1) We don't run Ubuntu, sorry.  I suspect the documentation highlights 18.04 because it's the current LTS release.  Personally, if I preferred 20.04 over 18.04, I would try building a cluster on 20.04 and see how it goes.  You might also look at this: https://www.server-world.info/en/note?os=Ubuntu_20.04&p=ceph15&f=1

2) Containers are the preferred way of doing things in Octopus, so yes, it's considered stable.
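
If it helps to picture it, the cephadm workflow really is just a handful of commands.  This is only a rough sketch from memory, with made-up hostnames and a made-up IP, so check the Octopus cephadm docs for the exact steps:

  # On the first node; pulls the Octopus container image and starts the initial MON and MGR
  cephadm bootstrap --mon-ip 192.168.1.11

  # Push the cluster's SSH key to another node, then register it with the orchestrator
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
  ceph orch host add node2

Every daemon deployed from there (MON, MGR, OSD, MDS) runs as a container managed by cephadm.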

3) Our first evaluation cluster was 3 Intel Atom C3000 nodes, with each node running all the daemons (MON, MGR, MDS, 2 x OSD).  It worked fine, and allowed me to demonstrate the concepts at a size I could carry around.
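
With cephadm, the role layout is mostly a placement decision.  Whether you colocate everything as I did, or split MON/MGR/MDS from the OSD nodes as you're planning, it comes down to something roughly like the following (hostnames made up, and double-check the exact syntax against the Octopus orchestrator docs):

  ceph orch apply mon "node1,node2,node3"
  ceph orch apply mgr "node1,node2,node3"
  ceph orch apply mds cephfs --placement="3 node1 node2 node3"
  # OSDs go on whichever nodes have unused disks
  ceph orch apply osd --all-available-devices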

4) Yes, and no...  When the cluster is happy, everything is generally happy.  In certain warning and error situations, MONs can chew through disk space fairly quickly.  I'm not familiar with the disk usage of the other daemons.

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com

-----Original Message-----
From: mabi [mailto:mabi@xxxxxxxxxxxxx] 
Sent: Tuesday, March 30, 2021 12:03 PM
To: ceph-users@xxxxxxx
Subject: First 6-node cluster with Octopus

Hello,

I am planning to set up a small Ceph cluster for testing purposes with 6 Ubuntu nodes, and I have a few questions, mostly regarding planning of the infrastructure.

1) The OS requirements in the documentation mention Ubuntu 18.04 LTS. Is it OK to use Ubuntu 20.04 instead, or should I stick with 18.04?

2) The documentation recommends using cephadm for new deployments, so I will use that. However, I read that with cephadm everything runs in containers. Is this the new way to go, or is Ceph in containers still somewhat experimental?

3) As I will need CephFS, I will also need MDS servers, so with a total of 6 nodes I am planning the following layout:

Node 1: MGR+MON+MDS
Node 2: MGR+MON+MDS
Node 3: MGR+MON+MDS
Node 4: OSD
Node 5: OSD
Node 6: OSD

Does this make sense? I am mostly interested in stability and HA with this setup.

4) Are there any special disk requirements for the MGR+MON+MDS nodes, or can I just use the OS disks on these nodes? As far as I understand, the MDS will create a metadata pool on the OSDs.

Thanks for the hints.

Best,
Mabi



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx