Re: How to use hardware

Hi Albert,

Five mons instead of three will let you limit the impact if you break a mon
(for example, when its file system fills up): the cluster keeps quorum with
two mons down instead of only one.
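
For example, if the cluster is managed with cephadm (the host names
server1..server5 below are just placeholders), pinning five mons to
explicit hosts would look roughly like:

    # Run five mons on an explicit set of hosts ("count host1 host2 ...").
    ceph orch apply mon --placement="5 server1 server2 server3 server4 server5"

    # Verify quorum and see where the mons landed.
    ceph mon stat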

Five MDSs instead of three makes sense if the workload can be distributed
over several subtrees of your file system. Sometimes it can also make sense
to have several file systems instead, in order to limit the consequences
when an infrastructure runs several active MDSs.
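
Note that several active MDSs only help if different subtrees end up on
different ranks. A rough sketch, assuming a file system named "cephfs"
mounted on a client at /mnt/cephfs (both placeholders):

    # Allow two active MDS ranks; the remaining MDS daemons stay standby.
    ceph fs set cephfs max_mds 2

    # Pin two directory trees to different ranks so the load is really split
    # (setfattr comes from the attr package, run on a client with the mount).
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/scratch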

Concerning performance, if you see that a node is busy enough to impact the
cluster, you can always consider relocating certain services.
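
With cephadm that is mostly a matter of re-applying the placement, for
example (again with placeholder host names):

    # Move the monitoring stack off a busy node; cephadm redeploys the
    # daemons on the new host and removes them from the old one.
    ceph orch apply grafana --placement="server2"
    ceph orch apply prometheus --placement="server2"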



On Fri, Nov 17, 2023 at 11:00, Albert Shih <Albert.Shih@xxxxxxxx> wrote:

> Hi everyone,
>
> To deploy a medium-sized Ceph cluster (300 OSDs), we have 6 bare-metal
> servers for the OSDs and 5 bare-metal servers for the services
> (MDS, MON, etc.).
>
> Each of those 5 bare-metal servers has 48 cores and 256 GB of RAM.
>
> What would be the smartest way to use those 5 servers? I see two ways:
>
>   First:
>
>     Server 1: MDS, MON, Grafana, Prometheus, web UI
>     Server 2: MON
>     Server 3: MON
>     Server 4: MDS
>     Server 5: MDS
>
>   So 3 MDS, 3 MON, and we can lose 2 servers.
>
>   Second:
>
>     KVM on each server:
>       Server 1: 3 VMs: one for Grafana & co., and 1 MDS, 2 MON
>       other servers: 1 MDS, 1 MON
>
>   In total: 5 MDS, 5 MON, and we can lose 4 servers.
>
> So on paper the second seems smarter, but it's also more complex, so my
> question is: «is it worth the complexity to have 5 MDS/MON for 300
> OSDs?»
>
> Important: the main goal of this Ceph cluster is not to get the maximum
> I/O speed. I would not say that speed is not a factor, but it's not the
> main point.
>
> Regards.
>
>
> --
> Albert SHIH 🦫 🐸
> Observatoire de Paris
> France
> Heure locale/Local time:
> ven. 17 nov. 2023 10:49:27 CET
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



