You should take into account that the cluster is considered full at 95% utilization (the default full ratio); at that point client writes are blocked. At 85% utilization (the default nearfull ratio) you will see a near_full warning in the ceph status.

joachim.kraftmayer@xxxxxxxxx
www.clyso.com
Hohenzollernstr. 27, 80801 Munich
Utting | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE275430677

On Mon, 18 Nov 2024 at 14:07, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

> Thanks! Very good links! :)
>
> I need to subtract usable capacity / server count from the usable capacity
> to handle one server failure. Is there anything else I need to subtract?
>
> >
> > https://docs.clyso.com/tools/erasure-coding-calculator/
> >
> >
> > On Sat, 16 Nov 2024 at 10:04, Marc Schoechlin <ms@xxxxxxxxxx> wrote:
> >
> > > Hi Anthony,
> > >
> > > this is a nice one! The original mail had a broken link :-)
> > > https://www.osnexus.com/ceph-designer
> > >
> > > Perhaps this would also be a good project for building an open platform
> > > that helps with detailed planning of Ceph clusters in general (server
> > > and switch hardware, different types of pools, service distribution,
> > > placement groups, ...).
> > > Especially if it also had the option of using or creating hardware
> > > profiles (created by the user or by interested hardware manufacturers)
> > > for different server and switch vendors and comparing them in terms of
> > > cost, energy consumption, throughput and usable capacity.
> > >
> > > Regards
> > > Marc
> > >
> > > On 15.11.24 at 16:55, Anthony D'Atri wrote:
> > > > https://www.osnexus.com/ceph-designer
> > > >
> > > >> On Nov 15, 2024, at 10:51 AM, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
> > > >>
> > > >> I was wondering if there is some online tool that can help you with
> > > >> calculating usable storage from nodes, drives per node,
> > > >> replication/erasure used etc.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
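
A rough back-of-the-envelope sketch of the calculation discussed above (what to subtract for one server failure and for the nearfull headroom). This is not the linked Clyso calculator; the function name, the host/drive counts, the EC profile and the 0.85 nearfull ratio are illustrative assumptions.

```python
# Illustrative planning sketch only -- not the Clyso erasure-coding calculator.
# Assumes default-style ratios (nearfull 0.85) and a host failure domain.

def usable_capacity_tib(hosts, drives_per_host, drive_tib,
                        k=4, m=2, nearfull_ratio=0.85):
    """Estimate usable planning capacity for an EC k+m pool, keeping enough
    headroom to lose one host and still stay below the nearfull ratio."""
    raw = hosts * drives_per_host * drive_tib               # total raw capacity
    raw_after_failure = raw - drives_per_host * drive_tib   # survive 1 host down
    ec_efficiency = k / (k + m)                             # data vs. data+parity chunks
    # Plan to stay under the nearfull warning threshold even after the failure.
    return raw_after_failure * ec_efficiency * nearfull_ratio

if __name__ == "__main__":
    # Example: 8 hosts, 12 x 16 TiB drives each, EC 4+2.
    print(f"{usable_capacity_tib(8, 12, 16):.1f} TiB usable for planning")
```

In practice you would also check that the host count is at least k + m for a host failure domain, and that recovery after a host loss does not push the remaining OSDs over the nearfull ratio; the calculators linked above cover these details more thoroughly.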