Re: ceph cluster planning size / disks

> 
> Thanks! Very good links! :)
> 
> I need to subtract from the usable capacity max usable/server count to handle 1 server failure. Anything else I need to subtract?

That buffer for server-failure recovery is a good idea and often missed.  These days, though, Ceph is pretty good at detecting that an entire node is down, and with a judicious mon_osd_down_out_subtree_limit one can usually avoid a thundering herd of recovery traffic if the node can be brought back up within a short period of time.  That said, it’s still good to have that margin in case the repair takes a long time.  The backfillfull/full ratios can be squeezed higher in a pinch, but that can be risky.
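The subtraction in the question above can be sketched numerically. A minimal sketch, assuming a hypothetical cluster shape (node count, drives per node, drive size, EC profile) and an illustrative full ratio; none of these figures come from this thread:

```python
# Rough usable-capacity estimate for a Ceph cluster, reserving enough
# headroom to re-replicate the contents of one failed server.
# All figures below are illustrative assumptions, not values from this thread.

def usable_capacity_tib(nodes, drives_per_node, drive_tib,
                        data_chunks, total_chunks, full_ratio=0.85):
    """Return a conservative usable-capacity estimate in TiB.

    data_chunks/total_chunks: k/(k+m) for an EC pool, or
    1/replica_count for a replicated pool.  full_ratio mirrors the
    OSD full ratio: writes stop past it, so it caps what is usable.
    """
    raw = nodes * drives_per_node * drive_tib
    # Reserve one node's worth of raw space so recovery after a
    # server failure does not push the surviving OSDs toward full.
    raw_after_failure = raw - drives_per_node * drive_tib
    efficiency = data_chunks / total_chunks
    return raw_after_failure * full_ratio * efficiency

# Example: 6 nodes x 12 x 10 TiB drives, EC 4+2
print(round(usable_capacity_tib(6, 12, 10.0, 4, 6), 1))  # -> 340.0
```

This is deliberately pessimistic: it assumes the whole failed node's capacity must be absorbed elsewhere before the cluster is healthy again.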

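For reference, the knobs mentioned above are adjusted roughly like this (the values are illustrative, not recommendations tuned to any particular cluster):

```shell
# Treat a whole down host as one failure event and don't auto-mark its
# OSDs "out", avoiding a thundering herd if the node comes right back.
ceph config set mon mon_osd_down_out_subtree_limit host

# The fill-ratio guard rails; raising them buys headroom in a pinch,
# at the cost of less margin before OSDs stop accepting writes.
ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90
ceph osd set-full-ratio 0.95
```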

> 
> 
>> 
>> https://docs.clyso.com/tools/erasure-coding-calculator/
>> 
>> 
>> 
>> On Sat, 16 Nov 2024 at 10:04, Marc Schoechlin
>> <ms@xxxxxxxxxx> wrote:
>> 
>>> Hi Anthony,
>>> 
>>> this is a nice one! The original mail had a broken link :-)
>>> https://www.osnexus.com/ceph-designer
>>> 

Curious.  Apple Mail has been weird on me lately.


>>> Perhaps this would also be a good project to build an open platform
>>> that helps to do detailed planning of Ceph clusters in general (server
>>> and switch hardware, different types of pools, service distribution,
>>> placement groups, ....).
>>> Especially if it also had the option of using or creating hardware
>>> profiles (created by users themselves or by interested hardware
>>> manufacturers) for different server and switch manufacturers, and
>>> comparing them in terms of cost, energy consumption, throughput,
>>> and usable capacity.
>>> 
>>> Regards
>>> Marc
>>> 
>>> On 15.11.24 at 16:55, Anthony D'Atri wrote:
>>>> https://www.osnexus.com/ceph-designer;
>>>> 
>>>>> On Nov 15, 2024, at 10:51 AM, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
>>>>> 
>>>>> 
>>>>> I was wondering if there is some online tool that can help you with
>>>>> calculating usable storage from nodes, drives per node,
>>>>> replication/erasure used etc.
>>>>> 
>>>>> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx