Re: Shall host weight auto reduce on hdd failure?

On Wed, 4 Dec 2019 at 01:37, Milan Kupcevic <milan_kupcevic@xxxxxxxxxxx> wrote:
> This cluster can handle this case at this moment as it has got plenty of
> free space. I wonder how this is going to play out when we get to 90%
> usage on the whole cluster. A single backplane failure in a node takes

You should not run any file storage system at 90% full, Ceph or otherwise.

You should set a target for how full the cluster can get before you must add new hardware, be it more drives or more hosts with drives. As noted below, that calculation should probably include at least one failed node, so that planned maintenance doesn't become a critical situation. In terms of raw disk space, this means the cluster should probably aim for at most 50-60% usage until it gets large in terms of number of hosts, and up to that point, aim to have more resources added when it hits around 70%. (Perhaps something as simple as "start planning expansion at 50%, get delivery before 75%".)

When building storage clusters, raw disk space should hopefully be one of the cheaper resources to expand, compared to network, power, rack space, admin time/salaries and all that.
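The rule of thumb above ("leave room for one failed node") can be sketched as a quick back-of-the-envelope calculation. This is only an illustration, not anything Ceph computes for you; the function name and the 75% post-failure ceiling are assumptions chosen for the example:

```python
# Hypothetical capacity-planning sketch: how full can a cluster safely get
# if it must be able to absorb the loss of one entire host while keeping
# the survivors below some post-failure ceiling?
# (Function name and the 0.75 default are illustrative, not from Ceph.)

def max_safe_fill(num_hosts: int, target_after_failure: float = 0.75) -> float:
    """Fraction of total raw capacity usable so that, after losing one
    host and re-replicating its data, the remaining hosts stay below
    target_after_failure."""
    surviving_fraction = (num_hosts - 1) / num_hosts
    return surviving_fraction * target_after_failure

# Small clusters must stay emptier: losing one of five hosts removes 20%
# of raw capacity, so the safe fill level is much lower than for 20 hosts.
for hosts in (5, 10, 20):
    print(f"{hosts} hosts: plan expansion before {max_safe_fill(hosts):.0%}")
```

With five hosts this works out to 60% raw usage at most, which lines up with the 50-60% figure above; only as the host count grows does the safe ceiling creep toward the post-failure target itself.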
 
> four drives out at once; that is 30% of storage space on a node. The
> whole cluster would have enough space to host the failed placement
> groups but one node would not.


--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
