Re: crushmap errors

On Fri, Nov 11, 2011 at 14:51, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>> 2. Why are 2 racks not enough for 2 failure domains?
>> From the commit:
>> If there are >2 racks, separate across racks.
>
> Well, technically they are.  My worry is that it's more likely that racks
> will have significantly varying capacity (i.e. crush weight) due to, say,
> 1 full rack and a second 1/2 rack.  If the policy forces replicas to be
> placed across racks, things won't balance well.
>
> I suppose there should be an argument like --min-racks that controls that
> threshold?
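
To make the concern above concrete, here is a small numeric sketch in
Python; the rack weights, data size, and replica policy are made-up
placeholders, not taken from any real crush map:

# One full rack and one half rack, each forced to hold one replica of
# every 2x-replicated object: both racks receive the same amount of
# data, so the smaller rack fills roughly twice as fast.
racks = {"rack-a": 20.0, "rack-b": 10.0}   # hypothetical CRUSH weights (TB)
data_tb = 12.0                             # data stored with 2 replicas
for name, weight in racks.items():
    fill = data_tb / weight                # each rack holds one full copy
    print(f"{name}: weight={weight} TB, fill={fill:.0%}")
# rack-a: weight=20.0 TB, fill=60%
# rack-b: weight=10.0 TB, fill=120%  <- the half rack overflows first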

In theory the operator can shoot themselves in the foot if they so
please. It seems like a Ceph management console could warn about
"imbalanced crush weight" across racks. This would also let the
cluster operator check on their balance as hardware gets replaced
over time, which could introduce larger rotational HDDs or smaller
SSDs across the cluster.
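
As a rough illustration of what such a console check might look like,
here is a short Python sketch; the rack names, weights, and warning
threshold are hypothetical, and a real tool would pull the per-rack
weights from the live crush map rather than hard-coding them:

WARN_RATIO = 1.5   # arbitrary: warn if heaviest rack > 1.5x the lightest

rack_weights = {   # hypothetical rack -> total CRUSH weight (TB)
    "rack-a": 20.0,
    "rack-b": 10.0,
    "rack-c": 18.0,
}

heaviest = max(rack_weights, key=rack_weights.get)
lightest = min(rack_weights, key=rack_weights.get)
ratio = rack_weights[heaviest] / rack_weights[lightest]

if ratio > WARN_RATIO:
    print("WARNING: crush weight imbalance across racks: "
          f"{heaviest}={rack_weights[heaviest]} TB vs "
          f"{lightest}={rack_weights[lightest]} TB (ratio {ratio:.2f})")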

Kelly

