Re: defaults paths #2

On Mon, Apr 9, 2012 at 11:16, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> One thing we need to keep in mind here is that the individual disks are
> placed in the CRUSH hierarchy based on the host/rack/etc location in the
> datacenter.  Moving disk around arbitrarily will break the placement
> constraints if that position isn't also changed.

Yeah, the location will have to be updated. I tend to think disks
*will* move, and it's better to cope with that than to assume it won't
happen. All it takes is a simple power supply/mobo/raid
controller/nic/etc failure; if there are free slots anywhere, it's
probably better to plug the disks in there than to wait for a
replacement part. I'm working under the assumption that it's better to
"just bring them up" than to have an extended osd outage or to claim
the osd as lost.

Updating the osd's location could be something we do at every osd
start -- it's a no-op if the location is the same as the old one. And
we can say the host knows where it is, and that information is
available in /etc or /var/lib/ceph.
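
For illustration, a minimal sketch of what that could look like as a
ceph.conf fragment, assuming an option that re-applies the CRUSH
location on every osd start plus a per-host location setting; the
option names and the example hierarchy (dc1/rack3/node12) are
illustrative assumptions, not a committed interface:

```ini
; hypothetical ceph.conf fragment: keep the CRUSH map in sync at osd start
[osd]
; re-apply the location below each time the osd starts;
; a no-op when the osd already sits at that position in the hierarchy
osd crush update on start = true
; the host's own idea of where it lives in the datacenter,
; kept in /etc (or /var/lib/ceph) and readable by the osd
crush location = root=default datacenter=dc1 rack=rack3 host=node12
```

The point is that the authoritative location travels with the host, so
a disk plugged into a different chassis picks up that chassis's
location automatically on the next start.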

I'll come back to this once it's a little more concrete; I'd rather
not make speculative changes until I can actually trigger the
behavior on a test bench.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

