Re: Backporting stability fixes for ceph-disk

Hi Loic,

Thanks for explaining the differences between Hammer's disk
activations and Jewel's. I think I understand the problem better now.

On Mon, Feb 1, 2016 at 10:53 PM, Loic Dachary <ldachary@xxxxxxxxxx> wrote:
> The conservative approach to the problem would be to cherry-pick what
> we can (
> https://github.com/dachary/ceph/commit/9dce05a8cdfc564c5162885bbb67a04ad7b95c5a
> for instance ) and document known side effects of ceph-disk
> instability so people know it's an annoyance but nothing destructive
> or blocking. In the worst case scenario, deactivating the udev rules
> and running ceph-disk prepare + ceph-disk activate manually or by
> writing a script that does things sequentially is a viable workaround.

This approach (documentation) sounds reasonable to me, and it makes
sense that the larger re-architecture of running "ceph-disk activate"
outside udev is something that can only happen at a major release
boundary (in this case Infernalis / Jewel). Once we're happy that the
docs for manual recovery are solid, we can possibly address it
with a script as you suggest.

If we can document the worst case scenario and what to do when
ceph-disk-in-udev fails, that would really improve the user
experience.

What's the procedure for deactivating the Hammer udev rules, for example?
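For the sake of discussion, here is a rough sketch of what I imagine
deactivation might look like. The rule file names and paths below are
my assumptions about the Hammer packaging, not verified, so please
correct me:

```shell
# Sketch only: rule names and paths are assumptions about Hammer packaging.
# An empty file in /etc/udev/rules.d masks the same-named rule shipped
# under /usr/lib/udev/rules.d (standard udev precedence).
for rule in 95-ceph-osd.rules 60-ceph-partuuid-workaround.rules; do
    touch "/etc/udev/rules.d/$rule"
done
udevadm control --reload-rules

# With the udev trigger masked, prepare and activate each disk
# sequentially by hand (device names are examples):
ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1
```

If that is roughly right, it seems like something we could fold into the
documented workaround.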

- Ken
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html