Re: GPT partition tables and cephadm

On 12/8/21 9:39 PM, Sebastian Wagner wrote:
> Hi Guillaume, Tim and Adam,
> 
> following up on
> https://github.com/ceph/ceph/pull/44012#discussion_r758377347 here on
> dev@. How do we properly handle GPT tables on devices that are showing
> up on ceph-volume inventory?
> 
> TL;DR: The current behavior of cephadm is to create OSDs on drives with a
> GPT partition table, which fails because such a drive is already in use by
> the host OS. This clearly doesn't work, so we need to find a better way.
> 
> Tim, you mentioned that, in general, ignoring unavailable devices seems
> to be a bad idea.

If we ignore all unavailable devices, then we have a problem with OSDs
with separate db/wal.  The shared db/wal devices are "unavailable" (once
at least one OSD is deployed), but they still need to be passed in to
ceph-volume when creating new OSDs, because ceph-volume is somehow smart
enough to know to put the db/wal on those devices when deploying
additional OSDs, or replacing existing OSDs.  Ignoring unavailable
devices breaks this behaviour (at least, that's the experience we had
with DeepSea).
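
To make that concrete, this is roughly the shape of invocation I mean (a
sketch only; the device paths are made up, and I'm assuming the
`--db-devices` option of `lvm batch` here):

    # Sketch: create new OSDs while re-using a shared db device that is
    # already "unavailable" because it holds db LVs for existing OSDs.
    import subprocess

    data_devices = ["/dev/sdc", "/dev/sdd"]   # hypothetical new, empty data drives
    db_devices = ["/dev/nvme0n1"]             # hypothetical shared db device, already partly used

    cmd = (["ceph-volume", "lvm", "batch", "--yes"]
           + data_devices
           + ["--db-devices"] + db_devices)
    subprocess.run(cmd, check=True)

If the orchestrator filters out /dev/nvme0n1 just because it's
"unavailable", the new OSDs end up with no separate db/wal at all.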

> My idea was to hide drives with GPT tables from
> ceph-volume inventory, but rethinking this, I now think this won't be a
> good idea either.

I don't think we should hide anything from `ceph-volume inventory`,
because IMO that should tell us everything that's physically present,
whether or not we're able to use it.
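
For reference, something like the following (a rough sketch, assuming the
"path", "available" and "rejected_reasons" fields that
`ceph-volume inventory --format json` emits) shows everything the host
reports, usable or not, and leaves the decision about what to do with it
to the caller:

    # Sketch: dump everything ceph-volume inventory reports, including the
    # devices it considers unusable, together with the rejection reasons.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))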

> 
> There is still the possibility of adding a special case for GPT tables in
> cephadm, but adding a special case in cephadm still feels wrong to me.
> Are there any other alternatives?

FWIW, that's the approach DeepSea takes when filtering through the
disks to (eventually) generate a `ceph-volume lvm batch` invocation to
create OSDs:

https://github.com/SUSE/DeepSea/blob/1034c3803705706e5d5362cdc7b9787a6b88c17d/srv/salt/_modules/dg.py#L787

Maybe an alternative is to have `ceph-volume lvm batch` ignore GPT disks
if they're passed in there...?
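
If we went that way, the check itself is simple enough; something along
these lines (not existing ceph-volume code, just a sketch using lsblk's
PTTYPE column, with a hypothetical candidate list):

    # Sketch: skip whole disks that already carry a GPT label.
    import subprocess

    def has_gpt(device: str) -> bool:
        """Return True if the disk has a GPT partition table."""
        pttype = subprocess.run(
            ["lsblk", "--nodeps", "--noheadings", "-o", "PTTYPE", device],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return pttype == "gpt"

    candidates = [d for d in ["/dev/sda", "/dev/sdb"] if not has_gpt(d)]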

Regards,

Tim
-- 
Tim Serong
Senior Clustering Engineer
SUSE
tserong@xxxxxxxx



