On 12/11/21 1:41 AM, Sebastian Wagner wrote:
>
> On 09.12.21 at 07:22, Tim Serong wrote:
>> On 12/8/21 9:39 PM, Sebastian Wagner wrote:
>>> Hi Guillaume, Tim and Adam,
>>>
>>> following up on
>>> https://github.com/ceph/ceph/pull/44012#discussion_r758377347 here on
>>> dev@. How do we properly handle GPT tables on devices that show up
>>> in ceph-volume inventory?
>>>
>>> TL;DR: The current behavior of cephadm is to create OSDs on drives
>>> with a GPT partition table, which fails when the drive is in use by
>>> the host OS. This clearly doesn't work, so we need to find a better
>>> way.
>>>
>>> Tim, you mentioned that, in general, ignoring unavailable devices
>>> seems to be a bad idea.
>>
>> If we ignore all unavailable devices, then we have a problem with OSDs
>> with separate db/wal. The shared db/wal devices are "unavailable" (once
>> at least one OSD is deployed), but they still need to be passed in to
>> ceph-volume when creating new OSDs, because ceph-volume is somehow
>> smart enough to know to put the db/wal on those devices when deploying
>> additional OSDs, or replacing existing OSDs. Ignoring unavailable
>> devices breaks this behaviour (at least, that's the experience we had
>> with DeepSea).
>
> What do you think of ignoring all unavailable devices, except those
> used for OSDs? My idea is to avoid system partitions, even if they use
> a partition table other than GPT.

Sure, sounds reasonable to me... At least I can't think of a good reason
not to do that (ceph-volume wouldn't/shouldn't want to do anything with
in-use non-ceph devices anyway).

Regards,

Tim

--
Tim Serong
Senior Clustering Engineer
SUSE
tserong@xxxxxxxx
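
For concreteness, here is a minimal sketch of the filtering rule proposed
above: keep available devices, and keep unavailable devices only when
Ceph already uses them (OSD data or shared db/wal), so they can still be
passed to ceph-volume. The Device record and the osd_id LV-tag check are
hypothetical stand-ins, not cephadm's actual inventory classes.

# A sketch of the proposed device filter, assuming a hypothetical
# Device record; real cephadm/ceph-volume classes differ.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Device:
    path: str
    available: bool                      # as reported by inventory
    lvs: List[Dict[str, str]] = field(default_factory=list)

    @property
    def used_by_ceph(self) -> bool:
        # Treat a device as Ceph-owned if any LV on it carries an
        # osd_id tag (covers data devices and shared db/wal devices).
        return any("osd_id" in lv for lv in self.lvs)

def candidate_devices(inventory: List[Device]) -> List[Device]:
    # Keep available devices; keep unavailable ones only if Ceph
    # already uses them, so shared db/wal devices still reach
    # ceph-volume. Everything else (e.g. a GPT-partitioned OS disk)
    # is ignored.
    return [d for d in inventory if d.available or d.used_by_ceph]

# Example: the OS disk is skipped, the shared db/wal device is kept.
inventory = [
    Device("/dev/sda", available=False),                        # OS disk, GPT
    Device("/dev/sdb", available=True),                         # empty disk
    Device("/dev/sdc", available=False, lvs=[{"osd_id": "0"}]), # db/wal
]
assert [d.path for d in candidate_devices(inventory)] == ["/dev/sdb", "/dev/sdc"]

With a rule like this, a system disk is skipped regardless of its
partition table type, while previously-deployed shared db/wal devices
remain eligible for new or replacement OSDs.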