On 26.05.2021 11:16, Eugen Block wrote:
> Yes, the LVs are not removed automatically, you need to free up the
> VG, there are a couple of ways to do so, for example remotely:
> pacific1:~ # ceph orch device zap pacific4 /dev/vdb --force
> or directly on the host with:
> pacific1:~ # cephadm ceph-volume lvm zap --destroy /dev/<CEPH_VG>/<DB_LV>
Thanks,
I used the cephadm command and deleted the LV, and the VG now has free
space:
# vgs | egrep "VG|dbs"
  VG                                                    #PV #LV #SN Attr   VSize  VFree
  ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b     3  14   0 wz--n- <5.24t 357.74g
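As a sanity check, the remaining DB LVs in that VG can also be listed
directly, something like this (just the standard lvs columns, nothing
cluster-specific assumed):

# lvs -o lv_name,lv_size ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b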
But cephadm doesn't seem to be able to use that free space, because it can't find anything:
# ceph orch apply osd -i hdd.yml --dry-run
################
OSDSPEC PREVIEWS
################
+---------+------+-------------+----------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+------+-------------+----------+----+-----+
+---------+------+-------------+----------+----+-----+
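In case it matters, the orchestrator's own view of the devices can be
refreshed and listed as below; the host name is the one from my dry run,
output omitted:

# ceph orch device ls pech-hd-7 --refresh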
I tried adding a size filter as you have in your configuration:
db_devices:
  rotational: 0
  size: '30G:'
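For reference, the spec as a whole follows the usual drive group layout,
roughly like the sketch below; the host_pattern and the data_devices
filter here are placeholders rather than my exact values:

service_type: osd
service_id: hdd
placement:
  host_pattern: 'pech-hd-*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
  size: '30G:'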
Still it was unable to create the OSD.
If I removed the ':' so it is an exact size of 30 GB, it did find the disk,
but the DB is not placed on an SSD since I do not have one with an exact
size of 30 GB:
################
OSDSPEC PREVIEWS
################
+---------+------+-------------+----------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+------+-------------+----------+----+-----+
|osd |hdd |pech-hd-7 |/dev/sdt |- |- |
+---------+------+-------------+----------+----+-----+
To me it looks like cephadm can't find/use the free space on the VG and
create a new LV there for the OSD's DB.
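If it comes to that, I suppose the DB LV could be created by hand and the
OSD prepared against it directly. A rough sketch of that fallback (the LV
name db-new and the 60G size are made up, and I haven't verified how well
this coexists with cephadm-managed OSDs):

# lvcreate -L 60G -n db-new ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b
# cephadm ceph-volume lvm prepare --data /dev/sdt --block.db ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/db-new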
--
Kai Stian Olstad