Hi,
The VG has 357.74GB of free space of a total of 5.24TB, so I did actually
try different values like "30G:", "30G", "300G:", "300G", "357G".
I also tried some crazy high numbers and some ranges, but I don't
remember the values. But none of them worked.
The size parameter filters on the disk size, not on the size you want
the DB to have (that's block_db_size). Your SSD disks are 1.8 TB, so
your spec could look something like this:
block_db_size: 360G
data_devices:
  size: "12T:"
  rotational: 1
db_devices:
  size: ":2T"
  rotational: 0
filter_logic: AND
...
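
For context, a complete OSD service spec built around that fragment could look roughly like this; the service_id and the host entry are just placeholders, so adjust them to your environment:

service_type: osd
service_id: hdd-osd-with-ssd-db   # placeholder name
placement:
  hosts:
    - pech-hd-7
data_devices:
  size: "12T:"        # match disks of 12 TB and larger (your 12.5 TB HDDs)
  rotational: 1
db_devices:
  size: ":2T"         # match disks up to 2 TB (your 1.8 TB SSDs)
  rotational: 0
block_db_size: 360G   # size of each DB LV, not a device filter
filter_logic: AND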
But I was under the impression that all of this should of course work
with just the rotational flags, so I'm confused that it doesn't. Can you
try with this spec to see if you get the OSD deployed? I'll try
again with Octopus to see whether I see similar behaviour.
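
If you save the spec as e.g. osd_spec.yml, a dry run should show what
cephadm would deploy before anything is created (assuming your Octopus
release already has the --dry-run flag):

$ ceph orch apply osd -i osd_spec.yml --dry-run
$ ceph orch apply osd -i osd_spec.yml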
Quoting Kai Stian Olstad <ceph+list@xxxxxxxxxx>:
On 26.05.2021 18:12, Eugen Block wrote:
Could you share the output of
lsblk -o name,rota,size,type
from the affected osd node?
# lsblk -o name,rota,size,type
NAME      ROTA   SIZE TYPE
loop0        1  71.3M loop
loop1        1    55M loop
loop2        1  29.9M loop
sda          1   223G disk
├─sda1       1   512M part
├─sda2       1     1G part
└─sda3       1 221.5G part
sdb          1  12.5T disk
└─ceph--block--1b5ad7e7--2e24--4315--8a05--7439ab782b45-osd--block--2da790bc--a74c--41da--8772--3b8aac77001c  1  12.5T lvm
sdc          1  12.5T disk
└─ceph--block--44ae73e8--726f--4556--978c--8e7d6570c867-osd--block--daeb5218--c10c--45e1--a864--2d60de44e594  1  12.5T lvm
sdd          1  12.5T disk
└─ceph--block--38e361f5--257f--47a5--85dc--16dbdd5fb905-osd--block--a3e1511f--8644--4c1e--a3dd--f365fcb27fc6  1  12.5T lvm
sde          0   1.8T disk
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--f5d822a2--d42a--4a2c--985f--c65977f4d020  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--9ed8d8a4--11bc--4669--bb8f--d66806181a7d  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--0fdff890--f5e4--4762--83f6--8d02ee63c399  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--facaa59d--1f44--4605--bdb8--5a7d58271323  0  357.7G lvm
└─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--44fbc70f--4169--41aa--9c2f--6638b226065a  0  357.7G lvm
sdf          0   1.8T disk
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--7c82d5e2--6ce1--49c6--9691--2e156a8fd9c0  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--135169cd--e0c8--4949--a137--c5e7da12bc52  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--fcb5b6cc--ee95--4dfa--a978--0a768da8bc66  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--44b74f22--d2f1--485a--971b--d89a82849c6e  0  357.7G lvm
└─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--1dd5ee57--e141--40aa--b4a6--5cb20dc51cc0  0  357.7G lvm
sdg          0   1.8T disk
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--274c882c--da8b--4251--a195--309ee9cbc36f  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--087b2305--2bbe--4583--b1ca--dda7416efefc  0  357.7G lvm
├─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--3a97a94b--92bd--41d9--808a--b79aa210fe11  0  357.7G lvm
└─ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--441521b4--52ce--4a03--b5b2--da9392d037bc  0  357.7G lvm
sdi          1  12.5T disk
└─ceph--block--1c122cd0--b28a--4409--b17b--535d95029dda-osd--block--9a6456d9--4c23--4585--a6df--85eda27ae651  1  12.5T lvm
sdj          1  12.5T disk
└─ceph--block--4cbe20eb--6aa3--4e4a--bdda--d556144e2a83-osd--block--2fd1bc6c--0eb3--48aa--b495--db0423b5be28  1  12.5T lvm
sdk          1  12.5T disk
└─ceph--block--3a5c3abd--77f6--4823--b313--1f5900470b8f-osd--block--c350620a--32f5--4da1--bda2--423fe3644f17  1  12.5T lvm
sdl          1  12.5T disk
└─ceph--block--6c7d3bea--a19c--4efc--9a36--dc38a06dad9c-osd--block--f23293d3--4ef8--4127--92c3--7235055797df  1  12.5T lvm
sdm          1  12.5T disk
└─ceph--block--7b48b8d4--cca2--4600--a046--7e1bbc03e1d7-osd--block--bfe62b63--f4fc--4f9d--9bc3--028cba0bacc6  1  12.5T lvm
sdn          1  12.5T disk
└─ceph--block--b99edf97--4162--43e6--ae2e--143250e4c4a8-osd--block--c6cd1240--a171--4ac8--8db4--e282d6fd63c3  1  12.5T lvm
sdo          1  12.5T disk
└─ceph--block--5c324c26--f569--4073--8915--b242a6b56e1c-osd--block--9ab9a615--a87b--40b3--90f1--660fa0da5c93  1  12.5T lvm
sdp          1  12.5T disk
└─ceph--block--2ae62d0b--5f71--446d--b1a9--083842227895-osd--block--6f80035f--59b3--4740--9504--c63be09201f6  1  12.5T lvm
sdq          1  12.5T disk
└─ceph--block--76711aee--28a0--4e19--a4da--74492ab56b77-osd--block--f587fb82--30b9--4133--a000--1b7f649c92aa  1  12.5T lvm
sdr          1  12.5T disk
└─ceph--block--bd6f560d--f196--448b--8bc0--840ea381e798-osd--block--37cc1a13--9f4b--4593--85ef--829b33f4c82f  1  12.5T lvm
sds          1  12.5T disk
└─ceph--block--f6826b43--6594--4b1c--ba0d--654d3075656a-osd--block--16b26a2d--8a7b--4c60--abf0--5412d8da1446  1  12.5T lvm
sdt          1  12.5T disk
My spec file is for a tiny lab cluster; in your case the db drive size
should be something like '5T:6T' to specify a range.
How large are the HDDs?
The VG has 357.74GB of free space of a total of 5.24TB, so I did actually
try different values like "30G:", "30G", "300G:", "300G", "357G".
I also tried some crazy high numbers and some ranges, but I don't
remember the values. But none of them worked.
The HDDs are 15x 14TB and the SSDs are 3x 1.92TB.
As you can see from the lsblk and pvs output, Cephadm created one VG,
ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b, from the 3 SSDs
and created 15 LVs on that VG, one for each HDD.
It only has 14 LVs now since I zapped the LV of the disk that died; the
new disk is /dev/sdt.
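
For reference, the remaining free space on that shared db VG can also be
checked directly with LVM, for example:

# vgs -o vg_name,vg_size,vg_free ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b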
Here is also the output of pvs
# pvs
PV         VG                                                    Fmt   Attr  PSize   PFree
/dev/sdb   ceph-block-1b5ad7e7-2e24-4315-8a05-7439ab782b45       lvm2  a--   12.47t  0
/dev/sdc   ceph-block-44ae73e8-726f-4556-978c-8e7d6570c867       lvm2  a--   12.47t  0
/dev/sdd   ceph-block-38e361f5-257f-47a5-85dc-16dbdd5fb905       lvm2  a--   12.47t  0
/dev/sde   ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b   lvm2  a--   <1.75t  16.00m
/dev/sdf   ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b   lvm2  a--   <1.75t  16.00m
/dev/sdg   ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b   lvm2  a--   <1.75t  357.71g
/dev/sdi   ceph-block-1c122cd0-b28a-4409-b17b-535d95029dda       lvm2  a--   12.47t  0
/dev/sdj   ceph-block-4cbe20eb-6aa3-4e4a-bdda-d556144e2a83       lvm2  a--   12.47t  0
/dev/sdk   ceph-block-3a5c3abd-77f6-4823-b313-1f5900470b8f       lvm2  a--   12.47t  0
/dev/sdl   ceph-block-6c7d3bea-a19c-4efc-9a36-dc38a06dad9c       lvm2  a--   12.47t  0
/dev/sdm   ceph-block-7b48b8d4-cca2-4600-a046-7e1bbc03e1d7       lvm2  a--   12.47t  0
/dev/sdn   ceph-block-b99edf97-4162-43e6-ae2e-143250e4c4a8       lvm2  a--   12.47t  0
/dev/sdo   ceph-block-5c324c26-f569-4073-8915-b242a6b56e1c       lvm2  a--   12.47t  0
/dev/sdp   ceph-block-2ae62d0b-5f71-446d-b1a9-083842227895       lvm2  a--   12.47t  0
/dev/sdq   ceph-block-76711aee-28a0-4e19-a4da-74492ab56b77       lvm2  a--   12.47t  0
/dev/sdr   ceph-block-bd6f560d-f196-448b-8bc0-840ea381e798       lvm2  a--   12.47t  0
/dev/sds   ceph-block-f6826b43-6594-4b1c-ba0d-654d3075656a       lvm2  a--   12.47t  0
Also, maybe you should use the option 'filter_logic: AND', but I'm not
sure if that's already the default; I remember that there were issues
in Nautilus because the default was OR. I tried this just recently with
a similar version, I believe it was 15.2.8, and it worked for me, but
again, it's just a tiny virtual lab cluster.
Yes, AND is the default. I tried adding 'filter_logic: AND', but with
the same result.
In your virtual lab cluster, do you have multiple HDDs sharing the same
SSD as I do?
To me it looks like Cephadm can't find or use the 357.71GB of free
space on the VG; it can only find devices that are available.
Here is what "orch device ls" shows for that host:
$ ceph orch device ls --wide | egrep "Hostname|hd-7"
Hostname   Path      Type  Vendor   Model            Size   Available  Reject Reasons
pech-hd-7  /dev/sdt  hdd   WDC      WUH721414AL5200  13.7T  Yes
pech-hd-7  /dev/sdb  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdc  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdd  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sde  ssd   SAMSUNG  MZILT1T9HAJQ0D3  1920G  No         LVM detected, locked
pech-hd-7  /dev/sdf  ssd   SAMSUNG  MZILT1T9HAJQ0D3  1920G  No         LVM detected, locked
pech-hd-7  /dev/sdg  ssd   SAMSUNG  MZILT1T9HAJQ0D3  1920G  No         LVM detected, locked
pech-hd-7  /dev/sdi  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdj  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdk  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdl  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdm  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdn  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdo  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdp  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdq  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sdr  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
pech-hd-7  /dev/sds  hdd   SEAGATE  ST14000NM0168    13.7T  No         Insufficient space (<10 extents) on vgs, LVM detected, locked
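
As far as I know cephadm bases this list on the ceph-volume inventory,
so the same availability and reject information should also be visible
directly on the host, for example with:

# cephadm ceph-volume -- inventory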
--
Kai Stian Olstad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx