Hi,
> Another question: why does "ceph orch ls osd" report the value x/24
> in the RUNNING column? Why 24?
can you share your 'ceph osd tree' and maybe also 'ceph -s'? I would
assume that you have a few dead or down OSDs, but it's hard to tell.
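In the meantime, two standard commands should already show whether the
counts add up (the 'down' filter limits the tree output to OSDs that
are currently down):

ceph osd stat
ceph osd tree down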
> 1/ see which disks are in each OSD service_id?
You can see that in the output of
cephadm ceph-volume lvm list
For example, I have two different services for OSDs, one "default" and
the other "ebl-ssd":
host1:~ # ceph orch ls osd
NAME         RUNNING  REFRESHED  AGE  PLACEMENT    IMAGE NAME                                   IMAGE ID
osd.default  4/4      3m ago     4h   host2;host1  registry.example.com:5000/ceph/ceph:15.2.14  d0593fa115c1
osd.ebl-ssd  4/8      3m ago     5m   host3        registry.example.com:5000/ceph/ceph:15.2.14  d0593fa115c1
And then on host3 I can check the "osdspec affinity":
---snip---
====== osd.5 =======

  [block]       /dev/ceph-37d08141-5d82-4243-a042-c2e116675945/osd-block-9e239c9d-b424-4708-af23-d63e59ac9443

      block device              /dev/ceph-37d08141-5d82-4243-a042-c2e116675945/osd-block-9e239c9d-b424-4708-af23-d63e59ac9443
      block uuid                rfjQ86-WABc-VWyu-3LHH-kLVD-yCPr-cxAVOI
      cephx lockbox secret
      cluster fsid              3a5b8c92-43ab-11ec-9a77-fa163e672db2
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-46b5a46e-f3b9-4fa9-bdb9-26c00d03be61/osd-db-e4e2daea-7316-4700-aea6-3c7b9ed5c3df
      db uuid                   Mq9WeM-b7BV-Kc8d-B3Wm-OmXI-BceW-LNgukR
      encrypted                 0
      osd fsid                  9e239c9d-b424-4708-af23-d63e59ac9443
      osd id                    5
      osdspec affinity          ebl-ssd
      type                      block
      vdo                       0
      devices                   /dev/vdd

  [db]          /dev/ceph-46b5a46e-f3b9-4fa9-bdb9-26c00d03be61/osd-db-e4e2daea-7316-4700-aea6-3c7b9ed5c3df

      block device              /dev/ceph-37d08141-5d82-4243-a042-c2e116675945/osd-block-9e239c9d-b424-4708-af23-d63e59ac9443
      block uuid                rfjQ86-WABc-VWyu-3LHH-kLVD-yCPr-cxAVOI
      cephx lockbox secret
      cluster fsid              3a5b8c92-43ab-11ec-9a77-fa163e672db2
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-46b5a46e-f3b9-4fa9-bdb9-26c00d03be61/osd-db-e4e2daea-7316-4700-aea6-3c7b9ed5c3df
      db uuid                   Mq9WeM-b7BV-Kc8d-B3Wm-OmXI-BceW-LNgukR
      encrypted                 0
      osd fsid                  9e239c9d-b424-4708-af23-d63e59ac9443
      osd id                    5
      osdspec affinity          ebl-ssd
      type                      db
      vdo                       0
      devices                   /dev/vdb
---snip---
Here's a different OSD from the default service:
---snip---
====== osd.1 =======

  [db]          /dev/ceph-2d784381-3df3-4743-b73f-43a3e4932ab2/osd-db-d05e67a3-dced-49f3-8005-94efeb3e3a54

      block device              /dev/ceph-956cf339-ad4a-4510-b962-fd559f2d8440/osd-block-7eeefd15-535c-470f-90a1-432f10554082
      ...
      osdspec affinity          default
      type                      db
      vdo                       0
      devices                   /dev/vdb

  [block]       /dev/ceph-956cf339-ad4a-4510-b962-fd559f2d8440/osd-block-7eeefd15-535c-470f-90a1-432f10554082

      block device              /dev/ceph-956cf339-ad4a-4510-b962-fd559f2d8440/osd-block-7eeefd15-535c-470f-90a1-432f10554082
      ...
      osdspec affinity          default
      type                      block
      vdo                       0
      devices                   /dev/vdd
---snip---
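If you only need the spec mapping and not all the details, filtering
that same output should do, something along these lines (untested
one-liner; each OSD shows up once per LV, so expect duplicates):

cephadm ceph-volume lvm list | egrep 'osd id|osdspec affinity'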
Regards,
Eugen
Quoting "[AR] Guillaume CephML" <gdelafond+cephml@xxxxxxxxxxx>:
Hello,
I got something strange on a Pacific (16.2.6) cluster.
I have added 8 new empty spinning disks to this running cluster, which
is configured with:
# ceph orch ls osd --export
service_type: osd
service_id: ar_osd_hdd_spec
service_name: osd.ar_osd_hdd_spec
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: ar_osd_ssd_spec
service_name: osd.ar_osd_ssd_spec
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
  filter_logic: AND
  objectstore: bluestore
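For what it's worth, I understand a dry run against an exported spec
should preview which devices each service would claim (osd_specs.yml
below is just the export from above; I have not tried this on this
cluster):

# ceph orch ls osd --export > osd_specs.yml
# ceph orch apply -i osd_specs.yml --dry-run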
Before adding them I had:
# ceph orch ls osd
NAME                 PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd.ar_osd_hdd_spec         16/24    8m ago     4M   *
osd.ar_osd_ssd_spec         8/16     8m ago     4M   *
After adding the disks I have:
# ceph orch ls osd
NAME                 PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd.ar_osd_hdd_spec         16/24    8m ago     4M   *
osd.ar_osd_ssd_spec         16/24    8m ago     4M   *
I do not understand why the disks have been detected as osd.ar_osd_ssd_spec.
The new disks are on /dev/sdf.
# ceph orch device ls --wide
Hostname  Path      Type  Transport  RPM      Vendor  Model             Size   Health  Ident  Fault  Avail  Reject Reasons
host10    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host10    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host10    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host10    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host11    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host11    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host11    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host11    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host12    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host12    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host12    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host12    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host13    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host13    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host13    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host13    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host14    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host14    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host14    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host14    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host15    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host15    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host15    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host15    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host16    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host16    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host16    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host16    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host17    /dev/sdc  ssd   ATA/SATA   Unknown  ATA     Micron_5300_MTFD  960G   Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host17    /dev/sdd  hdd   ATA/SATA   7200     ATA     HGST HUH721010AL  10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host17    /dev/sde  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
host17    /dev/sdf  hdd   ATA/SATA   7200     ATA     WDC WUS721010AL   10.0T  Good    N/A    N/A    No     Insufficient space (<10 extents) on vgs, LVM detected, locked
# for f in /sys/block/sd[cdef]/queue/rotational; do printf "$f is "; cat $f; done
/sys/block/sdc/queue/rotational is 0
/sys/block/sdd/queue/rotational is 1
/sys/block/sde/queue/rotational is 1
/sys/block/sdf/queue/rotational is 1
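To cross-check what the orchestrator itself has recorded for those
devices, I guess something like this should show the rotational flag
cephadm uses for matching (I have not dug through that JSON in detail
yet):

# ceph orch device ls --format json-pretty | grep -i rotational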
Is there a way to:
1/ see which disks are in each OSD service_id?
2/ move a disk from one service_id to another one?
Another question: why does "ceph orch ls osd" report the value x/24
in the RUNNING column? Why 24?
Each of the 8 servers in the cluster has:
# ceph-volume inventory
Device Path Size rotates available Model name
/dev/sda 59.00 GB False False SuperMicro SSD
/dev/sdb 59.00 GB False False SuperMicro SSD
/dev/sdc 894.25 GB False False Micron_5300_MTFD
/dev/sdd 9.10 TB True False HGST HUH721010AL
/dev/sde 9.10 TB True False WDC WUS721010AL
/dev/sdf 9.10 TB True False WDC WUS721010AL
PS: of course this is not a big problem, as the two specs are equivalent,
but I did not understand why it happened.
PS2: on another Ceph 16.2.6 cluster that has the same service_spec,
we did not see the same strange behaviour: the disks were linked to
the right service_spec.
Thank you,
--
Guillaume de Lafond
Aqua Ray
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx