Hi,
is the cluster healthy? Sometimes a degraded state prevents the
orchestrator from doing its work. If it is healthy, I would fail the
mgr (ceph mgr fail); that turns out to be necessary surprisingly
often. Then keep an eye on the active mgr log as well as the
cephadm.log locally on the host where the OSDs are to be created.
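Concretely, the commands I mean (standard Ceph/cephadm CLI; the
cephadm.log lives on the target host itself):

  ceph -s                            # overall cluster health
  ceph mgr fail                      # fail over to a standby mgr
  ceph -W cephadm                    # follow the cephadm log channel live
  ceph log last 100 info cephadm     # recent cephadm log entries
  tail -f /var/log/ceph/cephadm.log  # run locally on the target host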
Regards,
Eugen
Quoting ceph@xxxxxxxxxxxxxxx:
I'm trying to add a new storage host into a Ceph cluster (quincy
17.2.6). The machine has boot drives, one free SSD and 10 HDDs. The
plan is to have each HDD be an OSD with its DB on an equal-sized LVM
volume carved from the SSD. This machine is newer but otherwise
similar to other machines already in the cluster that are set up and
running the same way. But I've been unable to add OSDs and unable to
figure out why, or to fix it. I have some experience, but I'm not an
expert and could be missing something obvious. If anyone has any
suggestions, I would appreciate it.
I've tried to add OSDs a couple different ways.
Via the dashboard: this has worked fine for previous machines, and
it appears to succeed, with no errors that I can find in
/var/log/ceph or the dashboard logs. But the OSDs are never created.
In fact, the drives still show up as available under Physical Disks,
and I can repeat the same creation procedure indefinitely.
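In case it helps, here is how device availability can be
double-checked from the CLI (stor04 being the new host; the
inventory command runs locally on that host):

  ceph orch device ls stor04 --wide --refresh
  cephadm ceph-volume inventory      # run on stor04 itself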
I've also tried creating them in a cephadm shell with the following,
which has worked in the past:
ceph orch daemon add osd
stor04.fqdn:data_devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,db_devices=/dev/sda,osds_per_device=1
The command just hangs. Again, I wasn't able to find any obvious
errors, although this attempt did seem to trigger slow-op warnings
from the monitors that required restarting a monitor, and it could
lock up the dashboard to the point of having to restart the manager
as well.
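When the command hangs like that, a common recovery step (standard
Ceph CLI, nothing host-specific) is to fail the mgr and then see
what the orchestrator reports:

  ceph mgr fail
  ceph orch status
  ceph orch ps stor04
  ceph health detail                 # watch for any CEPHADM_* warnings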
And I've tried setting 'ceph orch apply osd --all-available-devices
--unmanaged=false' to let Ceph add the drives automatically. In the
past, this would cause Ceph to add the drives as OSDs, but without
the associated DBs on the SSD; the SSD would just become another
OSD. This time it appears to have no effect and, as above, I wasn't
able to find any obvious error feedback.
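For what it's worth, the declarative way to get the intended layout
(HDDs as data, DBs on the SSD) is an OSD service spec rather than
--all-available-devices. A minimal sketch, assuming the HDDs report
as rotational and the SSD as non-rotational (the service_id and the
filename are arbitrary):

  service_type: osd
  service_id: stor04_hdd_ssd_db
  placement:
    hosts:
      - stor04
  spec:
    data_devices:
      rotational: 1      # the ten HDDs
    db_devices:
      rotational: 0      # the free SSD

applied with:

  ceph orch apply -i osd-spec.yaml

By default cephadm/ceph-volume carves the DB device into equal-sized
LVs, one per data device, which matches the layout described above.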
-Mike
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx