Hi,
the OSDs log to the journal, so you should be able to capture the
logs during startup with 'journalctl -fu
ceph-<FSID>@osd.<OSD>.service' or check them after the failure with
'journalctl -u ceph-<FSID>@osd.<OSD>.service'.
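For example (osd.3 is a placeholder id; the FSID can be taken from
'ceph fsid'):

  # follow the log live while the OSD tries to start
  journalctl -fu ceph-$(ceph fsid)@osd.3.service

  # or inspect the previous failed start afterwards
  journalctl -u ceph-$(ceph fsid)@osd.3.service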
Quoting 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx:
Hello,
I've bootstrapped a new cephadm cluster but I am unable to create
any working OSDs. I have also been unable to find relevant logs to
figure out what is going wrong.
I've tried adding disks individually ('ceph orch daemon add osd
<host>:<dev>'), via the GUI with a model filter, and via the CLI with
a YAML service spec. In all cases an OSD daemon is created, the disk
is prepared (with LVM and labelled as OSD.x), a systemd service is
created, and the OSD is marked as in, but it never comes up. After
600 seconds the OSD is also marked as out.
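The 600-second delay matches the default mon_osd_down_out_interval;
to confirm the state and that setting:

  # show up/down and in/out state per OSD
  ceph osd tree

  # seconds before a down OSD is marked out (default 600)
  ceph config get mon mon_osd_down_out_interval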
'systemctl status' and 'journalctl -xe' just tell me 'Failed with
result exit-code'.
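To get past that generic systemd message, cephadm can show the
daemon's own output (osd.3 is again a placeholder):

  # unit/container logs for a single daemon, as cephadm sees them
  cephadm logs --name osd.3

  # list daemons on this host with their last known state
  cephadm ls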
I've tried to find relevant logs that explain what is preventing the
disk from coming up. I've enabled logging to file at INFO level, but
there is so much in the logs that I don't know what could be
relevant.
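With file logging enabled, cephadm writes under /var/log/ceph/<FSID>/;
filtering for error markers may help narrow it down (the OSD id is an
example):

  # surface error-level lines from the OSD log
  grep -iE 'error|fail|abort' /var/log/ceph/<FSID>/ceph-osd.3.log | tail -n 50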
When it fails, I have no real problems deleting the daemon and
running 'cephadm ceph-volume lvm zap /dev/sdd --destroy', which
leaves the disk in a clean state (allowing it to be picked up
automatically when using the orchestrator). Currently I've pulled out
all but one disk.
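The cleanup sequence, with osd.3 and /dev/sdd standing in for the
actual ids, is roughly:

  # remove the daemon and purge the OSD from the cluster map
  ceph orch daemon rm osd.3 --force
  ceph osd purge 3 --yes-i-really-mean-it

  # wipe the LVM metadata so the disk is reusable
  cephadm ceph-volume lvm zap /dev/sdd --destroy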
Further information that could be relevant:
1. I'm running the cluster on a Proxmox node.
2. The node boot disks are running ZFS in a RAID1 configuration.
3. The disks are attached through an external SAS enclosure, but the
disks themselves are SATA (as mentioned, creating the LVM works fine,
with or without encryption; the only strange thing is that SMART
values don't seem to be available, see the note below).
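Note on the missing SMART values: for SATA disks behind a SAS
enclosure, smartctl typically needs the SCSI-to-ATA (SAT)
pass-through selected explicitly (the device name is an example):

  # force SAT pass-through for a SATA disk behind a SAS expander
  smartctl -a -d sat /dev/sdd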
Any suggestions as to how to find out what's wrong?
Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx