Re: Issues with new cephadm cluster <solved>


 



Hi,

Thank you for the assistance. Those were indeed the logs I had been looking at, but because I wasn't sure they were the right ones I kept digging through many other logs as well.

The logs themselves were not very clear, though. The first line that indicated a problem was:

   missing 'type' file and unable to infer osd type
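
In case it is useful to anyone else searching for this, the line showed up in the journal of the failing OSD unit; something along these lines should surface it (just a sketch, with <FSID> and <ID> standing in for your own cluster's values):

   journalctl -u ceph-<FSID>@osd.<ID>.service --no-pager | grep -i "unable to infer osd type"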

With that I was finally able to locate this bug report, https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1881747, titled "cephadm does not work with zfs root". And indeed, my Proxmox server had been installed with a ZFS root. Apparently there is an issue with the tmpfs that cephadm uses when the host is running on a ZFS root.
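
If you want to check whether your host is affected, verifying the root filesystem type is enough (a minimal sketch, nothing cephadm-specific; if it prints 'zfs' you are likely hitting the same issue):

   # filesystem type of the root mount
   findmnt -n -o FSTYPE /
   # or, equivalently
   df -Th /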

I wiped the server again and reinstalled it with ext4, which resolved the issue.

There is a (very) recent Ceph PR (https://github.com/ceph/ceph/pull/46043) to resolve the issue, and it also references a Ceph tracker issue (https://tracker.ceph.com/issues/55496).

Many thanks for the assistance that ended up pointing me in the right direction!

-----Original Message-----
From: Eugen Block 'eblock at nde.ag' <7ba335c6-fb20-4041-8c18-1b00efb7824c+eblock=nde.ag@xxxxxxxxxxx> 
Sent: 04 May 2022 09:09
To: 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx
Subject:  Re: Issues with new cephadm cluster

Hi,

The OSDs log to the journal, so you should be able to capture the logs during startup with 'journalctl -fu ceph-<FSID>@osd.<OSD>.service' or check after the failure with 'journalctl -u ceph-<FSID>@osd.<OSD>.service'.
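
To fill in those placeholders, something like the following works (assuming a standard cephadm deployment; the commands are only an illustration):

   # the cluster FSID
   ceph fsid
   # list the cephadm-managed daemons on this host, including their unit names
   cephadm ls
   # then follow the journal of the failing OSD during startup
   journalctl -fu ceph-<FSID>@osd.<OSD>.service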


Quoting 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx:

> Hello,
>
> I've bootstrapped a new cephadm cluster but I am unable to create any 
> working OSDs. I have also been unable to find relevant logs to figure 
> out what is going wrong.
>
> I've tried adding disks individually (ceph orch daemon add <host>
> <dev>), via the GUI by selecting a model filter, and via the CLI with
> a YAML file. In all of these cases an OSD daemon is created, the disk
> is prepared (with LVM and labelled as OSD.x), a systemd service is
> created and the OSD is marked as in, but it never comes up. After 600
> seconds the OSD is also marked as out.
>
> 'systemctl status' and 'journalctl -xe' just tell me "Failed with
> result exit code".
>
> I've tried to find any relevant logs to explain what is preventing
> the disk from coming up. I've enabled logging to file at INFO level,
> but there is so much in the logs that I don't know what could be
> relevant.
>
> When it fails, I don't have any real problems deleting the daemon and 
> running cephadm ceph-volume lvm zap /dev/sdd --destroy, leaving the 
> disk in a clean state (allowing it to automatically be picked up when 
> using the orch). Currently I've pulled out all but one disk.
>
> Further information that could be relevant:
>
>   1.  I'm running the cluster on a Proxmox node
>   2.  The node boot disks are running ZFS in a RAID1 configuration
>   3.  The disks are attached through an external SAS enclosure, but
> the disks themselves are SATA (as mentioned, everything seems to work
> well with creating the LVM, with or without encryption; the only
> strange thing is that SMART values don't seem to be available).
>
> Any suggestions as to how to find out what's wrong?
>
> Thanks!
>
>








_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




