Re: cephadm adopting osd failed

Hi,

I have a similar issue. After migrating to cephadm, the OSD services have to be started manually after every cluster reboot.

Marco

> Am 16.04.2020 um 15:11 schrieb bbk@xxxxxxxxxx:
> 
> As I progressed with the migration, I found out that my problem is more of a rare case.
> 
> On the 3 nodes where I had the problem, I had once moved /var/lib/ceph to another partition and symlinked it back. The kernel, however, mounts the tmpfs at the real path (wherever /var/lib/ceph actually resides). I think that is why the cephadm script couldn't unmount correctly.
> 
> On the 2 other nodes, where I hadn't hacked around, I had no issue.
> 
> But for people having similar problems: after the migration, the new services for the OSDs need to be started manually:
> 
>    systemctl start ceph-$CLUSTERID@osd.$ID
> 
> Yours,
> bbk
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
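
For anyone unsure how that unit name is put together: this is a minimal sketch of building and starting the per-OSD systemd unit, assuming $CLUSTERID is the cluster fsid (as reported by `ceph fsid`). The fsid and OSD id below are placeholder values, not real ones.

```shell
#!/bin/sh
# Placeholder values; substitute the output of `ceph fsid` and your OSD id.
FSID="00000000-0000-0000-0000-000000000000"
OSD_ID=3

# cephadm-managed daemons run as template units named ceph-<fsid>@<daemon>.
UNIT="ceph-${FSID}@osd.${OSD_ID}.service"
echo "$UNIT"

# Start it now, and enable it so it comes up on the next reboot:
# systemctl start "$UNIT"
# systemctl enable "$UNIT"
```

Enabling the unit (not just starting it) is what avoids having to repeat this after every cluster reboot.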



