Re: Unable to add new OSDs

Hi,

I would suggest wiping the disks first with "wipefs -af /dev/your_disk" or
"sgdisk --zap-all /dev/your_disk" and trying again. Try only one disk first.
Is the host visible when you run "ceph orch host ls"? Is the FQDN correct?
If so, does the following command return any errors?
"ceph cephadm check-host *<hostname>*"
I don't think this is the case, but the disks are visible on the new host,
correct? Check with the "lsblk" or "fdisk -l" commands.
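You can also confirm that the orchestrator itself sees the disks as
available; a quick check (the hostname is a placeholder):

    ceph orch device ls stor04.fqdn --refresh

The "--refresh" flag asks cephadm to re-scan the devices instead of
returning cached results.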

Thank you,
Bogdan Velica
croit.io

On Thu, May 2, 2024 at 7:04 AM <ceph@xxxxxxxxxxxxxxx> wrote:

> I'm trying to add a new storage host to a Ceph cluster (Quincy 17.2.6).
> The machine has boot drives, one free SSD and 10 HDDs. The plan is for each
> HDD to be an OSD with its DB on an equal-size LVM volume on the SSD. This
> machine is newer but otherwise similar to other machines already in the
> cluster that are set up and running the same way. But I've been unable to
> add OSDs and unable to figure out why or to fix it. I have some experience,
> but I'm not an expert and could be missing something obvious. If anyone has
> any suggestions, I would appreciate it.
>
> I've tried to add OSDs a couple different ways.
>
> Via the dashboard, which has worked fine for previous machines: it appears
> to succeed and gives no errors that I can find in /var/log/ceph or the
> dashboard logs, but the OSDs are never created. In fact, the drives still
> show up as available under Physical Disks, and I can repeat the same
> creation procedure over and over.
>
> I've tried creating them in the cephadm shell with the following, which has
> also worked in the past:
> ceph orch daemon add osd
> stor04.fqdn:data_devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,db_devices=/dev/sda,osds_per_device=1
> The command just hangs, and again I wasn't able to find any obvious errors.
> This attempt did, however, seem to cause some slow-op warnings from the
> monitors that required restarting a monitor, and it could also lock up the
> dashboard, requiring a restart of the manager as well.
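>
> For reference, a declarative OSD service spec should be roughly equivalent
> to that command. This is only a sketch (the service_id is arbitrary and the
> rotational filters are assumptions about the hardware), applied with
> "ceph orch apply -i osd_spec.yml":
>
> service_type: osd
> service_id: stor04_hdd_osds
> placement:
>   hosts:
>     - stor04.fqdn
> spec:
>   data_devices:
>     rotational: 1
>   db_devices:
>     rotational: 0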
>
> And I've tried setting 'ceph orch apply osd --all-available-devices
> --unmanaged=false' to let Ceph add the drives automatically. In the past
> this would cause Ceph to add the drives as OSDs, but without associated DBs
> on the SSD; the SSD would just become another OSD. This time it appears to
> have no effect, and, as above, I wasn't able to find any obvious error
> feedback.
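>
> (To confirm the orchestrator actually recorded that service, I believe the
> managed OSD specs can be dumped with "ceph orch ls osd --export"; the
> all-available-devices spec should show up there.)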
>
> -Mike
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



