Re: Can't add OSD id in manual deploy

Usually it should also accept a plain device path (although I haven't tried that in Octopus yet). You could run `ceph-volume lvm prepare --data /path/to/device` first and then activate it. If that doesn't work, create a vg and lv yourself and use the LVM syntax (ceph-volume lvm prepare --data {vg}/{lv}). I don't have a cluster at hand right now, so I can't double-check, but I find it strange that it doesn't accept the device path; maybe someone with more Octopus experience can chime in.
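Roughly along these lines (untested from here, the vg/lv names are only placeholders, adjust the device to your node):

  # first attempt: plain device path
  ceph-volume lvm prepare --data /dev/sdc
  ceph-volume lvm activate --all

  # fallback: create the LV yourself and pass it in {vg}/{lv} form
  pvcreate /dev/sdc
  vgcreate ceph-block-0 /dev/sdc
  lvcreate -l 100%FREE -n block-0 ceph-block-0
  ceph-volume lvm prepare --data ceph-block-0/block-0
  ceph-volume lvm activate --all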


Quoting Joshua Schaeffer <jschaeffer@xxxxxxxxxxxxxxx>:

On 8/14/20 10:57 AM, Eugen Block wrote:
I didn't notice that. Have you tried this multiple times with the same disk? Do you see any other error messages in syslog?
Thanks Eugen for your fast response. Yes, I have tried it multiple times, but I'm trying again right now just to be sure the outcome is the same.

- ceph.log and ceph-mgr.node1.log don't have much.
- syslog itself doesn't show anything Ceph related.
- ceph-volume logs show the following:

[2020-08-14 17:15:02,479][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-08-14 17:15:02,661][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdc
[2020-08-14 17:15:02,674][ceph_volume.process][INFO  ] stdout NAME="sdc" KNAME="sdc" MAJ:MIN="8:32" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="LOGICAL_VOLUME" SIZE="68.3G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-08-14 17:15:02,675][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/sdc
[2020-08-14 17:15:02,696][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdc
[2020-08-14 17:15:02,824][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sdc".
[2020-08-14 17:15:02,826][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdc
[2020-08-14 17:15:02,905][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdc: (2) No such file or directory
[2020-08-14 17:15:02,907][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdc
[2020-08-14 17:15:02,985][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdc: (2) No such file or directory
[2020-08-14 17:15:02,987][ceph_volume.process][INFO  ] Running command: /usr/bin/udevadm info --query=property /dev/sdc
[2020-08-14 17:15:02,997][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:40/0000:40:11.0/0000:48:00.0/host4/target4:1:0/4:1:0:2/block/sdc
[2020-08-14 17:15:02,998][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdc
[2020-08-14 17:15:02,998][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-08-14 17:15:02,998][ceph_volume.process][INFO  ] stdout MAJOR=8
[2020-08-14 17:15:02,998][ceph_volume.process][INFO  ] stdout MINOR=32
[2020-08-14 17:15:02,998][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-08-14 17:15:02,999][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=2653962
[2020-08-14 17:15:02,999][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2020-08-14 17:15:02,999][ceph_volume.process][INFO  ] stdout ID_VENDOR=HP
[2020-08-14 17:15:02,999][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=HP\x20\x20\x20\x20\x20\x20
[2020-08-14 17:15:02,999][ceph_volume.process][INFO  ] stdout ID_MODEL=LOGICAL_VOLUME
[2020-08-14 17:15:03,000][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=LOGICAL\x20VOLUME\x20\x20
[2020-08-14 17:15:03,000][ceph_volume.process][INFO  ] stdout ID_REVISION=3.00
[2020-08-14 17:15:03,000][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2020-08-14 17:15:03,000][ceph_volume.process][INFO  ] stdout ID_SERIAL=3600508b1001039565a35315242571100
[2020-08-14 17:15:03,000][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=600508b1001039565a35315242571100
[2020-08-14 17:15:03,001][ceph_volume.process][INFO  ] stdout ID_WWN=0x600508b100103956
[2020-08-14 17:15:03,001][ceph_volume.process][INFO  ] stdout ID_WWN_VENDOR_EXTENSION=0x5a35315242571100
[2020-08-14 17:15:03,001][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x600508b1001039565a35315242571100
[2020-08-14 17:15:03,001][ceph_volume.process][INFO  ] stdout ID_SCSI_SERIAL=PACCR0M9VZ51RBW
[2020-08-14 17:15:03,001][ceph_volume.process][INFO  ] stdout ID_BUS=scsi
[2020-08-14 17:15:03,002][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:48:00.0-scsi-0:1:0:2
[2020-08-14 17:15:03,002][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_48_00_0-scsi-0_1_0_2
[2020-08-14 17:15:03,002][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:48:00.0-scsi-0:1:0:2 /dev/disk/by-id/wwn-0x600508b1001039565a35315242571100 /dev/disk/by-id/scsi-3600508b1001039565a35315242571100
[2020-08-14 17:15:03,002][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-08-14 17:15:03,004][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2020-08-14 17:15:03,005][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2020-08-14 17:15:03,059][ceph_volume.process][INFO  ] stdout AQCXxjZf5glOAxAAWBFST6v49TlLUGlENabhUw==
[2020-08-14 17:15:03,061][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 65550aaa-e137-42ea-b16f-dcec5fb15406

It looks like it is probing the disk for LVM information. Do I need to manually prepare this disk with LVM before I can use it? The disk should currently be clean: any prior filesystems have been wiped completely, and there is no RAID, GRUB, or boot partition on it either.
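(To illustrate what I mean by clean, these are the kinds of checks I would expect to come back empty on this disk; a hypothetical example, not output captured from the node:

  lsblk /dev/sdc      # no partitions or child devices expected
  wipefs /dev/sdc     # should list no filesystem/RAID/LVM signatures
  pvs /dev/sdc        # should fail, matching the "Failed to find physical volume" line in the log above
)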

--
Thanks,
Joshua Schaeffer

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





