Re: "no valid command found" when running "ceph-deploy osd create"

On Sun, Sep 2, 2018 at 1:31 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>
> On Sun, Sep 2, 2018 at 12:00 PM, David Wahler <dwahler@xxxxxxxxx> wrote:
> > Ah, ceph-volume.log pointed out the actual problem:
> >
> > RuntimeError: Cannot use device (/dev/storage/bluestore). A vg/lv path
> > or an existing device is needed
>
> That is odd; is it possible that the error log wasn't the one that
> matched what you saw on ceph-deploy's end?
>
> Usually ceph-deploy will just receive whatever ceph-volume produced.

I tried again, running ceph-volume directly this time, to make sure I
hadn't mixed anything up. It turns out that ceph-deploy is faithfully
reporting ceph-volume's output; the problem is that ceph-volume writes
the relevant error message only to its log file, not to stdout/stderr.
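
As far as I can tell from the traceback further down, safe_prepare()
catches the exception and hands it to the module logger, which seems
to have only a file handler attached, so the terminal never sees it.
Here is a rough, self-contained sketch of that pattern (the function
bodies and logger setup are my guesses for illustration, not
ceph-volume's actual code):

    import logging

    # Only a file handler is attached, so anything logged here lands
    # in the log file and never reaches the terminal.
    logging.basicConfig(filename='ceph-volume.log', level=logging.INFO)
    logger = logging.getLogger(__name__)

    def prepare(data):
        # Stand-in for the real validation of the --data argument.
        raise RuntimeError(
            'Cannot use device (%s). A vg/lv path or an existing '
            'device is needed' % data)

    def safe_prepare(data):
        try:
            prepare(data)
        except RuntimeError:
            # The traceback goes to the log file only...
            logger.exception('lvm prepare was unable to complete')
            # ...while the terminal only sees the rollback notice.
            print('--> Was unable to complete a new OSD, '
                  'will rollback changes')

    safe_prepare('/dev/storage/foobar')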

Console output:

rock64@rockpro64-1:~/my-cluster$ sudo ceph-volume --cluster ceph lvm
create --bluestore --data /dev/storage/foobar
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
e7dd6d45-b556-461c-bad1-83d98a5a1afa
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1
--yes-i-really-mean-it
 stderr: no valid command found; 10 closest matches:
[...etc...]

ceph-volume.log:

[2018-09-02 18:49:21,415][ceph_volume.main][INFO  ] Running command:
ceph-volume --cluster ceph lvm create --bluestore --data
/dev/storage/foobar
[2018-09-02 18:49:21,423][ceph_volume.process][INFO  ] Running
command: /usr/bin/ceph-authtool --gen-print-key
[2018-09-02 18:49:26,664][ceph_volume.process][INFO  ] stdout
AQCxMIxb+SezJRAAGAP/HHtHLVbciSQnZ/c/qw==
[2018-09-02 18:49:26,668][ceph_volume.process][INFO  ] Running
command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
--keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
e7dd6d45-b556-461c-bad1-83d98a5a1afa
[2018-09-02 18:49:27,685][ceph_volume.process][INFO  ] stdout 1
[2018-09-02 18:49:27,686][ceph_volume.process][INFO  ] Running
command: /bin/lsblk --nodeps -P -o
NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
/dev/storage/foobar
[2018-09-02 18:49:27,707][ceph_volume.process][INFO  ] stdout
NAME="storage-foobar" KNAME="dm-1" MAJ:MIN="253:1" FSTYPE=""
MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="100G"
STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----"
ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED=""
TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"
PKNAME="" PARTLABEL=""
[2018-09-02 18:49:27,708][ceph_volume.process][INFO  ] Running
command: /bin/lsblk --nodeps -P -o
NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
/dev/storage/foobar
[2018-09-02 18:49:27,720][ceph_volume.process][INFO  ] stdout
NAME="storage-foobar" KNAME="dm-1" MAJ:MIN="253:1" FSTYPE=""
MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="100G"
STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----"
ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED=""
TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"
PKNAME="" PARTLABEL=""
[2018-09-02 18:49:27,720][ceph_volume.devices.lvm.prepare][ERROR ] lvm
prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 216, in safe_prepare
    self.prepare(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 283, in prepare
    block_lv = self.prepare_device(args.data, 'block', cluster_fsid, osd_fsid)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 206, in prepare_device
    raise RuntimeError(' '.join(error))
RuntimeError: Cannot use device (/dev/storage/foobar). A vg/lv path or an existing device is needed
[2018-09-02 18:49:27,722][ceph_volume.devices.lvm.prepare][INFO  ]
will rollback OSD ID creation
[2018-09-02 18:49:27,723][ceph_volume.process][INFO  ] Running
command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
--keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1
--yes-i-really-mean-it
[2018-09-02 18:49:28,425][ceph_volume.process][INFO  ] stderr no valid
command found; 10 closest matches:
[...etc...]
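
To spell out the fix for anyone skimming: the same LV referenced two
ways. The first form is the one rejected above; the second is the
vg/lv notation that worked for me (I originally hit this with
storage/bluestore, as quoted below; storage/foobar is just this
test's name):

    # rejected: full device-mapper path to the LV
    sudo ceph-volume --cluster ceph lvm create --bluestore --data /dev/storage/foobar
    # accepted: vg/lv notation
    sudo ceph-volume --cluster ceph lvm create --bluestore --data storage/foobar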

-- David

> >
> > When I changed "--data /dev/storage/bluestore" to "--data
> > storage/bluestore", everything worked fine.
> >
> > I agree that the ceph-deploy logs are a bit confusing. I submitted a
> > PR to add a brief note to the quick-start guide, in case anyone else
> > makes the same mistake: https://github.com/ceph/ceph/pull/23879
> >
> Thanks for the PR!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com