Re: "no valid command found" when running "ceph-deploy osd create"

On Sun, Sep 2, 2018 at 12:00 PM, David Wahler <dwahler@xxxxxxxxx> wrote:
> Ah, ceph-volume.log pointed out the actual problem:
>
> RuntimeError: Cannot use device (/dev/storage/bluestore). A vg/lv path
> or an existing device is needed

That is odd. Is it possible that the error in the log wasn't the one
that matched what you saw on ceph-deploy's end?

Usually ceph-deploy will just receive whatever ceph-volume produced.
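
For the archives: a quick way to pull the ceph-volume side out for
comparison is something along these lines (same log path as mentioned
further down in this thread):

    # tail end of the most recent ceph-volume run
    sudo tail -n 100 /var/log/ceph/ceph-volume.log
    # or just the error lines
    sudo grep -i error /var/log/ceph/ceph-volume.log | tail -n 20
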
>
> When I changed "--data /dev/storage/bluestore" to "--data
> storage/bluestore", everything worked fine.
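
For anyone who finds this thread later, the invocation that worked here
is the bare vg/lv form ("storage" volume group, "bluestore" logical
volume, as above) rather than the /dev path:

    ceph-deploy osd create --data storage/bluestore rockpro64-1

and a quick way to confirm the OSD actually came up afterwards:

    sudo ceph -s
    sudo ceph osd tree
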
>
> I agree that the ceph-deploy logs are a bit confusing. I submitted a
> PR to add a brief note to the quick-start guide, in case anyone else
> makes the same mistake: https://github.com/ceph/ceph/pull/23879
>
Thanks for the PR!

> Thanks for the assistance!
>
> -- David
>
> On Sun, Sep 2, 2018 at 7:44 AM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>>
>> There should be useful logs from ceph-volume in
>> /var/log/ceph/ceph-volume.log that might show a bit more here.
>>
>> I would also try running the failing command directly on the server
>> (sans ceph-deploy) to see what it is that is actually failing. It seems
>> like the ceph-deploy log output is a bit out of order (maybe some race
>> condition here).
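>>
>> Concretely, that would be something along the lines of (re-using the
>> invocation from your log, run as root on the OSD host):
>>
>>     sudo /usr/sbin/ceph-volume --cluster ceph lvm create \
>>         --bluestore --data /dev/storage/bluestore
>>
>> and then checking /var/log/ceph/ceph-volume.log for the full traceback.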
>>
>>
>> On Sun, Sep 2, 2018 at 2:53 AM, David Wahler <dwahler@xxxxxxxxx> wrote:
>> > Hi all,
>> >
>> > I'm attempting to get a small Mimic cluster running on ARM, starting
>> > with a single node. Since there don't seem to be any Debian ARM64
>> > packages in the official Ceph repository, I had to build from source,
>> > which was fairly straightforward.
>> >
>> > After installing the .deb packages that I built and following the
>> > quick-start guide
>> > (http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/), things
>> > seemed to be working fine at first, but I got this error when
>> > attempting to create an OSD:
>> >
>> > rock64@rockpro64-1:~/my-cluster$ ceph-deploy osd create --data
>> > /dev/storage/bluestore rockpro64-1
>> > [ceph_deploy.conf][DEBUG ] found configuration file at:
>> > /home/rock64/.cephdeploy.conf
>> > [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd
>> > create --data /dev/storage/bluestore rockpro64-1
>> > [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> > [ceph_deploy.cli][INFO  ]  verbose                       : False
>> > [ceph_deploy.cli][INFO  ]  bluestore                     : None
>> > [ceph_deploy.cli][INFO  ]  cd_conf                       :
>> > <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa9c0f9e0>
>> > [ceph_deploy.cli][INFO  ]  cluster                       : ceph
>> > [ceph_deploy.cli][INFO  ]  fs_type                       : xfs
>> > [ceph_deploy.cli][INFO  ]  block_wal                     : None
>> > [ceph_deploy.cli][INFO  ]  default_release               : False
>> > [ceph_deploy.cli][INFO  ]  username                      : None
>> > [ceph_deploy.cli][INFO  ]  journal                       : None
>> > [ceph_deploy.cli][INFO  ]  subcommand                    : create
>> > [ceph_deploy.cli][INFO  ]  host                          : rockpro64-1
>> > [ceph_deploy.cli][INFO  ]  filestore                     : None
>> > [ceph_deploy.cli][INFO  ]  func                          : <function
>> > osd at 0x7fa9ca0c80>
>> > [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
>> > [ceph_deploy.cli][INFO  ]  zap_disk                      : False
>> > [ceph_deploy.cli][INFO  ]  data                          :
>> > /dev/storage/bluestore
>> > [ceph_deploy.cli][INFO  ]  block_db                      : None
>> > [ceph_deploy.cli][INFO  ]  dmcrypt                       : False
>> > [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
>> > [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               :
>> > /etc/ceph/dmcrypt-keys
>> > [ceph_deploy.cli][INFO  ]  quiet                         : False
>> > [ceph_deploy.cli][INFO  ]  debug                         : False
>> > [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data
>> > device /dev/storage/bluestore
>> > [rockpro64-1][DEBUG ] connection detected need for sudo
>> > [rockpro64-1][DEBUG ] connected to host: rockpro64-1
>> > [rockpro64-1][DEBUG ] detect platform information from remote host
>> > [rockpro64-1][DEBUG ] detect machine type
>> > [rockpro64-1][DEBUG ] find the location of an executable
>> > [ceph_deploy.osd][INFO  ] Distro info: debian buster/sid sid
>> > [ceph_deploy.osd][DEBUG ] Deploying osd to rockpro64-1
>> > [rockpro64-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> > [rockpro64-1][WARNIN] osd keyring does not exist yet, creating one
>> > [rockpro64-1][DEBUG ] create a keyring file
>> > [rockpro64-1][DEBUG ] find the location of an executable
>> > [rockpro64-1][INFO  ] Running command: sudo /usr/sbin/ceph-volume
>> > --cluster ceph lvm create --bluestore --data /dev/storage/bluestore
>> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph-authtool --gen-print-key
>> > [rockpro64-1][WARNIN] -->  RuntimeError: command returned non-zero
>> > exit status: 22
>> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph
>> > --name client.bootstrap-osd --keyring
>> > /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
>> > 4903fff3-550c-4ce3-aa7d-97193627c6c0
>> > [rockpro64-1][DEBUG ] --> Was unable to complete a new OSD, will
>> > rollback changes
>> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph
>> > --name client.bootstrap-osd --keyring
>> > /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0
>> > --yes-i-really-mean-it
>> > [rockpro64-1][DEBUG ]  stderr: no valid command found; 10 closest matches:
>> > [rockpro64-1][DEBUG ] osd tier add-cache <poolname> <poolname> <int[0-]>
>> > [rockpro64-1][DEBUG ] osd tier remove-overlay <poolname>
>> > [rockpro64-1][DEBUG ] osd out <ids> [<ids>...]
>> > [rockpro64-1][DEBUG ] osd in <ids> [<ids>...]
>> > [rockpro64-1][DEBUG ] osd down <ids> [<ids>...]
>> > [rockpro64-1][DEBUG ]  stderr: osd unset
>> > full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim
>> > [rockpro64-1][DEBUG ] osd require-osd-release luminous|mimic
>> > {--yes-i-really-mean-it}
>> > [rockpro64-1][DEBUG ] osd erasure-code-profile ls
>> > [rockpro64-1][DEBUG ] osd set
>> > full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds
>> > {--yes-i-really-mean-it}
>> > [rockpro64-1][DEBUG ] osd erasure-code-profile get <name>
>> > [rockpro64-1][DEBUG ] Error EINVAL: invalid command
>> > [rockpro64-1][ERROR ] RuntimeError: command returned non-zero exit status: 1
>> > [ceph_deploy.osd][ERROR ] Failed to execute command:
>> > /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data
>> > /dev/storage/bluestore
>> > [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>> >
>> > I'm not very familiar with Ceph yet; does anyone have any
>> > troubleshooting suggestions? I found a previous issue
>> > (https://tracker.ceph.com/issues/23918) suggesting this error could be
>> > caused by mismatched package versions, but as far as I can tell,
>> > everything on my system is consistent:
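>> >
>> > One more consistency check that might be worth running, assuming the
>> > mon and mgr daemons are already up:
>> >
>> >     sudo ceph versions
>> >
>> > which reports the build that each running daemon is actually using.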
>> >
>> > rock64@rockpro64-1:~/my-cluster$ sudo ceph version
>> > ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable)
>> > rock64@rockpro64-1:~/my-cluster$ dpkg -l | grep -i ceph
>> > ii  ceph           13.2.1-1   arm64   distributed storage and file system
>> > ii  ceph-base      13.2.1-1   arm64   common ceph daemon libraries and management tools
>> > ii  ceph-common    13.2.1-1   arm64   common utilities to mount and interact with a ceph storage cluster
>> > ii  ceph-deploy    2.0.1      all     Ceph-deploy is an easy to use configuration tool
>> > ii  ceph-mds       13.2.1-1   arm64   metadata server for the ceph distributed file system
>> > ii  ceph-mgr       13.2.1-1   arm64   manager for the ceph distributed storage system
>> > ii  ceph-mon       13.2.1-1   arm64   monitor server for the ceph storage system
>> > ii  ceph-osd       13.2.1-1   arm64   OSD server for the ceph storage system
>> > ii  libcephfs2     13.2.1-1   arm64   Ceph distributed file system client library
>> > ii  python-cephfs  13.2.1-1   arm64   Python 2 libraries for the Ceph libcephfs library
>> > ii  python-rados   13.2.1-1   arm64   Python 2 libraries for the Ceph librados library
>> > ii  python-rbd     13.2.1-1   arm64   Python 2 libraries for the Ceph librbd library
>> > ii  python-rgw     13.2.1-1   arm64   Python 2 libraries for the Ceph librgw library
>> >
>> > Thanks,
>> > -- David
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


