Re: Problem with ceph-volume


 



OK, forget this; I followed a different approach:

ceph orch apply -i osd_spec.yaml

with this configuration:

osd_spec.yaml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  paths:
    - /dev/nvme0n1
    - /dev/nvme1n1
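For anyone following along, the spec can be previewed before it takes effect. A minimal sketch, assuming a Quincy-era cephadm shell (`ceph orch apply` accepts `--dry-run`, and the filename matches the spec above):

```shell
# Preview which OSDs the spec would create, without touching any disks
ceph orch apply -i osd_spec.yaml --dry-run

# Apply the spec for real, then verify the resulting layout
ceph orch apply -i osd_spec.yaml
ceph orch device ls    # confirm the data and db devices were claimed
ceph osd tree          # confirm the new OSDs came up
```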

----- Original Message -----
> From: "Christophe BAILLON" <cb@xxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxx>
> Sent: Tuesday, 31 May 2022 18:15:15
> Subject: Problem with ceph-volume

> Hello
> 
> On a new cluster installed with cephadm, I prepared new OSDs with separate
> WAL and DB devices.
> To do this I followed this doc:
> https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/
> 
> I am running Ceph version 17.2.0.
> 
> When I run the ceph-volume create command, I get this error:
> 
> root@store-par2-node01:/# ceph-volume lvm create --bluestore --data
> ceph-block-0/block-0 --block.db ceph-db-0/db-0
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> f0be099a-15b1-4eac-b98c-98a4d23d545a
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-block-0/block-0
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-12
> Running command: /usr/bin/ln -s /dev/ceph-block-0/block-0
> /var/lib/ceph/osd/ceph-0/block
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
> /var/lib/ceph/osd/ceph-0/activate.monmap
> stderr: got monmap epoch 5
> Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
> --create-keyring --name osd.0 --add-key
> AQAuL5Zitj77LRAAgrGtN6vreX33tjDuJiQl9g==
> stdout: creating /var/lib/ceph/osd/ceph-0/keyring
> added entity osd.0 auth(key=AQAuL5Zitj77LRAAgrGtN6vreX33tjDuJiQl9g==)
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
> Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-db-0/db-0
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
> Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore
> --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile -
> --bluestore-block-db-path /dev/ceph-db-0/db-0 --osd-data
> /var/lib/ceph/osd/ceph-0/ --osd-uuid f0be099a-15b1-4eac-b98c-98a4d23d545a
> --setuser ceph --setgroup ceph
> 
> stderr: 2022-05-31T15:07:28.577+0000 7fbd681d53c0 -1
> bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
> --> ceph-volume lvm prepare successful for: ceph-block-0/block-0
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev
> /dev/ceph-block-0/block-0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
> Running command: /usr/bin/ln -snf /dev/ceph-block-0/block-0
> /var/lib/ceph/osd/ceph-0/block
> Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-12
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> Running command: /usr/bin/ln -snf /dev/ceph-db-0/db-0
> /var/lib/ceph/osd/ceph-0/block.db
> Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-db-0/db-0
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
> Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block.db
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
> Running command: /usr/bin/systemctl enable
> ceph-volume@lvm-0-f0be099a-15b1-4eac-b98c-98a4d23d545a
> stderr: Created symlink
> /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-f0be099a-15b1-4eac-b98c-98a4d23d545a.service
> -> /usr/lib/systemd/system/ceph-volume@.service.
> Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
> stderr: Created symlink
> /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service ->
> /usr/lib/systemd/system/ceph-osd@.service.
> Running command: /usr/bin/systemctl start ceph-osd@0
> stderr: Failed to connect to bus: No such file or directory
> --> Was unable to complete a new OSD, will rollback changes
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0
> --yes-i-really-mean-it
> stderr: purged osd.0
> --> Zapping: /dev/ceph-block-0/block-0
> --> Unmounting /var/lib/ceph/osd/ceph-0
> Running command: /usr/bin/umount -v /var/lib/ceph/osd/ceph-0
> stderr: umount: /var/lib/ceph/osd/ceph-0 unmounted
> Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-block-0/block-0 bs=1M
> count=10 conv=fsync
> stderr: 10+0 records in
> 10+0 records out
> stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0731955 s, 143 MB/s
> --> Only 1 LV left in VG, will proceed to destroy volume group ceph-block-0
> Running command: nsenter --mount=/rootfs/proc/1/ns/mnt
> --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net
> --uts=/rootfs/proc/1/ns/uts /sbin/vgremove -v -f ceph-block-0
> stderr: Removing ceph--block--0-block--0 (253:12)
> stderr: Archiving volume group "ceph-block-0" metadata (seqno 3).
> stderr: Releasing logical volume "block-0"
> stderr: Creating volume group backup "/etc/lvm/backup/ceph-block-0" (seqno 4).
> stdout: Logical volume "block-0" successfully removed
> stderr: Removing physical volume "/dev/sdc" from volume group "ceph-block-0"
> stdout: Volume group "ceph-block-0" successfully removed
> --> Zapping: /dev/ceph-db-0/db-0
> Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-db-0/db-0 bs=1M count=10
> conv=fsync
> stderr: 10+0 records in
> 10+0 records out
> stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0208358 s, 503 MB/s
> --> More than 1 LV left in VG, will proceed to destroy LV only
> --> Removing LV because --destroy was given: /dev/ceph-db-0/db-0
> Running command: nsenter --mount=/rootfs/proc/1/ns/mnt
> --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net
> --uts=/rootfs/proc/1/ns/uts /sbin/lvremove -v -f /dev/ceph-db-0/db-0
> stdout: Logical volume "db-0" successfully removed
> stderr: Removing ceph--db--0-db--0 (253:0)
> stderr: Archiving volume group "ceph-db-0" metadata (seqno 8).
> stderr: Releasing logical volume "db-0"
> stderr: Creating volume group backup "/etc/lvm/backup/ceph-db-0" (seqno 9).
> --> Zapping successful for OSD: 0
> -->  RuntimeError: command returned non-zero exit status: 1
> 
> If somebody can give me some insight, it would be much appreciated.
> 
> Regards
> 
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

-- 
Christophe BAILLON
Mobile :: +336 16 400 522
Work :: https://eyona.com
Twitter :: https://twitter.com/ctof





