data on sda with metadata on lvm partition?

Hello all. After running this dev cluster with a single OSD (/dev/sda,
an HDD) in each of its six nodes, I now want to put the BlueStore
metadata (the DB) on the NVMe disk that is also used as the boot drive.
There is plenty of space left on the NVMe, so I re-did the logical
volumes to carve out a 50 GB LV for the metadata, the idea being to put
the DB on the NVMe LV and use the entire /dev/sda as data.
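Roughly, that re-done layout amounts to the commands below (a sketch
from memory rather than the exact commands I ran; the VG name cephDB
and LV name database are the ones that show up in the logs further
down):

  lvcreate -y -L 50G -n database cephDB   # ~50 GB LV on the NVMe's VG for the BlueStore DB
  lvs -o lv_name,lv_size cephDB           # sanity-check the new LV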
Before I really go down this rabbit hole, I just want opinions on
whether this is something that should work. I've tried both Ceph 15.2.7
and Ceph 15.2.8, each with different errors; this particular trace is
from Ceph 15.2.8. This is under Rook, so Rook is doing:

<snip>
exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sda --db-devices /dev/cephDB/database --report
provision 2021-01-28 01:46:56.186043 D | exec: --> passed data devices: 1 physical, 0 LVM
provision 2021-01-28 01:46:56.186074 D | exec: --> relative data size: 1.0
provision 2021-01-28 01:46:56.186079 D | exec: --> passed block_db devices: 0 physical, 1 LVM
provision 2021-01-28 01:46:56.186092 D | exec: 
provision 2021-01-28 01:46:56.186104 D | exec: Total OSDs: 1
provision 2021-01-28 01:46:56.186107 D | exec: 
provision 2021-01-28 01:46:56.186111 D | exec:   Type            Path                                                    LV Size         % of device
provision 2021-01-28 01:46:56.186114 D | exec: ----------------------------------------------------------------------------------------------------
provision 2021-01-28 01:46:56.186117 D | exec:   data            /dev/sda                                                3.64 TB         100.00%
provision 2021-01-28 01:46:56.186121 D | exec:   block_db        /dev/cephDB/database                                    51.65 GB        10000.00%
<snip>
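If I read the ceph-volume docs right, that batch plan should amount to
roughly the following non-batch call (just for reference; an existing
LV is supposed to be passable as vg/lv, and I have not tried this exact
command by hand yet):

  ceph-volume lvm prepare --bluestore --data /dev/sda --block.db cephDB/database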

The batch run fails with the stack trace below, essentially complaining
that it can't detect a PARTUUID for the LV.

exec: Running command: /usr/sbin/lvcreate --yes -l 953861 -n osd-block-2acd94f6-0fed-423b-8540-ae93c0621c2e ceph-b6da5679-543b-4a79-9cc2-e4e308ba61a4
exec:  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
exec:  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
exec:  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, device-mapper library will manage device nodes in device directory.
exec:  stdout: Logical volume "osd-block-2acd94f6-0fed-423b-8540-ae93c0621c2e" created.
exec: --> blkid could not detect a PARTUUID for device: /dev/cephDB/database
exec: --> Was unable to complete a new OSD, will rollback changes
exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
exec:  stderr: purged osd.0
exec: Traceback (most recent call last):
exec:   File "/usr/sbin/ceph-volume", line 11, in <module>
exec:     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
exec:     self.main(self.argv)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
exec:     return f(*a, **kw)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
exec:     terminal.dispatch(self.mapper, subcommand_args)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
exec:     instance.main()
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
exec:     terminal.dispatch(self.mapper, self.argv)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
exec:     instance.main()
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
exec:     return func(*a, **kw)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 415, in main
exec:     self._execute(plan)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 431, in _execute
exec:     p.safe_prepare(argparse.Namespace(**args))
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
exec:     self.prepare()
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
exec:     return func(*a, **kw)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 382, in prepare
exec:     self.args.block_db_slots)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 189, in setup_device
exec:     name_uuid = self.get_ptuuid(device_name)
exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 135, in get_ptuuid
exec:     raise RuntimeError('unable to use device')
exec: RuntimeError: unable to use device
provision failed to configure devices: failed to initialize devices: failed ceph-volume: exit status 1
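If I understand the error right, blkid simply has no PARTUUID to report
for a logical volume (there is no partition table on it), even though
the LV has its own UUID. Something like this should show it (I haven't
verified these exact commands on the node yet):

  blkid -s PARTUUID -o value /dev/cephDB/database   # prints nothing: an LV carries no PARTUUID
  lvs -o lv_name,lv_uuid cephDB                     # the LV UUID is there, just not a PARTUUID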

Thanks for any ideas/help.
