Re: Fwd: Separate --block.wal --block.db bluestore not working as expected.


 



On 4/7/18 4:21 PM, Alfredo Deza wrote:
On Sat, Apr 7, 2018 at 11:59 AM, Gary Verhulp <garyv@xxxxxxxxxxxxxx> wrote:




I’m trying to create bluestore OSDs with separate --block.wal and --block.db
devices on a write-intensive SSD.



I’ve split the SSD (/dev/sda) into two partitions, sda1 and sda2, for db and
wal.





It seems to me the OSD uuid is getting changed, and I’m only able to start the
last OSD.



Do I need to create a new partition or logical volume on the SSD for each
OSD?
Correct! A separate partition or logical volume is needed for each OSD.
You are re-using the same db and wal partitions for the second OSD,
which is why you are getting the following message:



2018-04-06 19:45:43.730515 7fe91a9cfd00 -1 bluestore(/dev/sda1)
_check_or_set_bdev_label bdev /dev/sda1 fsid
eb6cbcb3-f644-4973-b745-0e4389ef719c does not match our fsid
9d7a103a-f590-4842-bd3d-e9da27c3fb09
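
A rough sketch of the layout that is needed, with one db and one wal
partition per OSD (the partition numbers and sizes here are only
placeholders, adjust them for your SSD):

# one db partition and one wal partition per OSD (sizes are examples)
sgdisk --new=1:0:+90G --change-name=1:osd-sdc-db  /dev/sda
sgdisk --new=2:0:+2G  --change-name=2:osd-sdc-wal /dev/sda
sgdisk --new=3:0:+90G --change-name=3:osd-sdd-db  /dev/sda
sgdisk --new=4:0:+2G  --change-name=4:osd-sdd-wal /dev/sda

# each prepare call gets its own db/wal pair, nothing is shared
ceph-volume lvm prepare --bluestore --data /dev/sdc --block.db /dev/sda1 --block.wal /dev/sda2
ceph-volume lvm prepare --bluestore --data /dev/sdd --block.db /dev/sda3 --block.wal /dev/sda4

That way the second prepare call does not overwrite the label that the
first OSD already wrote on sda1/sda2.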





I’m sure this is a simple gap in my understanding of how it is supposed to
be provisioned.

Any advice would be appreciated.



Thanks,

Gary





[root@osdhost osd]# ceph-volume lvm prepare --bluestore --data /dev/sdc
--block.wal /dev/sda2 --block.db /dev/sda1

Running command: sudo vgcreate --force --yes
ceph-5a6b8ab6-ca12-4855-9a5a-a3a54c249034 /dev/sdc

stdout: Physical volume "/dev/sdc" successfully created.

stdout: Volume group "ceph-5a6b8ab6-ca12-4855-9a5a-a3a54c249034"
successfully created

Running command: sudo lvcreate --yes -l 100%FREE -n
osd-block-9d7a103a-f590-4842-bd3d-e9da27c3fb09
ceph-5a6b8ab6-ca12-4855-9a5a-a3a54c249034

stdout: Logical volume "osd-block-9d7a103a-f590-4842-bd3d-e9da27c3fb09"
created.

Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1

Running command: chown -R ceph:ceph /dev/dm-2

Running command: sudo ln -s
/dev/ceph-5a6b8ab6-ca12-4855-9a5a-a3a54c249034/osd-block-9d7a103a-f590-4842-bd3d-e9da27c3fb09
/var/lib/ceph/osd/ceph-1/block

Running command: sudo ceph --cluster ceph --name client.bootstrap-osd
--keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
/var/lib/ceph/osd/ceph-1/activate.monmap

stderr: got monmap epoch 1

Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring
--create-keyring --name osd.1 --add-key
AQDjL8haKmzYOhAAM7ehRUUgF/n4x/Ybu7VR/g==

stdout: creating /var/lib/ceph/osd/ceph-1/keyring

stdout: added entity osd.1 auth auth(auid = 18446744073709551615
key=AQDjL8haKmzYOhAAM7ehRUUgF/n4x/Ybu7VR/g== with 0 caps)

Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring

Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/

Running command: chown -R ceph:ceph /dev/sda2

Running command: chown -R ceph:ceph /dev/sda1

Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore
--mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --key
**************************************** --bluestore-block-wal-path
/dev/sda2 --bluestore-block-db-path /dev/sda1 --osd-data
/var/lib/ceph/osd/ceph-1/ --osd-uuid 9d7a103a-f590-4842-bd3d-e9da27c3fb09
--setuser ceph --setgroup ceph

stderr: 2018-04-06 19:41:44.519662 7f734f2e4d00 -1
bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode
label at offset 102: buffer::malformed_input: void
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
end of struct encoding

stderr: 2018-04-06 19:41:44.520939 7f734f2e4d00 -1
bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode
label at offset 102: buffer::malformed_input: void
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
end of struct encoding

stderr: 2018-04-06 19:41:44.521190 7f734f2e4d00 -1
bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode
label at offset 102: buffer::malformed_input: void
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
end of struct encoding

stderr: 2018-04-06 19:41:44.521454 7f734f2e4d00 -1
bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid

stderr: 2018-04-06 19:41:47.307648 7f734f2e4d00 -1 key
AQDjL8haKmzYOhAAM7ehRUUgF/n4x/Ybu7VR/g==

stderr: 2018-04-06 19:41:48.068161 7f734f2e4d00 -1 created object store
/var/lib/ceph/osd/ceph-1/ for osd.1 fsid
1ff50434-64ad-42bd-9a70-1968e4a9a813





[root@osdhost osd]# ceph-bluestore-tool show-label --dev /dev/sda1

{

     "/dev/sda1": {

         "osd_uuid": "9d7a103a-f590-4842-bd3d-e9da27c3fb09",

         "size": 200043171840,

         "btime": "2018-04-06 19:41:44.523894",

         "description": "bluefs db"

     }

}



[root@osdhost  osd]# ceph-volume lvm prepare --bluestore --data /dev/sdd
--block.wal /dev/sda2 --block.db /dev/sda1

Running command: sudo vgcreate --force --yes
ceph-cc91203d-de5c-4d27-8c48-a58663075e67 /dev/sdd

stdout: Physical volume "/dev/sdd" successfully created.

stdout: Volume group "ceph-cc91203d-de5c-4d27-8c48-a58663075e67"
successfully created

Running command: sudo lvcreate --yes -l 100%FREE -n
osd-block-eb6cbcb3-f644-4973-b745-0e4389ef719c
ceph-cc91203d-de5c-4d27-8c48-a58663075e67

stdout: Logical volume "osd-block-eb6cbcb3-f644-4973-b745-0e4389ef719c"
created.

Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-6

Running command: chown -R ceph:ceph /dev/dm-8

Running command: sudo ln -s
/dev/ceph-cc91203d-de5c-4d27-8c48-a58663075e67/osd-block-eb6cbcb3-f644-4973-b745-0e4389ef719c
/var/lib/ceph/osd/ceph-6/block

Running command: sudo ceph --cluster ceph --name client.bootstrap-osd
--keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
/var/lib/ceph/osd/ceph-6/activate.monmap

stderr: got monmap epoch 1

Running command: ceph-authtool /var/lib/ceph/osd/ceph-6/keyring
--create-keyring --name osd.6 --add-key
AQA2MMha4FtRFRAAnWV4s4D7/Y9PVZpFBgoLpA==

stdout: creating /var/lib/ceph/osd/ceph-6/keyring

stdout: added entity osd.6 auth auth(auid = 18446744073709551615
key=AQA2MMha4FtRFRAAnWV4s4D7/Y9PVZpFBgoLpA== with 0 caps)

Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/keyring

Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/

Running command: chown -R ceph:ceph /dev/sda2

Running command: chown -R ceph:ceph /dev/sda1

Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore
--mkfs -i 6 --monmap /var/lib/ceph/osd/ceph-6/activate.monmap --key
**************************************** --bluestore-block-wal-path
/dev/sda2 --bluestore-block-db-path /dev/sda1 --osd-data
/var/lib/ceph/osd/ceph-6/ --osd-uuid eb6cbcb3-f644-4973-b745-0e4389ef719c
--setuser ceph --setgroup ceph

stderr: 2018-04-06 19:43:06.855431 7f66aa9e1d00 -1
bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label unable to decode
label at offset 102: buffer::malformed_input: void
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
end of struct encoding

stderr: 2018-04-06 19:43:06.856732 7f66aa9e1d00 -1
bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label unable to decode
label at offset 102: buffer::malformed_input: void
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
end of struct encoding

stderr: 2018-04-06 19:43:06.856985 7f66aa9e1d00 -1
bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label unable to decode
label at offset 102: buffer::malformed_input: void
bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
end of struct encoding

stderr: 2018-04-06 19:43:06.857229 7f66aa9e1d00 -1
bluestore(/var/lib/ceph/osd/ceph-6/) _read_fsid unparsable uuid

stderr: 2018-04-06 19:43:09.643778 7f66aa9e1d00 -1 key
AQA2MMha4FtRFRAAnWV4s4D7/Y9PVZpFBgoLpA==

stderr: 2018-04-06 19:43:10.404159 7f66aa9e1d00 -1 created object store
/var/lib/ceph/osd/ceph-6/ for osd.6 fsid
1ff50434-64ad-42bd-9a70-1968e4a9a813





[root@osdhost osd]# ceph-bluestore-tool show-label --dev /dev/sda1

{

     "/dev/sda1": {

         "osd_uuid": "eb6cbcb3-f644-4973-b745-0e4389ef719c",

         "size": 200043171840,

         "btime": "2018-04-06 19:43:06.859357",

         "description": "bluefs db"

     }

}

In this case I’ve created OSD 1 and OSD 6.

OSD 1 was created first and I cannot start it, but I can start OSD 6.





In the OSD log for osd.1, it complains that the sda1 fsid
eb6cbcb3-f644-4973-b745-0e4389ef719c does not match our fsid
9d7a103a-f590-4842-bd3d-e9da27c3fb09:



2018-04-06 19:45:43.397317 7fe91a9cfd00  0 set uid:gid to 1000:1000
(ceph:ceph)

2018-04-06 19:45:43.397351 7fe91a9cfd00  0 ceph version 12.2.2
(cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable), process
(unknown), pid 145234

2018-04-06 19:45:43.402799 7fe91a9cfd00 -1 Public network was set, but
cluster network was not set

2018-04-06 19:45:43.402807 7fe91a9cfd00 -1     Using public network also for
cluster network

2018-04-06 19:45:43.408794 7fe91a9cfd00  0 pidfile_write: ignore empty
--pid-file

2018-04-06 19:45:43.473264 7fe91a9cfd00  0 load: jerasure load: lrc load:
isa

2018-04-06 19:45:43.473380 7fe91a9cfd00  1 bdev create path
/var/lib/ceph/osd/ceph-1/block type kernel

2018-04-06 19:45:43.473394 7fe91a9cfd00  1 bdev(0x562bbfb82200
/var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block

2018-04-06 19:45:43.473802 7fe91a9cfd00  1 bdev(0x562bbfb82200
/var/lib/ceph/osd/ceph-1/block) open size 1800358854656 (0x1a32dc00000, 1676
GB) block_size 4096 (4096 B) rotational

2018-04-06 19:45:43.474000 7fe91a9cfd00  1
bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes max 0.5 < ratio 0.99

2018-04-06 19:45:43.474035 7fe91a9cfd00  1
bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824
meta 0.5 kv 0.5 data 0

2018-04-06 19:45:43.474045 7fe91a9cfd00  1 bdev(0x562bbfb82200
/var/lib/ceph/osd/ceph-1/block) close

2018-04-06 19:45:43.729419 7fe91a9cfd00  1
bluestore(/var/lib/ceph/osd/ceph-1) _mount path /var/lib/ceph/osd/ceph-1

2018-04-06 19:45:43.729695 7fe91a9cfd00  1 bdev create path
/var/lib/ceph/osd/ceph-1/block type kernel

2018-04-06 19:45:43.729709 7fe91a9cfd00  1 bdev(0x562bbfb82a00
/var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block

2018-04-06 19:45:43.730019 7fe91a9cfd00  1 bdev(0x562bbfb82a00
/var/lib/ceph/osd/ceph-1/block) open size 1800358854656 (0x1a32dc00000, 1676
GB) block_size 4096 (4096 B) rotational

2018-04-06 19:45:43.730195 7fe91a9cfd00  1
bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes max 0.5 < ratio 0.99

2018-04-06 19:45:43.730208 7fe91a9cfd00  1
bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824
meta 0.5 kv 0.5 data 0

2018-04-06 19:45:43.730275 7fe91a9cfd00  1 bdev create path /dev/sda1 type
kernel

2018-04-06 19:45:43.730280 7fe91a9cfd00  1 bdev(0x562bbf831800 /dev/sda1)
open path /dev/sda1

2018-04-06 19:45:43.730477 7fe91a9cfd00  1 bdev(0x562bbf831800 /dev/sda1)
open size 200043171840 (0x2e93809000, 186 GB) block_size 4096 (4096 B)
non-rotational

2018-04-06 19:45:43.730485 7fe91a9cfd00  1 bluefs add_block_device bdev 1
path /dev/sda1 size 186 GB

2018-04-06 19:45:43.730515 7fe91a9cfd00 -1 bluestore(/dev/sda1)
_check_or_set_bdev_label bdev /dev/sda1 fsid
eb6cbcb3-f644-4973-b745-0e4389ef719c does not match our fsid
9d7a103a-f590-4842-bd3d-e9da27c3fb09

2018-04-06 19:45:43.730522 7fe91a9cfd00 -1
bluestore(/var/lib/ceph/osd/ceph-1) _open_db check block device(/dev/sda1)
label returned: (5) Input/output error

2018-04-06 19:45:43.730539 7fe91a9cfd00  1 bdev(0x562bbf831800 /dev/sda1)
close

2018-04-06 19:45:43.984716 7fe91a9cfd00  1 bdev(0x562bbfb82a00
/var/lib/ceph/osd/ceph-1/block) close

2018-04-06 19:45:44.234989 7fe91a9cfd00 -1 osd.1 0 OSD:init: unable to mount
object store
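
For what it’s worth, the mismatch can be confirmed straight from the device
labels (a sketch, with the paths taken from the output above):

# the db partition now carries the second OSD's uuid ...
ceph-bluestore-tool show-label --dev /dev/sda1 | grep osd_uuid
# ... while the first OSD's own block LV still carries the original one
ceph-bluestore-tool show-label \
    --dev /dev/ceph-5a6b8ab6-ca12-4855-9a5a-a3a54c249034/osd-block-9d7a103a-f590-4842-bd3d-e9da27c3fb09 | grep osd_uuid
# ceph-volume also reports which db/wal devices each OSD expects
ceph-volume lvm list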




Thanks for this.

I was a bit confused.

I was thinking that it would create the RocksDB for each OSD on the shared partition, not needing a dedicated partition for each OSD.
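
For the archives, this is the sort of layout I now understand is needed, using
logical volumes instead of raw partitions (just a sketch, the VG/LV names and
sizes are made up, and it assumes this ceph-volume version accepts vg/lv for
--block.db and --block.wal):

# one VG on the SSD, then one db LV and one wal LV per OSD
vgcreate ceph-ssd /dev/sda
lvcreate -L 90G -n db-sdc  ceph-ssd
lvcreate -L 2G  -n wal-sdc ceph-ssd
lvcreate -L 90G -n db-sdd  ceph-ssd
lvcreate -L 2G  -n wal-sdd ceph-ssd

ceph-volume lvm prepare --bluestore --data /dev/sdc --block.db ceph-ssd/db-sdc --block.wal ceph-ssd/wal-sdc
ceph-volume lvm prepare --bluestore --data /dev/sdd --block.db ceph-ssd/db-sdd --block.wal ceph-ssd/wal-sdd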


Thanks again for the clarification.

Gary

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



