Re: Multi-device BlueStore testing

I don’t think ceph-disk has support for separating block.db and block.wal yet (?).

You need to create the cluster manually by running mkfs.
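For example, something along these lines should do it (the partitions below are just placeholders, and the exact option names may vary a bit between releases):

    # ceph.conf -- per-OSD section pointing the DB/WAL at separate partitions
    [osd.0]
        osd objectstore = bluestore
        bluestore block db path = /dev/nvme0n1p1    # placeholder partition
        bluestore block wal path = /dev/nvme0n1p2   # placeholder partition

    # then initialize the OSD data dir and block devices by hand
    ceph-osd -i 0 --mkfs --mkkey

If I remember right, mkfs is also what writes the bdev labels that the mount attempt below is complaining about.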

Or, if you still have the old mkcephfs script (which is sadly deprecated), you can point it at the db/wal paths and it will create the cluster for you. I am using that to configure BlueStore on multiple devices.
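If I remember right, the invocation is just something like the following, with the db/wal paths set in the per-OSD sections of ceph.conf as in the sketch above:

    mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring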

Alternatively, vstart.sh also has support for multi-device BlueStore configurations, I believe.
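I don't remember the exact flags off-hand, but from a source checkout it is something like this (-o just appends extra settings to the ceph.conf that vstart generates):

    # run from the build directory; flags may differ between branches
    MON=1 OSD=1 MDS=0 ../src/vstart.sh -n -x --bluestore \
        -o 'bluestore block db path = /dev/nvme0n1p1'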

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Stillwell, Bryan J
Sent: Tuesday, July 19, 2016 3:36 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Multi-device BlueStore testing

 

I would like to do some BlueStore testing using multiple devices, as mentioned here:

 

 

However, simply creating the block.db and block.wal symlinks and pointing them at empty partitions doesn't appear to be enough.
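For concreteness, the symlinks were created roughly like this (the NVMe partition names here are just placeholders for the real ones):

    cd /var/lib/ceph/osd/ceph-0
    ln -s /dev/nvme0n1p3 block.db    # small partition intended for the RocksDB data
    ln -s /dev/nvme0n1p4 block.wal   # small partition intended for the RocksDB WAL

With those in place, the OSD fails to mount: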

 

2016-07-19 21:30:15.717827 7f48ec4d9800  1 bluestore(/var/lib/ceph/osd/ceph-0) mount path /var/lib/ceph/osd/ceph-0
2016-07-19 21:30:15.717855 7f48ec4d9800  1 bluestore(/var/lib/ceph/osd/ceph-0) fsck
2016-07-19 21:30:15.717869 7f48ec4d9800  1 bdev create path /var/lib/ceph/osd/ceph-0/block type kernel
2016-07-19 21:30:15.718367 7f48ec4d9800  1 bdev(/var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
2016-07-19 21:30:15.718462 7f48ec4d9800  1 bdev(/var/lib/ceph/osd/ceph-0/block) open size 6001069202944 (5588 GB) block_size 4096 (4096 B)
2016-07-19 21:30:15.718786 7f48ec4d9800  1 bdev create path /var/lib/ceph/osd/ceph-0/block.db type kernel
2016-07-19 21:30:15.719305 7f48ec4d9800  1 bdev(/var/lib/ceph/osd/ceph-0/block.db) open path /var/lib/ceph/osd/ceph-0/block.db
2016-07-19 21:30:15.719388 7f48ec4d9800  1 bdev(/var/lib/ceph/osd/ceph-0/block.db) open size 1023410176 (976 MB) block_size 4096 (4096 B)
2016-07-19 21:30:15.719394 7f48ec4d9800  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block.db size 976 MB
2016-07-19 21:30:15.719586 7f48ec4d9800 -1 bluestore(/var/lib/ceph/osd/ceph-0/block.db) _read_bdev_label unable to decode label at offset 66: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
2016-07-19 21:30:15.719597 7f48ec4d9800 -1 bluestore(/var/lib/ceph/osd/ceph-0) _open_db check block device(/var/lib/ceph/osd/ceph-0/block.db) label returned: (22) Invalid argument
2016-07-19 21:30:15.719602 7f48ec4d9800  1 bdev(/var/lib/ceph/osd/ceph-0/block.db) close
2016-07-19 21:30:15.999311 7f48ec4d9800  1 bdev(/var/lib/ceph/osd/ceph-0/block) close
2016-07-19 21:30:16.243312 7f48ec4d9800 -1 osd.0 0 OSD:init: unable to mount object store

 

I originally used 'ceph-disk prepare --bluestore' to create the OSD, but I feel like there is some kind of initialization step I need to do when moving the db and wal over to an NVMe device.  My Google searches just aren't turning up much.  Could someone point me in the right direction?

 

Thanks,

Bryan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
