RE: Bluestore cluster example

Thanks, Mark!

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Mark Nelson
Sent: Friday, April 15, 2016 5:06 AM
To: cbt
Cc: ceph-devel
Subject: Bluestore cluster example

Hi all,

A couple of folks have asked me how to set up bluestore clusters for performance testing.  I personally am using cbt for this, but you should be able to use ceph-disk or some other cluster creation method as well.

For CBT, you really don't need to do much.  In the old newstore days, a "block" symlink needed to be created in the osd data dir to link to the new block device; CBT did this when the "newstore_block: True" option was set in the cluster section of the cbt yaml file.  That isn't needed anymore, as you can now specify the block, db, and wal devices directly in your ceph.conf file.  If your partitions are set up properly, you can create bluestore clusters without doing anything beyond changing the ceph.conf file (with cbt at least).
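For example, the cluster section of the cbt yaml no longer needs any bluestore-specific option; it just points at a ceph.conf that defines the devices.  Here's a minimal sketch (the hostname, monitor address, and paths are placeholders, and a few key names may differ between cbt versions):

    cluster:
      user: 'ceph'
      head: "testhost"
      clients: ["testhost"]
      osds: ["testhost"]
      mons:
        testhost:
          a: "127.0.0.1:6789"
      osds_per_node: 1
      fs: 'xfs'
      conf_file: '/etc/ceph/ceph.conf'
      iterations: 1

All of the bluestore specifics live in the ceph.conf that conf_file points at.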

Here's a very basic example:

[global]
         enable experimental unrecoverable data corrupting features = bluestore rocksdb
         osd objectstore = bluestore

[osd.0]
         host = incerta01.front.sepia.ceph.com
         osd data = /tmp/cbt/mnt/osd-device-0-data
         bluestore block path = /dev/disk/by-partlabel/osd-device-0-block
         bluestore block db path = /dev/disk/by-partlabel/osd-device-0-db
         bluestore block wal path = /dev/disk/by-partlabel/osd-device-0-wal


Here we enable the experimental bluestore and rocksdb features, set the objectstore to bluestore, and then in the OSD sections manually set the osd data, bluestore block, bluestore block db, and bluestore block wal paths.  You might be wondering what all of these are for:

osd data <-- very small directory on FS for bootstrapping the OSD.
bluestore block <-- where the actual object data lives
bluestore block db path <-- where rocksdb lives
bluestore block wal path <-- where the rocksdb write-ahead log lives
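
If you want to create those partitions by hand, a sketch along these lines should work (the device name and partition sizes here are made up; see the examples folder below for the script I actually used):

    #!/bin/bash
    # Sketch: carve a single device into wal, db, and block partitions
    # with partlabels matching the ceph.conf example above.
    DEV=/dev/sdb                      # assumption: pick your own device
    sgdisk --zap-all ${DEV}
    sgdisk --new=1:0:+1G  --change-name=1:osd-device-0-wal   ${DEV}
    sgdisk --new=2:0:+10G --change-name=2:osd-device-0-db    ${DEV}
    sgdisk --new=3:0:0    --change-name=3:osd-device-0-block ${DEV}
    partprobe ${DEV}                  # udev then populates /dev/disk/by-partlabel

In a real test you'd normally put the db and wal partitions on a faster device than the block partition; a single device just keeps the labeling example simple.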

And that's basically it.  I've uploaded an example partitioning script, ceph.conf file, and cbt yaml configuration file, based on actual tests I'm running, to the examples folder here:

https://github.com/ceph/cbt/tree/master/example/bluestore

Thanks,
Mark