Re: Bluestore cluster example

Hi Mark,

On Fri, Apr 15, 2016 at 2:06 PM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> Hi all,
>
> A couple of folks have asked me how to setup bluestore clusters for
> performance testing.  I personally am using cbt for this, but you should be
> able to use ceph-disk with some other cluster creation method as well.
>
> For CBT, you really don't need to do much.  In the old newstore days, a
> "block" symlink needed to be created in the osd data dir to link to the new
> block device.  CBT did this when the "newstore_block: True" option was set
> in the cluster section of the cbt yaml file.  This isn't really needed
> anymore, as you can now specify the block, db, and wal devices directly in
> your ceph.conf file.  If your partitions are set up properly, you can create
> bluestore clusters without having to do anything beyond changing the
> ceph.conf file (with cbt at least).
>
> Here's a very basic example:
>
> [global]
>         enable experimental unrecoverable data corrupting features = bluestore rocksdb
>         osd objectstore = bluestore
>
> [osd.0]
>         host = incerta01.front.sepia.ceph.com
>         osd data = /tmp/cbt/mnt/osd-device-0-data
>         bluestore block path = /dev/disk/by-partlabel/osd-device-0-block
>         bluestore block db path = /dev/disk/by-partlabel/osd-device-0-db
>         bluestore block wal path = /dev/disk/by-partlabel/osd-device-0-wal
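
The by-partlabel paths in that example assume GPT partitions that were
named ahead of time.  Roughly, something like the following sgdisk calls
would create them (device name and sizes are placeholders of mine, not
taken from Mark's partitioning script):

    # Placeholder device and sizes; the partition names must match the
    # /dev/disk/by-partlabel paths used in ceph.conf.
    sgdisk --new=1:0:+100G --change-name=1:osd-device-0-block /dev/sdb
    sgdisk --new=2:0:+10G  --change-name=2:osd-device-0-db    /dev/sdb
    sgdisk --new=3:0:+1G   --change-name=3:osd-device-0-wal   /dev/sdb
    partprobe /dev/sdb   # re-read the table so the by-partlabel links appear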

The db and wal paths are optional performance-boosting settings, right?

If I do ceph-disk prepare --bluestore /dev/sdae, I get:

/dev/sdae :
 /dev/sdae2 ceph block, for /dev/sdae1
 /dev/sdae1 ceph data, active, cluster ceph, osd.413, block /dev/sdae2
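
That output only shows a data and a block partition, so presumably rocksdb
and its WAL just end up on the block device.  If I read the layout right
(the mount point below is the usual ceph-disk location, so treat it as an
assumption on my part), the symlinks in the data dir should confirm it:

    # Assumed ceph-disk mount point for osd.413
    ls -l /var/lib/ceph/osd/ceph-413/
    # A lone "block" symlink pointing at /dev/sdae2, with no block.db or
    # block.wal symlinks, would mean db and wal are colocated on block.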

-- Dan


>
>
> Here we enable the experimental bluestore and rocksdb features, set the
> objectstore to bluestore, and then in the OSD sections manually set the osd
> data, bluestore block, bluestore block db, and bluestore block wal paths.
> You might be wondering what all of these are for:
>
> osd data <-- very small directory on FS for bootstrapping OSD.
> bluestore block <-- where the actual object data lives
> bluestore block db path <-- where rocksdb lives
> bluestore block wal path <-- where rocksdb writeahead log lives
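
Splitting these across devices is presumably where the performance win
comes from: a layout I would expect to work (my assumption, not something
spelled out above; hostname and partition labels are placeholders) keeps
the block device on a large slow disk and puts the db and wal partitions
on flash:

    # Hypothetical split layout: block on HDD, db and wal on SSD/NVMe.
    [osd.0]
            host = node01.example.com
            osd data = /tmp/cbt/mnt/osd-device-0-data
            bluestore block path = /dev/disk/by-partlabel/osd-device-0-hdd-block
            bluestore block db path = /dev/disk/by-partlabel/osd-device-0-ssd-db
            bluestore block wal path = /dev/disk/by-partlabel/osd-device-0-ssd-wal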
>
> And that's basically it.  I've uploaded an example partitioning script,
> ceph.conf file, and cbt yaml configuration file, based on actual tests
> I've been running, to the examples folder here:
>
> https://github.com/ceph/cbt/tree/master/example/bluestore
>
> Thanks,
> Mark


