Hi Stefan,
We are also interested in bluestore, but have not looked into it yet.
We tried keyvaluestore before, and that could be enabled by setting
the osd objectstore value.
And in this ticket http://tracker.ceph.com/issues/13942 I see:
[global]
enable experimental unrecoverable data corrupting features = *
bluestore fsck on mount = true
bluestore block db size = 67108864
bluestore block wal size = 134217728
bluestore block size = 5368709120
osd objectstore = bluestore
So I guess this could work for bluestore too.
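As a quick sanity check (this is plain Python, not Ceph tooling), the [global] snippet from the ticket can be written to a string and read back with configparser to confirm the option names and values parse the way Ceph's ini-style config expects:

```python
import configparser

# The [global] snippet quoted above, verbatim.
snippet = """
[global]
enable experimental unrecoverable data corrupting features = *
bluestore fsck on mount = true
bluestore block db size = 67108864
bluestore block wal size = 134217728
bluestore block size = 5368709120
osd objectstore = bluestore
"""

cfg = configparser.ConfigParser()
cfg.optionxform = str  # keep option names exactly as written (no lowercasing)
cfg.read_string(snippet)

assert cfg["global"]["osd objectstore"] == "bluestore"
# 5368709120 bytes = 5 GiB for the main bluestore block device
assert int(cfg["global"]["bluestore block size"]) == 5 * 1024**3
print(cfg["global"]["bluestore block db size"])  # prints 67108864
```

Note that 67108864 = 64 MiB (db) and 134217728 = 128 MiB (wal), so the snippet carves out small db/wal areas next to the 5 GiB data block.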
Very curious to hear what you see stability- and performance-wise :)
Cheers,
Kenneth
On 14/03/16 16:03, Stefan Lissmats wrote:
Hello everyone!
I think the new bluestore sounds great and wanted to try it out
in my test environment, but I couldn't find any documentation on
how to use it. I finally managed to test it, and it really looks
promising performance-wise.
If anyone has more information or guides for bluestore,
please tell me where to find them.
I thought I would share how I managed to get a new Jewel
cluster with bluestore-based OSDs working.
What I found so far is that ceph-disk can create new
bluestore OSDs (but not ceph-deploy; please correct me if I'm
wrong), and I need to have "enable experimental unrecoverable
data corrupting features = bluestore rocksdb" in the [global]
section of ceph.conf.
After that I can create new OSDs with: ceph-disk prepare
--bluestore /dev/sdg
So I created a cluster with ceph-deploy without any OSDs
and then used ceph-disk on the hosts to create the OSDs.
Pretty simple in the end, but it took me a while to figure
that out.
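Putting the steps above together, a rough sketch of the workflow might look like this. The monitor hostname and device path are placeholders, and these commands assume a live Jewel-era cluster; treat this as an outline, not a tested recipe:

```shell
# 1. Create the cluster with ceph-deploy, monitors only, no OSDs
#    ("mon1" is a placeholder hostname):
ceph-deploy new mon1
ceph-deploy mon create-initial

# 2. In the [global] section of ceph.conf, enable the experimental flag:
#      enable experimental unrecoverable data corrupting features = bluestore rocksdb
#      osd objectstore = bluestore
#    then push the config to the hosts.

# 3. On each OSD host, prepare a bluestore OSD on a raw disk
#    (/dev/sdg is a placeholder device):
ceph-disk prepare --bluestore /dev/sdg

# 4. Activate the data partition if udev does not do it automatically:
ceph-disk activate /dev/sdg1
```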
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com