Re: Fixing a crushmap


 



The process of creating an erasure-coded pool and a replicated one is slightly different. You can use Sébastien's guide to create/manage the osd tree, but you should follow this guide http://ceph.com/docs/giant/dev/erasure-coded-pool/ to create the EC pool.

I'm not sure whether creating an EC pool the way you did works (I've never tried it). The normal replicated pools do work like that.
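For reference, the giant-era docs have you create the EC pool from an erasure-code profile rather than pointing it at a hand-written ruleset. A rough sketch (the profile name, pool name, pg counts, and k/m values below are just placeholders, not taken from your setup):

```shell
# Define an erasure-code profile: k=2 data chunks, m=1 coding chunk,
# failure domain of host. "myprofile" is an example name.
ceph osd erasure-code-profile set myprofile \
    k=2 m=1 ruleset-failure-domain=host

# Create the pool from that profile. Ceph then generates a matching
# "erasure"-type CRUSH ruleset itself, instead of reusing one written
# for replicated pools.
ceph osd pool create ecpool 128 128 erasure myprofile
```

The point being that a ruleset you wrote for replicated pools isn't automatically usable by an EC pool.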

On Fri, Feb 20, 2015 at 4:49 PM, Kyle Hutson <kylehutson@xxxxxxx> wrote:
I manually edited my crushmap, basing my changes on http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
I have SSDs and HDDs in the same box and was wanting to separate them by ruleset. My current crushmap can be seen at http://pastie.org/9966238

I had it installed and everything looked good... until I created a new pool. All of the new pgs are stuck in "creating". I first tried creating an erasure-coded pool using ruleset 3, then created another pool using ruleset 0. Same result.

I'm not opposed to an 'RTFM' answer, so long as you can point me to the right one. I've seen very little documentation on crushmap rules, in particular.
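For context, the SSD/HDD split in that blog post boils down to separate CRUSH roots and a rule per root, along these lines (bucket and rule names here are illustrative, since I can't see the pastie; a rule like this is `type replicated`, which an EC pool can't use as-is):

```
# Separate root for the SSD hierarchy; a matching "platter" root
# would hold the HDD hosts.
root ssd {
        id -5
        alg straw
        hash 0
        item node1-ssd weight 1.000
}

# Replicated rule that only places data under the ssd root.
rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
```

One common reason pgs sit in "creating" with a custom map like this is that the root a rule takes doesn't contain enough OSDs (or any) to satisfy the pool's size, so it's worth checking that each `step take` target actually has hosts and OSDs under it.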

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

