Re: Placement Groups

After doing a little more digging, I see each of the three default pools has 2624 PGs:

pool 0 'data' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 2624 pgp_num 2624 last_change 38 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 crush_ruleset 1 object_hash rjenkins pg_num 2624 pgp_num 2624 last_change 1 owner 0
pool 2 'rbd' rep size 2 crush_ruleset 2 object_hash rjenkins pg_num 2624 pgp_num 2624 last_change 1 owner 0
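
For anyone who ends up in the same spot, the per-pool counts can be inspected and, on a release that supports PG splitting, raised after the fact. A minimal sketch; the target of 4096 below is just a placeholder, not a recommendation:

# show pg_num / pgp_num for every pool
ceph osd dump | grep pg_num

# raise placement groups on an existing pool; pgp_num should follow pg_num
ceph osd pool set data pg_num 4096
ceph osd pool set data pgp_num 4096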


On Fri, Mar 1, 2013 at 9:17 AM, Scott Kinder <skinder@xxxxxxxxxxx> wrote:
In my ceph.conf file, I set the following options under the [osd] section:

osd pool default pg num = 133
osd pool default pgp num = 133

And yet, after running mkcephfs, when I run ceph -s it shows:

pgmap v23972: 7872 pgs: 7872 active+clean;

I should also mention that I have 40 OSDs with a replication level of 3. Am I misunderstanding something, or did mkcephfs ignore that option?
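
For reference, the arithmetic behind the mismatch, using only the numbers above:

expected: 3 pools x 133 PGs  =  399 PGs
reported: 3 pools x 2624 PGs = 7872 PGs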

