Re: pgnum warning and decrease

Hello,

On Wed, 27 Apr 2016 22:55:35 +0000 Carlos M. Perez wrote:

> Hi,
> 
> My current setup is running on 12 OSDs split between 3 hosts.  We're
> using this for VMs (Proxmox) and nothing else.
> 
I assume evenly split (4 OSDs per host)?

> According to:
> http://docs.ceph.com/docs/master/rados/operations/placement-groups/ - my
> pg_num should be set to 4096
> 

That's what you get when people try to simplify a rather complicated
matter. I would have just put a link to PGcalc there instead.

> If I use the calculator, and put in Size 3, OSD 12, and 200PG target, I
> get 1024.
> 
That's the correct answer, and unless you plan to grow (double) your
cluster, around 100 PGs per OSD is a good target.
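
For reference, that's roughly the arithmetic PGcalc does (a sketch
assuming a single pool holding all the data; the real calculator
weights multiple pools):

    # (OSDs * target PGs per OSD) / replica count, rounded up to the
    # next power of two
    echo $(( 12 * 200 / 3 ))    # 800 -> next power of two is 1024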

> So I decided to split the difference and use 2048, but ceph is warning
> me that I have too many: 512 (2048/4)
> 
That's not how ceph gets that number; it's
2048 (PGs) * 3 (replication) / 12 (OSDs) = 512.
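
If you want to verify the actual per-OSD spread, I believe hammer
already has "ceph osd df", which lists each OSD's PG count in its
PGS column:

    # per-OSD utilization and PG counts (see the PGS column)
    ceph osd df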

> root@pve151201:~# ceph -w
>     cluster 9005acf0-17a2-4973-bfe0-55dc9f23786c
>      health HEALTH_WARN
>             too many PGs per OSD (512 > max 300)
>      monmap e3: 3 mons at {0=172.31.31.21:6789/0,1=172.31.31.22:6789/0,2=172.31.31.23:6789/0}
>             election epoch 8310, quorum 0,1,2 0,1,2
>      osdmap e32336: 12 osds: 12 up, 12 in
>       pgmap v9908729: 2048 pgs, 1 pools, 237 GB data, 62340 objects
>             719 GB used, 10453 GB / 11172 GB avail
>                 2048 active+clean
> 
> # ceph osd pool get rbd pg_num
> pg_num: 2048
> # ceph osd pool get rbd pgp_num
> pgp_num: 2048
> # ceph osd lspools
> 3 rbd,
> # ceph -v
> ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
> 
> Safe to ignore?
> 
If you have enough resources (mostly CPU and RAM), yes.
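
If you'd rather keep 2048 and just silence the warning, the threshold
comes from mon_pg_warn_max_per_osd (default 300). As a sketch (untested
on your exact version), raise it in ceph.conf on the monitors and
restart them:

    [mon]
    mon pg warn max per osd = 600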

> If I were to change it to decrease it to 1024, is this a safe way:
> http://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/
> seems to make sense, but I don't have enough ceph experience (and guts)
> to give it a go...
>
You can't decrease PGs, ever. 
The only way is to destroy and re-create the pool(s).
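
For an RBD pool that means downtime: stop the VMs first. A rough sketch
(untested, and note that "rados cppool" has caveats, e.g. it does not
copy pool snapshots, so try it on scratch data first):

    # create a new pool with the desired PG count
    ceph osd pool create rbd_new 1024 1024
    # copy all objects over (slow; clients must be stopped)
    rados cppool rbd rbd_new
    # swap the old pool for the new one
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
    ceph osd pool rename rbd_new rbd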
 

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/


