Hi,

My current setup is 12 OSDs split across 3 hosts. We're using this cluster for VMs (Proxmox) and nothing else. According to:
http://docs.ceph.com/docs/master/rados/operations/placement-groups/ my pg_num should be set to 4096. If I use the calculator instead, with size 3, 12 OSDs, and a target of 200 PGs per OSD, I get 1024.
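For what it's worth, my understanding of the calculator's math (please correct me if I have this wrong):

  (12 OSDs x 200 target PGs per OSD) / 3 replicas = 800 -> next power of two = 1024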
So I decided to split the difference and use 2048, but Ceph is warning me that I have too many PGs per OSD: 512 (2048 PGs x 3 replicas / 12 OSDs):
root@pve151201:~# ceph -w
    cluster 9005acf0-17a2-4973-bfe0-55dc9f23786c
     health HEALTH_WARN
            too many PGs per OSD (512 > max 300)
     monmap e3: 3 mons at {0=172.31.31.21:6789/0,1=172.31.31.22:6789/0,2=172.31.31.23:6789/0}
            election epoch 8310, quorum 0,1,2 0,1,2
     osdmap e32336: 12 osds: 12 up, 12 in
      pgmap v9908729: 2048 pgs, 1 pools, 237 GB data, 62340 objects
            719 GB used, 10453 GB / 11172 GB avail
                2048 active+clean

# ceph osd pool get rbd pg_num
pg_num: 2048

# ceph osd pool get rbd pgp_num
pgp_num: 2048

# ceph osd lspools
3 rbd,

# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

Is this safe to ignore?
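If it is ignorable, I'm guessing from the warning text (this is just my assumption) that the 300 limit is the mon_pg_warn_max_per_osd option, so raising it in ceph.conf on the monitors would quiet the message, something like:

  [mon]
  # default is 300; we currently sit at 512 PGs per OSD (2048 x 3 / 12)
  mon pg warn max per osd = 600

But I'd rather know whether 512 per OSD is actually harmful than just hide the warning.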
If I were instead to decrease it to 1024, is this a safe way to do it:
http://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/ ? It seems to make sense (my reading of it is sketched below), but I don't have enough Ceph experience (or guts) to give it a go…
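As I read that post, the change itself boils down to two commands (adapted to my pool name, untested by me, and I'm not even sure Ceph accepts a decrease, since the post only shows increases):

  ceph osd pool set rbd pg_num 1024     # change the PG count for the pool
  ceph osd pool set rbd pgp_num 1024    # then the placement count, to match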
Thanks in advance,

Carlos M. Perez
CMP Consulting Services
305-669-1515