How should I deal with placement group numbers when reducing number of OSDs

Hi,
we're in the process of replacing 480 GB drives with 1200 GB drives, which should cut my number of OSDs roughly to a third.

My largest pool, "volumes" (for OpenStack volumes), has 16384 PGs at the moment, and I have ~36K PGs in total. That works out to ~180 PGs/OSD today and would become ~500 PGs/OSD.
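A rough sketch of that arithmetic (the OSD counts and the replication factor are my assumptions, inferred from the figures above; only the PG totals are from the post):

```python
# Back-of-the-envelope PGs-per-OSD calculation.
# Assumes replicated pools with size=3; each PG then places 3 copies.
total_pgs = 36 * 1024          # ~36K PGs across all pools
replica_size = 3               # assumed replication factor
pg_copies = total_pgs * replica_size

osds_before = 614              # hypothetical OSD count giving ~180 PGs/OSD
osds_after = osds_before // 3  # 480G -> 1200G cuts the OSD count to ~1/3

print(f"before: {pg_copies / osds_before:.0f} PGs/OSD")  # roughly 180
print(f"after:  {pg_copies / osds_after:.0f} PGs/OSD")   # roughly 540
```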

I know I can't actually decrease the number of PGs in a pool, so I'm wondering whether it's worth working around that to bring the numbers down. It's possible I'll expand the storage in the future, but probably not 3-fold.

My feeling is that it's not worth bothering with, and that I'll just have to disable the "too many PGs per OSD" warning if I upgrade.
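For reference, raising that warning threshold would look something like the below; the option name here (mon_pg_warn_max_per_osd) is what I believe the monitor uses, but check the documentation for your release, as the name and default have changed between versions:

```ini
[mon]
# Assumed option name; raises the "too many PGs per OSD" warning
# threshold above ~500 so HEALTH_WARN isn't triggered.
mon_pg_warn_max_per_osd = 600
```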

I've already put some of the new drives in, and the OSDs seem to work fine (though I had to restart them after backfilling - they were spinning CPU for no apparent reason).

Your thoughts?

Thanks
Jan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
