Increasing number of PGs

Hello,

I am working on optimizing Ceph performance: CPU load vs. OSD data load. Right now I have 576 PGs in total across three pools: metadata, data, and rbd, with 192 PGs each. The data pool is not used heavily, while rbd is in heavy use. There are 6 OSDs in the cluster. I have read the recommendation of about 100 PGs per OSD, and roughly speaking I do have 100 PGs per OSD. But it now seems that the PGs in the data pool are mostly empty while the PGs in the rbd pool are quite busy. Would it make sense to increase the number of PGs in the rbd pool so that this pool alone has about 100 PGs per OSD? Technically that would take some extra memory and CPU, but since the other two pools are virtually standing still it should not make a big difference, while improving data placement on the OSDs (which is currently somewhat skewed).
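
For reference, the back-of-the-envelope arithmetic I am using (a quick Python sketch; it counts each PG once and ignores replication, which is why it lines up with the "roughly 100" figure above -- with replicas each OSD of course holds copies of more PGs):

    # PGs per OSD, counting each PG once (replication not included).
    pools = {"metadata": 192, "data": 192, "rbd": 192}  # pool -> pg_num
    num_osds = 6
    total_pgs = sum(pools.values())            # 576
    print("PGs per OSD:", total_pgs // num_osds)  # 96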

So here are two questions:

1. Should I increase the number of PGs in the rbd pool, or is it better to leave it where it is? (A sketch of the numbers I have in mind follows below.)

2. The wiki says that increasing the number of PGs is not tested and should only be attempted on an empty pool. The date on that wiki page is quite old. Is this still an issue? Is it safe to increase the PG count of a pool that is in use on 0.49?
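
To be concrete about question 1, this is the pg_num I have in mind for the rbd pool (only a sketch: targeting ~100 PGs per OSD for that pool alone and rounding up to a power of two, which is my reading of the general recommendation, not something the wiki says about this case):

    # Hypothetical new pg_num for the rbd pool.
    num_osds = 6
    target_per_osd = 100
    raw = num_osds * target_per_osd           # 600
    new_pg_num = 1 << (raw - 1).bit_length()  # next power of two -> 1024
    print("candidate rbd pg_num:", new_pg_num)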

Regards,
Vladimir

