Impact of large PG splits

Hi,

I'm trying to estimate the possible impact when large PGs are split. Here's one example of such a PG:

PG_STAT  OBJECTS  BYTES         OMAP_BYTES*  OMAP_KEYS*  LOG   DISK_LOG  UP
86.3ff   277708   414403098409  0            0           3092  3092      [187,166,122,226,171,234,177,163,155,34,81,239,101,13,117,8,57,111]
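
For reference, this is roughly how the largest PGs of that pool can be listed (the pool name is a placeholder, and the JSON field names should be double-checked against the running release):

  # Ten largest PGs of the pool by stored bytes
  ceph pg ls-by-pool <pool> -f json \
    | jq -r '.pg_stats[] | [.pgid, .stat_sum.num_objects, .stat_sum.num_bytes] | @tsv' \
    | sort -k3 -rn | head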

Their main application is RGW on EC (currently 1024 PGs on 240 OSDs), 8 TB HDDs backed by SSDs. There are 6 RGWs running behind HAProxies. It took me a while to convince them to do a PG split, and now they're trying to assess how big the impact could be. The fullest OSD is already at 85% usage, the least filled one at 59%, so there is definitely room for better balancing, which will be necessary until the new hardware arrives. The current distribution is around 100 PGs per OSD, which would usually be fine, but since the PGs are that large, a difference of only a few PGs has a huge impact on OSD utilization.
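
The current imbalance and the balancer state can be checked roughly like this:

  # Per-OSD utilization and PG counts; the MIN/MAX VAR and STDDEV summary
  # at the bottom shows how uneven the distribution currently is
  ceph osd df

  # Balancer state and a score for the current distribution (lower is better)
  ceph balancer status
  ceph balancer eval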

I'm targeting 2048 PGs for that pool for now, and will probably do another split once the new hardware has been integrated.
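
In case it helps, this is roughly how I'd run the split (pool name is a placeholder; assuming a Nautilus or newer release, the mgr raises pgp_num in small steps, throttled by target_max_misplaced_ratio, which defaults to 0.05):

  # Optional: limit the fraction of misplaced objects per step to slow the split down
  ceph config set mgr target_max_misplaced_ratio 0.01

  # Trigger the split; pgp_num follows automatically in increments
  ceph osd pool set <pool> pg_num 2048

  # Follow progress
  ceph osd pool get <pool> pgp_num
  ceph status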
Any comments are appreciated!

Eugen