Re: PG bottlenecks

On 3/25/19 10:24 AM, Rafał Wądołowski wrote:
> Hi,
> 
> On one of our clusters (3400 OSDs, ~25 PB, 12.2.4), we increased pg_num and
> pgp_num on one pool (EC 4+2) from 32k to 64k. After that the cluster became
> unstable for about an hour: PGs were inactive (some activating, some
> peering).
> 
> Any idea which bottleneck we hit? Any suggestions on what I should change
> in the Ceph or OS configuration?
> 
> 

Hi,

For such cases (PG increases) we usually proceed in multiple smaller steps,
bumping both pg_num and pgp_num and then waiting for the cluster to report
all PGs active before the next increase. Note, however, that we only work
with replicated pools. A rough sketch of that procedure is below.
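For illustration only, here is a minimal Python sketch of that stepwise
approach. It assumes the standard "ceph osd pool set <pool> pg_num/pgp_num"
and "ceph osd pool get" commands and parses "ceph status -f json"; the pool
name, step size, and polling interval are placeholders, not values from this
thread, and you would want to tune the step to your cluster.

#!/usr/bin/env python3
"""Sketch: grow a pool's pg_num/pgp_num in small steps.

Assumptions (not from the original thread): pool name, step size and
poll interval are placeholders; 'ceph status -f json' is parsed for
pgmap.pgs_by_state to decide whether all PGs are active+clean again.
"""
import json
import subprocess
import time

POOL = "mypool"          # hypothetical pool name
TARGET_PG_NUM = 65536    # e.g. 64k, as in the original post
STEP = 4096              # grow by a few thousand PGs per step (assumption)
POLL_SECONDS = 30


def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.check_output(["ceph", *args], text=True)


def all_pgs_active_clean():
    """True if every PG state reported by 'ceph status' is active+clean."""
    status = json.loads(ceph("status", "-f", "json"))
    states = status["pgmap"]["pgs_by_state"]
    return all(s["state_name"] == "active+clean" for s in states)


def current_pg_num():
    """Read the pool's current pg_num."""
    out = json.loads(ceph("osd", "pool", "get", POOL, "pg_num", "-f", "json"))
    return out["pg_num"]


def main():
    pg_num = current_pg_num()
    while pg_num < TARGET_PG_NUM:
        pg_num = min(pg_num + STEP, TARGET_PG_NUM)
        # Raise pg_num first, then pgp_num, then wait for the cluster
        # to settle before taking the next step.
        ceph("osd", "pool", "set", POOL, "pg_num", str(pg_num))
        ceph("osd", "pool", "set", POOL, "pgp_num", str(pg_num))
        while not all_pgs_active_clean():
            time.sleep(POLL_SECONDS)
        print(f"{POOL}: pg_num now {pg_num}, cluster settled")


if __name__ == "__main__":
    main()

Waiting for active+clean between steps is conservative; waiting for merely
"all PGs active" would also match what we described above.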

-- 
PS
_______________________________________________
Ceph-large mailing list
Ceph-large@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-large-ceph.com
