PGs per OSD guidance

Hi All,
I have been reviewing the sizing of our PGs in light of some intermittent performance issues. When scrubs are running, even when only a few are, we can sometimes see severe impacts on the performance of RBD images, enough to make VMs appear stalled or unresponsive. While some of these scrubs are running I can see very high latency on some disks, which I suspect is what is hurting performance.

We currently have around 70 PGs per SATA OSD and 140 PGs per SSD OSD. Those numbers are probably not very representative, as most of the data sits in only about half of the pools, so some PGs are fairly heavy while others are practically empty. From what I have read, though, we should be able to go significantly higher. We are running 10.2.1, if that matters in this context.
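For context, the rule-of-thumb arithmetic I have been looking at is roughly what the Ceph docs and the pgcalc tool suggest. This is only a sketch: the ~100 PGs-per-OSD target and the example numbers below are assumptions for illustration, not our actual cluster figures.

#!/usr/bin/env python
# Sketch of the pgcalc-style arithmetic for picking a pool's pg_num.
# Assumptions (illustrative only): ~100 PGs per OSD as the commonly
# cited target, and rounding pg_num up to the next power of two.

def target_pg_count(num_osds, replica_size, pgs_per_osd=100, data_fraction=1.0):
    """Rough pg_num for one pool:
    (OSDs * target PGs per OSD * this pool's share of the data) / replica size,
    rounded up to the next power of two."""
    raw = num_osds * pgs_per_osd * data_fraction / float(replica_size)
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# Hypothetical example: 30 OSDs, size=3 pool holding ~50% of the data.
print(target_pg_count(30, 3, data_fraction=0.5))   # -> 512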

My question is: if we increase the number of PGs, is that likely to reduce the scrub impact or spread it wider? For example, does the mere act of scrubbing one PG mean the underlying disk is going to be hammered, so that the load impacts more PGs? Or would having more PGs mean each PG scrubs faster, so the impact is more dispersed?
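In case it helps frame that, here is a rough sketch of how one could check whether a handful of heavy PGs are behind the long scrubs, by summarising PG sizes per pool. It assumes the Jewel-era JSON layout of "ceph pg dump" (a top-level pg_stats list with stat_sum.num_bytes per PG); the script itself is just an illustration.

#!/usr/bin/env python
# Summarise PG sizes per pool from `ceph pg dump --format json`, to check
# the "half the pools hold most of the data" theory before bumping pg_num.
# Assumes the Jewel-era JSON layout: pg_stats[*].stat_sum.num_bytes.
import json
import subprocess
from collections import defaultdict

raw = subprocess.check_output(['ceph', 'pg', 'dump', '--format', 'json'])
pg_stats = json.loads(raw.decode('utf-8'))['pg_stats']

per_pool = defaultdict(list)
for pg in pg_stats:
    pool_id = pg['pgid'].split('.')[0]        # pgid looks like "2.1a"
    per_pool[pool_id].append(pg['stat_sum']['num_bytes'])

# Pools sorted by total data, largest first.
for pool_id, sizes in sorted(per_pool.items(), key=lambda kv: -sum(kv[1])):
    print('pool %s: %d PGs, %.1f GiB total, largest PG %.1f GiB' % (
        pool_id, len(sizes),
        sum(sizes) / float(1 << 30),
        max(sizes) / float(1 << 30)))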

I am also curious whether, from a performance point of view, we are better off with more PGs to reduce PG lock contention and so on?

Cheers,
 Adrian




