Re: Number of pgs

If you only have one pool of significant size, then your PG ratio is around 40 (2048 PGs × 3 replicas / 153 OSDs ≈ 40) -- IMHO too low.
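
If you want to sanity-check that ratio against the live cluster, here's a rough sketch -- it assumes your big pool is named `data` (hypothetical; substitute your pool name) and that `jq` is installed:

    # PG ratio = pg_num * replica size / number of up OSDs
    PGS=$(ceph osd pool get data pg_num -f json | jq .pg_num)
    SIZE=$(ceph osd pool get data size -f json | jq .size)
    OSDS=$(ceph osd stat -f json | jq .num_up_osds)
    echo "PG ratio: $(( PGS * SIZE / OSDS ))"   # here: 2048 * 3 / 153 = 40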

If you're using HDDs, I personally might set it to 8192; if you're using NVMe SSDs, arguably 16384 -- assuming that your OSDs are more or less the same size.
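
For reference, the change itself is a one-liner. A sketch only, again using the hypothetical pool name `data`; on Nautilus and later, pgp_num follows pg_num automatically and the split is applied gradually:

    ceph osd pool set data pg_num 8192
    # On pre-Nautilus releases you'd also need to bump pgp_num yourself:
    # ceph osd pool set data pgp_num 8192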


`ceph osd df` will show, in the PGS column toward the right, how many PG replicas are on each OSD.
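
If you only want the per-OSD counts rather than the whole table, something like this should work (again assumes `jq`):

    # Print "osd id <tab> PG replica count" for each OSD
    ceph osd df -f json | jq -r '.nodes[] | "\(.id)\t\(.pgs)"'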

> On Mar 5, 2024, at 14:50, Nikolaos Dandoulakis <nick.dan@xxxxxxxx> wrote:
> 
> Hi Anthony,
> 
> I should have said, it’s replicated (3)
> 
> Best,
> Nick
> 
> Sent from my phone, apologies for any typos!
> From: Anthony D'Atri <aad@xxxxxxxxxxxxxx>
> Sent: Tuesday, March 5, 2024 7:22:42 PM
> To: Nikolaos Dandoulakis <nick.dan@xxxxxxxx>
> Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> Subject: Re: Number of pgs
>  
> 
> Replicated or EC?
> 
> > On Mar 5, 2024, at 14:09, Nikolaos Dandoulakis <nick.dan@xxxxxxxx> wrote:
> >
> > Hi all,
> >
> > Pretty sure this is not the first time you've seen a thread like this.
> >
> > Our cluster consists of 12 nodes / 153 OSDs, with 1.2 PiB used and 708 TiB of 1.9 PiB available.
> >
> > The data pool has 2048 PGs -- exactly the same number as when the cluster was created. We have no issues with the cluster; everything runs as expected and very efficiently. We serve about 1000 clients. The question is: should we increase the number of PGs? If you think so, what is a sensible number to go to? 4096? More?
> >
> > I eagerly await your response.
> >
> > Best,
> > Nick
> >
> > P.S. Yes, autoscaler is off :)
> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



