Re: Any disadvantage to go above the 100pg/osd or 4osd/disk?

Good to know, thank you. So in that case it's worth increasing those values during recovery, right?

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Friday, September 23, 2022 1:19 PM
To: ceph-users@xxxxxxx
Subject:  Re: Any disadvantage to go above the 100pg/osd or 4osd/disk?


Hi,

I can't speak from the developers' perspective, but we discussed this just recently internally and with a customer. We doubled the number of PGs on one of our customer's data pools from around 100 to 200 PGs/OSD (HDDs with RocksDB on SSDs). We're still waiting for the final conclusion on whether performance has actually improved, but it seems to work as expected, and we would probably double it again if the PG size/objects per PG started to affect performance again. You just need to be aware of the mon_max_pg_per_osd and osd_max_pg_per_osd_hard_ratio settings in case of recovery. Otherwise we don't see any real issue with 200 or 400 PGs/OSD if the nodes can handle it.
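
For reference, this is roughly how those two limits could be checked and raised at runtime (a minimal sketch assuming a release with the "ceph config" interface; the numbers are purely illustrative, not recommendations):

  # Show the current per-OSD PG ceiling and the hard ratio
  # (defaults vary by release)
  ceph config get mon mon_max_pg_per_osd
  ceph config get osd osd_max_pg_per_osd_hard_ratio

  # Illustrative values only: raise the ceiling so that PGs mapped onto
  # an OSD during recovery/backfill don't exceed
  # mon_max_pg_per_osd * osd_max_pg_per_osd_hard_ratio, which would
  # leave them stuck activating
  ceph config set global mon_max_pg_per_osd 400
  ceph config set osd osd_max_pg_per_osd_hard_ratio 3.0

If the higher values are only needed while recovery/backfill temporarily pushes some OSDs over the ceiling, the overrides can be dropped again afterwards with "ceph config rm".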

Regards,
Eugen

Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:

> Hi,
>
> My question is: is there any technical limit to having 8 OSDs/SSD and
> 100 PGs on each of them, if the memory and CPU resources are available
> (8 GB memory/OSD and 96 vcores)?
> The IOPS and bandwidth on the disks are very low, so I don't see any
> issue with going this route.
>
> In my clusters I'm using 15.3 TB SSDs. We have more than 2 billion
> objects in each of the 3 clusters.
> The bottleneck is the PGs/OSD, so the last time a serious issue of mine
> was solved, the solution was to bump the PGs of the data pool to the
> allowed maximum with a 4:2 EC profile.
>
> I'm also curious about the developers' opinion.
>
> Thank you,
> Istvan
>



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



