Re: 64k buckets for 1 user

Hi,

just last week there was a thread [1] about a large-omap warning for a single user with 400k buckets. There's no resharding for that user metadata object (though with 64k buckets you would stay under the default 200k threshold), so that's the downside, I guess. I can't tell what other impacts that may have.

Regards,
Eugen

[1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/7LTCHCLH5ACTP7TYDSWOW3S3RJPGWXIY/

Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:

Hi,

We are in a transition where I'd like to ask a user who stores 2B objects in 1 bucket to split them up somehow. To make this future proof and avoid storing a huge number of objects in a single bucket, we estimate we would need to create 65xxx buckets.

Is there anybody aware of any issue with this amount of buckets please?
I guess it's better to split into multiple buckets rather than keep one gigantic bucket.
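One common way to split a single huge bucket into ~65k buckets is to shard object keys deterministically by a hash prefix, so no lookup table is needed on reads. Below is a minimal sketch of that idea; the `bucket_for_key` helper, the `data-` bucket naming, and the choice of MD5 are assumptions for illustration, not anything from the thread:

```python
import hashlib

NUM_BUCKETS = 65536  # 2^16, roughly the 65xxx buckets mentioned above

def bucket_for_key(key: str, prefix: str = "data") -> str:
    """Deterministically map an object key to one of NUM_BUCKETS buckets
    using the first 16 bits of its MD5 digest."""
    shard = int.from_bytes(hashlib.md5(key.encode()).digest()[:2], "big")
    return f"{prefix}-{shard:04x}"

# The same key always maps to the same bucket, so clients can compute
# the target bucket on both writes and reads without any extra state.
print(bucket_for_key("images/2023/07/cat.jpg"))
```

With 2B objects spread evenly over 65536 buckets, each bucket holds roughly 30k objects, which is comfortably small per bucket index.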

Thank you for the advice.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




