On 8/13/19 3:51 PM, Paul Emmerich wrote:
> On Tue, Aug 13, 2019 at 10:04 PM Wido den Hollander <wido@xxxxxxxx> wrote:
>> I just checked an RGW-only setup. 6TB drive, 58% full, 11.2GB of DB in
>> use. No slow db in use.
>
> random rgw-only setup here: 12TB drive, 77% full, 48GB metadata and
> 10GB omap for index and whatever.
> That's 0.5% + 0.1%. And that's a setup that's using mostly erasure
> coding and small-ish objects.
>
>> I've talked with many people from the community and I don't see an
>> agreement for the 4% rule.
>
> agreed, 4% isn't a reasonable default.
> I've seen setups with even 10% metadata usage, but these are weird
> edge cases with very small objects on NVMe-only setups (obviously
> without a separate DB device).
>
> Paul
I agree, and I did quite a bit of the early space usage analysis. I
suspect someone well-meaning was trying to give users a simple ratio to
target that was big enough to handle the majority of use cases. The
problem is that reality isn't that simple, and one-size-fits-all doesn't
really work here.
For RBD you can usually get away with far less than 4%; a small
fraction of that is often sufficient. For tiny (say 4K) RGW objects
(especially objects with very long names!) you can potentially end up
using significantly more than 4%. Unfortunately there's no really good
way for us to normalize this so long as RGW is using OMAP to store
bucket indexes. I think the best we can do in the long run is make it
much clearer how space is being used on the block/db/wal devices and
make it easier for users to shrink or grow the amount of "fast" disk
they have on an OSD.
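
To make that concrete, here's a rough back-of-envelope sketch in Python.
The per-object metadata costs are made-up illustrative numbers, not
measurements; real onode and bucket-index entry sizes depend on name
length, xattrs, and RocksDB overhead. The point is just that a roughly
fixed per-object cost makes the metadata ratio scale inversely with
object size:

    #!/usr/bin/env python3
    # Illustrative only: per-object metadata costs below are assumptions,
    # not measured values.

    def metadata_ratio(object_size_bytes, onode_bytes=900, omap_entry_bytes=0):
        """Estimated metadata bytes as a fraction of object data bytes."""
        return (onode_bytes + omap_entry_bytes) / object_size_bytes

    # Typical 4 MiB RBD object: metadata is a tiny fraction of the data.
    print("4 MiB RBD object: {:.3%}".format(metadata_ratio(4 * 1024 * 1024)))

    # 4 KiB RGW object with a long name (large bucket index omap entry):
    # the ratio blows up well past any fixed percentage rule.
    print("4 KiB RGW object: {:.1%}".format(
        metadata_ratio(4 * 1024, omap_entry_bytes=400)))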
Alternatively, we could put bucket indexes into rados objects instead of
OMAP, but that would be a pretty big project (with its own challenges,
but potentially also with rewards).
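
In the meantime, one way to get a rough picture of how the DB/WAL space
on an OSD is actually being used today is to pull the bluefs counters
out of "ceph daemon osd.<id> perf dump" on the OSD host. Something along
these lines (a sketch; the exact counter names can vary a bit between
releases):

    #!/usr/bin/env python3
    # Sketch: print BlueFS db/wal/slow usage for one OSD via the admin socket.
    # Assumes the "bluefs" perf counter section exposes *_total_bytes and
    # *_used_bytes, and that this runs on the host where the OSD lives.
    import json
    import subprocess
    import sys

    def bluefs_usage(osd_id):
        out = subprocess.check_output(
            ["ceph", "daemon", "osd.{}".format(osd_id), "perf", "dump"])
        bluefs = json.loads(out)["bluefs"]
        for dev in ("db", "wal", "slow"):
            total = bluefs.get("{}_total_bytes".format(dev), 0)
            used = bluefs.get("{}_used_bytes".format(dev), 0)
            if total:
                print("osd.{} {:>4}: {:.1f} GiB used of {:.1f} GiB ({:.1%})".format(
                    osd_id, dev, used / 2**30, total / 2**30, used / total))

    if __name__ == "__main__":
        bluefs_usage(sys.argv[1] if len(sys.argv) > 1 else "0")

That at least tells you how much of the fast device RocksDB is really
consuming and whether anything has spilled over to the slow device.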
Mark