On 2024-06-10 21:37, Anthony D'Atri wrote:
You're right about that, but we use Ceph mainly for RBD, and it performs
'good enough' for our RBD workload.
You use RBD for archival?
No, storage for (light-weight) virtual machines.
I'm surprised that it's enough; I've seen HDDs fail miserably in that
role.
The (CPU) load on the OSD nodes is quite low. Our MON/MGR/RGW aren't
hosted on the OSD nodes and are running on modern hardware.
You didn't list additional nodes, so I assumed they were colocated. You
might still do well to have a larger number of RGWs, wherever they run.
RGWs often scale better horizontally than vertically.
Good to know. I'll check if adding more RGW nodes is possible.
To be clear, you don't need more nodes. You can add RGWs to the ones
you already have. You have 12 OSD nodes - why not put an RGW on each?
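If the cluster happens to be managed by cephadm (which shipped with Octopus),
colocating them is mostly a placement change. A rough sketch; the realm/zone
and hostnames below are placeholders, and the exact arguments differ a bit
between releases, so check "ceph orch apply rgw -h" on your version:

    # Hypothetical sketch: one RGW per OSD host (list all 12 OSD hosts).
    ceph orch apply rgw default default --placement="osd-node-01 osd-node-02 osd-node-03"
    # See where the daemons ended up:
    ceph orch ps

If you deploy with ceph-ansible or by hand instead, the same idea applies with
extra [client.rgw.*] instances on the OSD hosts.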
Might be an option; I just don't like the idea of hosting multiple components
on the same nodes. But I'll consider it.
There are still serializations in the OSD and PG code. You have 240
OSDs, does your index pool have *at least* 256 PGs?
The index, like the data pool, has 256 PGs.
To be clear, that means whatever.rgw.buckets.index?
No, sorry, my bad: .index is 32 and .data is 256.
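32 PGs replicated across 240 OSDs means most OSDs carry no index PG at all,
which fits the low obj/s symptom. A sketch of checking and raising it toward
the ~256 suggested above; the pool name is a placeholder for your zone's
index pool, and since Nautilus the split is applied gradually by the cluster:

    # Find the real pool name first:
    ceph osd lspools
    ceph osd pool get default.rgw.buckets.index pg_num
    ceph osd pool set default.rgw.buckets.index pg_num 256
    # If the pg_autoscaler manages this pool it may push back; review:
    ceph osd pool autoscale-status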
You might also disable Nagle on the RGW nodes.
I need to look up what exactly that is and does.
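Short version: Nagle's algorithm batches small TCP writes to save packets, at
the cost of latency, which hurts exactly the small-object request pattern
you're benchmarking. Disabling it means setting TCP_NODELAY on the sockets.
For radosgw it's a frontend knob; a sketch, assuming the beast frontend (the
Octopus default) supports the tcp_nodelay option on your build and using a
placeholder instance name:

    # ceph.conf on each RGW node:
    [client.rgw.gw1]
        rgw_frontends = beast port=7480 tcp_nodelay=1

    # then restart the gateway (unit name follows your deployment's convention):
    systemctl restart ceph-radosgw@rgw.gw1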
It depends on the concurrency setting of Warp.
It looks like objects/s is the bottleneck, not throughput.
Max memory usage is about 80-90 GB per node. CPUs are mostly
idle.
Is it reasonable to expect more IOPS / objects/s from RGW with my
setup? At the moment I'm not able to find the bottleneck that is
causing the low obj/s.
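One cheap experiment before digging further: sweep the Warp concurrency and
see whether obj/s keeps climbing. If it scales until the HDD-backed pools
saturate, the gateways aren't the limit. A sketch; endpoint, credentials,
object size and duration are all placeholders:

    warp put --host=rgw.example.net:7480 --access-key=AK --secret-key=SK \
        --obj.size=4KiB --duration=3m --concurrent=64
    # repeat with --concurrent=128, 256, ... and compare obj/s per run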
HDDs are a false economy.
Got it :)
Ceph version is 15.2.
Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx