On 11/8/21 12:41, Boris Behrens wrote:
> Hi Stefan,
> for a 6:1 or 3:1 ratio we do not have enough slots (I think).
> There is some read, but I don't know if this is a lot.
> client: 27 MiB/s rd, 289 MiB/s wr, 1.07k op/s rd, 261 op/s wr
That does not seem like a lot. Having SSD-based metadata pools might
reduce latency, though.
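
In case it helps, a minimal sketch of how that could look with CRUSH
device classes, assuming your SSDs are registered with the "ssd" device
class (the rule name is just an example, and note that changing a
pool's crush_rule will move its data):

    # Create a replicated CRUSH rule restricted to the "ssd" device class
    # (root "default", failure domain "host"):
    ceph osd crush rule create-replicated replicated-ssd default host ssd

    # Point a metadata pool at that rule, e.g. the bucket index pool:
    ceph osd pool set eu-central-1.rgw.buckets.index crush_rule replicated-ssd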
> Putting them to use for some special rgw pools also came to my mind.
> But would this make a lot of difference?
> POOL                             ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
> .rgw.root                         1    64  150 KiB      142   26 MiB      0     42 TiB
> eu-central-1.rgw.control          2    64      0 B        8      0 B      0     42 TiB
> eu-central-1.rgw.data.root        3    64  1.2 MiB    3.96k  743 MiB      0     42 TiB
> eu-central-1.rgw.gc               4    64  329 MiB      128  998 MiB      0     42 TiB
> eu-central-1.rgw.log              5    64  939 KiB      370  3.1 MiB      0     42 TiB
> eu-central-1.rgw.users.uid        6    64   12 MiB    7.10k  1.2 GiB      0     42 TiB
> eu-central-1.rgw.users.keys       7    64  297 KiB    7.40k  1.4 GiB      0     42 TiB
> eu-central-1.rgw.meta             8    64  392 KiB       1k  191 MiB      0     42 TiB
> eu-central-1.rgw.users.email      9    64     40 B        1  192 KiB      0     42 TiB
> eu-central-1.rgw.buckets.index   10    64   22 GiB    2.55k   67 GiB   0.05     42 TiB
> eu-central-1.rgw.buckets.data    11  2048  318 TiB  132.31M  961 TiB  88.38     42 TiB
> eu-central-1.rgw.buckets.non-ec  12    64  467 MiB   13.28k  2.4 GiB      0     42 TiB
> eu-central-1.rgw.usage           13    64  767 MiB       32  2.2 GiB      0     42 TiB
> I would have put the rgw.buckets.index and maybe the rgw.meta pools on
> them, but it looks like a waste of space: having a 2 TB OSD in every
> chassis that only handles 23 GB of data.
It does waste a lot of space, but it might be worth it if performance
improves a lot. You might also be able to separate small objects from
large objects based on placement targets / storage classes [1]. This
would allow you to store small objects on SSD. Those might be more
latency-sensitive than large objects anyway?
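
Roughly along the lines of the following (untested; the storage class
name, zonegroup name, and data pool are just examples, see [1] for the
full procedure):

    # Register a new storage class under the default placement target:
    radosgw-admin zonegroup placement add \
        --rgw-zonegroup default \
        --placement-id default-placement \
        --storage-class SSD-SMALL

    # Back it with an SSD-backed data pool in your zone:
    radosgw-admin zone placement add \
        --rgw-zone eu-central-1 \
        --placement-id default-placement \
        --storage-class SSD-SMALL \
        --data-pool eu-central-1.rgw.buckets.data.ssd

S3 clients would then select it per object via the
"x-amz-storage-class: SSD-SMALL" request header.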
Gr. Stefan
[1]: https://docs.ceph.com/en/latest/radosgw/placement/