Re: RadosGW (Ceph Object Gateway) Pools

Hi,

I think it's default.rgw.buckets.index -- for us it reaches 2K-6K IOPS with an
index size of 23 GB.
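
In case you want to check it on your own cluster, a quick sketch (the pool
name assumes the default zone):

    # Watch per-pool client I/O (read/write ops) as it happens
    ceph osd pool stats default.rgw.buckets.index

    # Per-pool usage, including OMAP (where the bucket index lives)
    ceph df detail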

Regards
Manuel



-----Original Message-----
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On behalf of
DHilsbos@xxxxxxxxxxxxxx
Sent: Wednesday, August 7, 2019 1:41 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject:  RadosGW (Ceph Object Gateway) Pools

All;

Based on the PG Calculator on the Ceph website, I have this list of pools to
pre-create for my Object Gateway (a sketch of the creation commands follows
the list):
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.intent-log
default.rgw.meta
default.rgw.usage
default.rgw.users.keys
default.rgw.users.email
default.rgw.users.uid
default.rgw.buckets.extra
default.rgw.buckets.index
default.rgw.buckets.data
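
Here's a sketch of what I think the pre-creation looks like for two of them,
using the classic pg_num/pgp_num syntax; the PG counts are placeholders, the
real values come from the calculator:

    # Pre-create pools with explicit PG counts (placeholder values)
    ceph osd pool create default.rgw.buckets.index 32 32
    ceph osd pool create default.rgw.buckets.data 512 512

    # Tag them for RGW so the cluster knows the application
    ceph osd pool application enable default.rgw.buckets.index rgw
    ceph osd pool application enable default.rgw.buckets.data rgw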

I have a limited number of SSDs, and I plan to create CRUSH rules that limit
pools to either HDD or SSD.  My HDDs have their block.db on NVMe devices.
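
Here's the sketch I have in mind for those rules, assuming the OSDs already
report the hdd/ssd device classes; the rule names are just examples:

    # One replicated rule per device class, root 'default', host failure domain
    ceph osd crush rule create-replicated replicated-ssd default host ssd
    ceph osd crush rule create-replicated replicated-hdd default host hdd

    # Pin a pool to a rule, e.g. keep the bucket index on SSD
    ceph osd pool set default.rgw.buckets.index crush_rule replicated-ssd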

I intend to use the SSDs primarily to back RBD for iSCSI, to support
virtualization, but I'm not opposed to using some of the space to speed up
RGW.

Which pool(s) would most improve RGW performance if placed on SSDs?

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
