Slow performance on our Ceph Cluster

Hi,

we're having big performance issues on our OCP - Ceph rack.
It is designed around 3 storage nodes, each with:

- 2 x Haswell-EP (E5-2620v3)
- 128 GB DDR4
- 4 x 240 GB SSD
- 2 x 10G Mellanox X3

Each node serves 30 x 4 TB SAS drives (JBOD) attached via 2 mini-SAS connectors.
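
I have not yet confirmed where Fuel put the OSD journals (on the SSDs or colocated on the SAS drives). If it is useful, this is roughly what I would run on one of the store nodes to check, assuming the default ceph-disk layout and journal symlink locations:

# list the disks/partitions and their Ceph roles as ceph-disk sees them
ceph-disk list

# show which device each OSD journal symlink points to
ls -l /var/lib/ceph/osd/ceph-*/journal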

So, to summarize:

3 nodes
90 OSDs
18 pools

The Ceph setup was done with Fuel's default configuration.
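
If someone wants to see the exact values Fuel applied, I can dump the running OSD configuration through the admin socket; something along these lines (osd.0 is just an example, and I am assuming the default admin socket location):

# show the journal/filestore related settings of one OSD
ceph daemon osd.0 config show | grep -E 'journal|filestore'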

Any idea how we could improve performance? This really impacts our OpenStack MOS 9.0 Mitaka infrastructure: VM spawning can take up to 15 minutes...
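
In case raw numbers help, I can re-run a quick rados bench against the scbench pool listed below (I assume it was created for benchmarking earlier); something like this, with an arbitrary 60 second duration:

# 60s write benchmark, keeping the objects for the read test
rados bench -p scbench 60 write --no-cleanup

# sequential read benchmark over the objects written above
rados bench -p scbench 60 seq

# remove the benchmark objects afterwards
rados -p scbench cleanup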

These are the pools we have:

pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 '.rgw' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 2 flags hashpspool stripe_width 0
pool 2 'images' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 1021 flags hashpspool stripe_width 0
removed_snaps [1~6,8~6,f~2,12~1,15~8,1f~7,27~1,29~2,2c~8,35~a]
pool 3 'volumes' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 4096 pgp_num 4096 last_change 1006 flags hashpspool stripe_width 0
removed_snaps [1~b]
pool 4 'backups' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 516 flags hashpspool stripe_width 0
pool 5 'compute' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 1018 flags hashpspool stripe_width 0
removed_snaps [1~27]
pool 6 '.rgw.root' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 520 flags hashpspool stripe_width 0
pool 7 '.rgw.control' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 522 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 8 '.rgw.gc' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 524 flags hashpspool stripe_width 0
pool 9 '.users.uid' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 526 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 10 '.users' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 528 flags hashpspool stripe_width 0
pool 12 '.rgw.buckets.index' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 568 flags hashpspool stripe_width 0
pool 13 '.rgw.buckets' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 570 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 14 '.rgw.buckets.extra' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 608 flags hashpspool stripe_width 0
pool 15 '.log' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 614 flags hashpspool stripe_width 0
pool 17 'scbench' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 631 flags hashpspool stripe_width 0
pool 18 'test' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 813 flags hashpspool stripe_width 0
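
One thing that already strikes me when I add this up: the pg_num values above total 10212 PGs, which with size 3 across 90 OSDs works out to (10212 * 3) / 90, roughly 340 PGs per OSD, quite a bit above the ~100-200 per OSD that is usually recommended. The total can be checked with a rough awk one-liner against the same ceph osd dump output:

# sum pg_num over all pools
ceph osd dump | awk '/^pool / {for (i = 1; i < NF; i++) if ($i == "pg_num") sum += $(i+1)} END {print sum}'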



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
