Re: Giant to Jewel poor read performance with Rados bench

Hi David,

We haven't done any direct Giant to Jewel comparisons, but I wouldn't expect a drop that big, even for cached tests. How long are you running the test, and how large are the IOs? Did you upgrade anything else at the same time as Ceph?
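
(For reference, a minimal sketch of how duration and IO size are set with
rados bench; the pool name and values here are only examples, not
necessarily what you ran:)

# 60 s of 4 MB writes, kept on disk so the read tests have objects to fetch
rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup

# Sequential and random read passes of the same length and concurrency
rados bench -p testpool 60 seq -t 16
rados bench -p testpool 60 rand -t 16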

Mark

On 08/06/2016 03:38 PM, David wrote:
Hi All

I've just installed Jewel 10.2.2 on hardware that had previously been
running Giant. Rados bench with the default rand and seq tests is giving
me approx 40% of the throughput I used to achieve. On Giant I would get
~1000MB/s (so probably limited by the 10GbE interface); now I'm getting
300 - 400MB/s.

I can see there is no activity on the disks during the bench, so the data
is all coming out of cache. The cluster isn't doing anything else during
the test. I'm fairly sure my network is sound; I've done the usual
testing with iperf etc. The write test seems about the same as I used to
get (~400MB/s).
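
(Roughly the kind of iperf check I mean, assuming iperf3 and a
placeholder hostname:)

# On one node, start a server
iperf3 -s

# From another node, push several parallel streams over the 10GbE link
iperf3 -c node1 -P 4 -t 30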

This was a fresh install rather than an upgrade.

Are there any gotchas I should be aware of?

Some more details:

OS: CentOS 7
Kernel: 3.10.0-327.28.2.el7.x86_64
5 nodes (each with 10 * 4TB SATA drives and 2 * Intel DC 3700 SSDs
partitioned for journals).
10GbE public network
10GbE cluster network
MTU 9000 on all interfaces and switch (a jumbo-frame check is sketched
after this list)
Ceph installed from ceph repo
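
(A quick way to confirm jumbo frames pass end to end, with a placeholder
hostname; 8972 bytes = 9000 minus the 28-byte IP/ICMP headers:)

# Non-fragmentable 8972-byte pings only succeed if the whole path honours MTU 9000
ping -M do -s 8972 -c 3 node2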

Ceph.conf is pretty basic (IPs, hosts, etc. omitted); a quick check of
the values the daemons actually picked up is sketched after the list:

filestore_xattr_use_omap = true
osd_journal_size = 10000
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 4096
osd_pool_default_pgp_num = 4096
osd_crush_chooseleaf_type = 1
max_open_files = 131072
mon_clock_drift_allowed = .15
mon_clock_drift_warn_backoff = 30
mon_osd_down_out_interval = 300
mon_osd_report_timeout = 300
mon_osd_full_ratio = .95
mon_osd_nearfull_ratio = .80
osd_backfill_full_ratio = .80
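
(One way to double-check what a running OSD actually picked up; osd.0 is
just an example and this assumes you run it on that OSD's host:)

# Dump the live config of one OSD via its admin socket and pick out a setting
ceph daemon osd.0 config show | grep osd_journal_size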

Thanks
David



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
