Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

On 02.07.2014 16:00, Gregory Farnum wrote:
Yeah, it's fighting for attention with a lot of other urgent stuff. :(

Anyway, even if you can't look up any details or reproduce at this
time, I'm sure you know what shape the cluster was (number of OSDs,
running on SSDs or hard drives, etc), and that would be useful
guidance. :)

Sure

Number of OSDs: 24
Each OSD has an SSD; each SSD was tested with fio before installing Ceph and reached roughly 70,000 IOPS with 4k random writes and 580 MB/s with sequential 1 MB writes (commands roughly like the sketch below)

Single Xeon E5-1620 v2 @ 3.70GHz

48GB RAM
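
For reference, a minimal sketch of the kind of raw-device fio runs behind those baseline numbers; the device name and runtime are examples, not the exact commands used back then:

  # 4k random-write IOPS on the raw SSD (destructive; device name is an example)
  fio --name=randwrite --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --bs=4k --rw=randwrite --iodepth=32 --runtime=60 --group_reporting

  # 1MB-block sequential-write bandwidth on the same device
  fio --name=seqwrite --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --bs=1M --rw=write --iodepth=32 --runtime=60 --group_reporting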

Stefan

-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Wed, Jul 2, 2014 at 6:12 AM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:

On 02.07.2014 15:07, Haomai Wang wrote:
Could you give some perf counter from rbd client side? Such as op latency?

Sorry, I don't have any counters. As this mail went unanswered for some
days, I thought nobody had an idea or could help.
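
For a future run, client-side op latency counters could be pulled from the librbd admin socket; a sketch, assuming an admin socket is enabled for the fio client in ceph.conf (the socket path below is only an example):

  # [client] section of ceph.conf on the fio host, e.g.:
  #   admin socket = /var/run/ceph/$cluster-$name.$pid.asok

  # While fio is running, dump the client's perf counters (includes op latencies)
  ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump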

Stefan

On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
On 02.07.2014 00:51, Gregory Farnum wrote:
On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
Hi Greg,

On 26.06.2014 02:17, Gregory Farnum wrote:
Sorry we let this drop; we've all been busy traveling and things.

There have been a lot of changes to librados between Dumpling and
Firefly, but we have no idea what would have made it slower. Can you
provide more details about how you were running these tests?

It's just a normal fio run:

fio --ioengine=rbd --bs=4k --name=foo --invalidate=0 \
    --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor \
    --runtime=90 --numjobs=32 --direct=1 --group_reporting

Running it once with the Firefly libraries and once with the Dumpling libraries.
The target is always the same pool on a Firefly Ceph cluster.
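
A minimal sketch of how the two runs could be switched between library versions via LD_LIBRARY_PATH; the library directories below are hypothetical, assuming the Dumpling and Firefly builds of librbd1/librados2 are unpacked side by side:

  # Benchmark against the Dumpling client libraries
  LD_LIBRARY_PATH=/opt/librbd-dumpling fio --ioengine=rbd --bs=4k --name=foo \
      --invalidate=0 --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 \
      --pool=teststor --runtime=90 --numjobs=32 --direct=1 --group_reporting

  # Identical run against the Firefly client libraries
  LD_LIBRARY_PATH=/opt/librbd-firefly fio --ioengine=rbd --bs=4k --name=foo \
      --invalidate=0 --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 \
      --pool=teststor --runtime=90 --numjobs=32 --direct=1 --group_reporting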

What's the backing cluster you're running against? What kind of CPU
usage do you see with both? 25k IOPS is definitely getting up there,
but I'd like some guidance about whether we're looking for a reduction
in parallelism, or an increase in per-op costs, or something else.
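
Even something as simple as per-process CPU figures for fio during both runs would help tell those apart; a quick sketch, assuming the sysstat tools are installed:

  # Sample CPU usage of the running fio processes once per second
  pidstat -u -C fio 1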

Hi Greg,

I don't have that test cluster anymore; it had to go into production
running Dumpling.

So I can't tell you.

Sorry.

Stefan

-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com






