Re: how to debug slow rbd block device


 



On 22.05.2012 21:35, Greg Farnum wrote:
What does your test look like? With multiple large IOs in flight we can regularly fill up a 1GbE link on our test clusters. With smaller or fewer IOs in flight performance degrades accordingly.

iperf shows 950 Mbit/s, so the raw network is OK (from KVM host to OSDs).

sorry:
dd if=/dev/zero of=test bs=4M count=1000; dd if=test of=/dev/null bs=4M count=1000;
1000+0 records in
1000+0 records out
4194304000 bytes (4,2 GB) copied, 99,7352 s, 42,1 MB/s

1000+0 records in
1000+0 records out
4194304000 bytes (4,2 GB) copied, 47,4493 s, 88,4 MB/s
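[Editorial aside, not part of the original thread: a dd run like the one above goes through the guest page cache, so the write figure includes pages buffered in RAM and the read figure can be partly served from cache. A direct-I/O variant (assuming GNU coreutils dd, which supports oflag=direct/iflag=direct) is a sketch of how to measure the rbd device itself rather than guest memory:]

```shell
# Sketch, assuming GNU dd: O_DIRECT bypasses the guest page cache,
# so throughput reflects the rbd block device, not buffered RAM.
dd if=/dev/zero of=test bs=4M count=1000 oflag=direct
dd if=test of=/dev/null bs=4M count=1000 iflag=direct
```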

Greets
Stefan

On Tuesday, May 22, 2012 at 5:45 AM, Stefan Priebe - Profihost AG wrote:

Hi list,

my ceph block testcluster is now running fine.

Setup:
4x Ceph servers
- 3x mon with /mon on a local OS SATA disk
- 4x OSD with /journal on tmpfs and /srv on an Intel SSD

All of them use a 2x 1 Gbit/s LACP trunk.

1x KVM host system (2x 1 Gbit/s LACP trunk)

With one KVM guest I do not get more than 40 MB/s, so my network link is
only at about 40% of 1 Gbit/s.

Is this expected? If not, where can I start searching / debugging?

Thanks,
Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html




