Re: speedup ceph / scaling / find the bottleneck

On 7/1/12 4:01 PM, Stefan Priebe wrote:
Hello list,
Hello Sage,

I've made some further tests.

Sequential 4k writes over 200GB: 300% CPU usage of the kvm process, 34712 iops

Random 4k writes over 200GB: 170% CPU usage of the kvm process, 5500 iops

When I make random 4k writes over 100MB: 450% CPU usage of the kvm process
and !! 25059 iops !!


When you say 100MB vs 200GB, do you mean the total amount of data that is written for the test? Also, are these starting out on a fresh filesystem? Recently I've been working on tracking down an issue where small write performance is degrading as data is written. The tests I've done have been for sequential writes, but I wonder if the problem may be significantly worse with random writes.
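
For reference, the runs above are essentially random 4k writes confined to a fixed offset span. The mail doesn't say which benchmark tool was used (fio would be the usual choice), so the following is only a minimal Python sketch of that access pattern; the device path, the span, and the queue depth of 1 are illustrative assumptions, and the absolute iops it reports won't be comparable to the figures quoted above.

#!/usr/bin/env python3
# Minimal sketch of a bounded-span 4k random-write test (assumed setup,
# not the tool behind the numbers in this thread).
import mmap
import os
import random
import time

BLOCK_SIZE = 4096               # 4k writes, as in the tests above
SPAN = 100 * 1024 * 1024        # offset range to hit: compare 100MB vs 200GB
DURATION = 10                   # seconds to run
PATH = "/dev/vdb"               # hypothetical RBD-backed disk inside the guest

def main():
    # O_DIRECT bypasses the guest page cache so the writes actually hit the
    # rbd path; it needs an aligned buffer, which the anonymous mmap provides.
    fd = os.open(PATH, os.O_WRONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK_SIZE)
    buf.write(os.urandom(BLOCK_SIZE))

    blocks = SPAN // BLOCK_SIZE
    ops = 0
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        # Pick a random 4k-aligned offset within the configured span.
        os.lseek(fd, random.randrange(blocks) * BLOCK_SIZE, os.SEEK_SET)
        os.write(fd, buf)
        ops += 1
    os.close(fd)
    print(f"{ops / DURATION:.0f} iops, span {SPAN // (1024 * 1024)}MB, qd 1")

if __name__ == "__main__":
    main()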

Random 4k writes over 1GB: 380% CPU usage of the kvm process, 14387 iops

So the range within which the random I/O happens seems to matter, and the
CPU usage just seems to reflect the iops.

So I'm not sure the problem is really in the client rbd driver. Mark, I
hope you can run some tests next week.

I need to get perf set up on our test boxes, but once I do, I'm hoping to follow up on this.
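
In case it helps once perf is available, one way to follow up would be to attach perf to the kvm process for a fixed window while the 4k write test runs, then look at where the CPU time goes. This is only a sketch under my own assumptions (the "kvm" process-name match, the 30-second window, running as root), not something described in this thread:

#!/usr/bin/env python3
# Rough sketch: sample the kvm process with call graphs while the guest
# runs the write test, then print a perf report. Assumed setup, run as root.
import subprocess

def kvm_pids():
    # pgrep -f matches the full command line; "kvm" is assumed to match the
    # qemu/kvm process backing the guest under test.
    out = subprocess.run(["pgrep", "-f", "kvm"],
                         capture_output=True, text=True)
    return [int(pid) for pid in out.stdout.split()]

def profile(pid, seconds=30):
    data = f"perf-{pid}.data"
    # Sample the target pid with call graphs for the given window...
    subprocess.run(["perf", "record", "-g", "-p", str(pid), "-o", data,
                    "--", "sleep", str(seconds)], check=True)
    # ...then summarize which functions the samples landed in.
    subprocess.run(["perf", "report", "--stdio", "-i", data], check=True)

if __name__ == "__main__":
    for pid in kvm_pids():
        profile(pid)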


Greets
Stefan


Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html

