Re: extreme ceph-osd cpu load for rand. 4k write


 



On 11/08/2012 09:45 AM, Stefan Priebe - Profihost AG wrote:
On 08.11.2012 16:01, Sage Weil wrote:
On Thu, 8 Nov 2012, Stefan Priebe - Profihost AG wrote:
Is there any way to find out why a ceph-osd process takes around 10 times more load on rand 4k writes than on 4k reads?

Something like perf or oprofile is probably your best bet. perf can be tedious to deploy, depending on where your kernel is coming from. oprofile seems to be deprecated, although I've had good results with it in the past.
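
For what it's worth, recording a sample usually looks something like the sketch below (the pidof lookup and the 10-second window are only illustrative, not something prescribed in this thread):

    # sample one ceph-osd process, with call graphs, for ~10 seconds
    perf record -g -p $(pidof ceph-osd | awk '{print $1}') -- sleep 10

The -g flag keeps call chains, which makes the later report much more useful for spotting hot paths.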

I've recorded 10s with perf - it is now a 300MB perf.data file. Sadly I've no idea what to do with it next.

Pour yourself a stiff drink! (haha!)

Try just doing a "perf report" in the directory where you've got the data file. Here's a nice tutorial:

https://perf.wiki.kernel.org/index.php/Tutorial
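
A minimal sketch of that step (plain perf usage, nothing Ceph-specific assumed):

    # run in the directory containing perf.data
    perf report
    # or dump a non-interactive text summary
    perf report --stdio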

Also, if you see missing symbols you might benefit by chowning the file to root and running perf report as root. If you still see missing symbols, you may want to just give up and try sysprof.
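
If the ownership/symbols complaint is what you hit, the workaround suggested above would look roughly like this (assuming the default perf.data path):

    # make the data file owned by root, then run the report as root
    chown root:root perf.data
    sudo perf report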


I would love to see where the CPU is spending most of its time. This is on current master?
Yes

 I expect there are still some low-hanging fruit that
can bring CPU utilization down (or even boost iops).
Would be great to find them.

Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


