On Thu, 8 Nov 2012, Stefan Priebe - Profihost AG wrote:
> Is there any way to find out why a ceph-osd process takes around 10 times
> more load on rand 4k writes than on 4k reads?

Something like perf or oprofile is probably your best bet. perf can be
tedious to deploy, depending on where your kernel is coming from. oprofile
seems to be deprecated, although I've had good results with it in the past.
I would love to see where the CPU is spending most of its time.

This is on current master? I expect there is still some low-hanging fruit
that can bring CPU utilization down (or even boost iops).

sage

> Stefan
>
> On 07.11.2012 21:41, Stefan Priebe wrote:
> > Hello list,
> >
> > While benchmarking I was wondering why the ceph-osd load is so
> > extremely high during random 4k write I/O.
> >
> > Here is an example from the benchmark runs:
> >
> > random 4k write: 16,000 iops, 180% CPU load in top for EACH ceph-osd
> > process
> >
> > random 4k read: 16,000 iops, 19% CPU load in top for EACH ceph-osd
> > process
> >
> > seq 4M write: 800 MB/s, 14% CPU load in top for EACH ceph-osd process
> >
> > seq 4M read: 1600 MB/s, 9% CPU load in top for EACH ceph-osd process
> >
> > I can't understand why the load is so EXTREMELY high in this one case.
> >
> > Greets
> > Stefan
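
As a rough illustration of the perf approach suggested above (a sketch only;
it assumes perf is installed and matches your running kernel, and the PID
selection and 30-second sampling window are placeholders):

    # attach to one running ceph-osd and sample call graphs for 30 seconds
    # (pidof may return several PIDs; this just takes the first one)
    perf record -g -p $(pidof ceph-osd | awk '{print $1}') -- sleep 30

    # summarize where CPU time was spent, grouped by process, DSO, and symbol
    perf report --sort comm,dso,symbol

Running the record step while the random 4k write benchmark is in flight
should show which ceph-osd code paths account for the extra CPU time
compared to the read case.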