I see something strange with my tests:

3 nodes (8-core E5420 @ 2.50GHz), 5 OSDs (xfs) per node on 15k drives, journal on tmpfs; KVM guest with cache=writeback or cache=none (same result):

random write test with 4k blocks: 5000 IOPS, CPU 20% idle
sequential write test with 4k blocks: 20000 IOPS, CPU 80% idle (I'm saturating my gigabit link)

So what is the difference inside the OSD between random and sequential writes if the blocks are the same size?

----- Original Message -----
From: "Stefan Priebe - Profihost AG" <s.priebe@xxxxxxxxxxxx>
To: ceph-devel@xxxxxxxxxxxxxxx
Sent: Friday, 29 June 2012 12:46:42
Subject: speedup ceph / scaling / find the bottleneck

Hello list,

I've done some further testing, and the problem is that Ceph doesn't scale for me. I added a 4th OSD server to my existing 3-node OSD cluster. I also reformatted everything so I could start with a clean system.

While doing random 4k writes from two VMs, I see about 8% idle on the OSD servers (single Intel Xeon E5, 8 cores, 3.6GHz). I believe this is the limiting factor, and also the reason why I don't see any improvement from adding OSD servers.

3 nodes: 2 VMs: 7000 IOPS 4k writes; OSDs: 7-15% idle
4 nodes: 2 VMs: 7500 IOPS 4k writes; OSDs: 7-15% idle

Even if the CPU is not the limiting factor, I think it would be really important to lower the CPU usage during 4k writes. The CPU is used only by the ceph-osd process; I see nearly no usage by other processes (just 5% by kworker and 5% by flush).

Could somebody recommend a way to debug this, so we can see where all this CPU time goes?
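For what it's worth, the two tests above can be reproduced with a fio job file along these lines (a sketch only; the filename, size, and runtime are assumptions, adjust them to your setup):

```ini
; hypothetical fio job file comparing 4k random vs sequential writes
; /mnt/rbd/testfile, size, and runtime are placeholders for your environment
[global]
ioengine=libaio
direct=1
bs=4k
size=1g
runtime=60
time_based
filename=/mnt/rbd/testfile

[randwrite-4k]
rw=randwrite
stonewall

[seqwrite-4k]
rw=write
stonewall
```

Run it with `fio jobfile.fio` and compare the reported IOPS of the two jobs. For the CPU question, a starting point might be something like `perf top -p $(pidof ceph-osd)` on one OSD server while the random-write test runs, to see which functions inside ceph-osd are hot (assuming perf is available on your kernel).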
Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html

--
Alexandre Derumier
Systems and Network Engineer
Phone: 03 20 68 88 85
Fax: 03 20 68 90 88
45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris