Hello, re-adding the list.

On Tue, 28 Jun 2016 20:52:51 +0300 George Shuklin wrote:

> On 06/28/2016 06:46 PM, Christian Balzer wrote:
> > Hello,
> >
> > On Tue, 28 Jun 2016 18:23:02 +0300 George Shuklin wrote:
> >
> >> Hello.
> >>
> >> I'm testing different configurations for Ceph.
> >
> > What version...
>
> Jewel.
>
That should be pretty fast by itself; after that, the optimizations
Alexandre mentioned.

> >> I found that OSDs are REALLY hungry for CPU.
> >
> > They can be, but unlikely in your case.
> >
> >> I've created a tiny pool with size 1 and a single OSD on a fast
> >> Intel SSD (2500 series), on an old Dell server (R210) with a
> >> Xeon E3-1230 V2 @ 3.30GHz.
> >
> > At a replication size of 1, a totally unrealistic test scenario.
> >
> > Ignoring that, an Intel SSD Pro 2500 is a consumer SSD and as such
> > almost certainly ill suited for use with Ceph, especially when it
> > comes to journals.
> > Check/google the countless threads about what constitutes SSDs
> > suitable for Ceph usage.
>
> I understand that, but the point is that it was stuck on CPU, not on
> SSD I/O (disk utilization was < 5% according to atop).
>
That makes little to no sense.

> >> I see some horribly low performance and a clear bottleneck at the
> >> ceph-osd process: it consumes about 110% of CPU and giving [...]
> >
> > 110% actual CPU usage?
> > I'd wager a significant amount of that is IOWAIT...
>
> No, it was pure computation, not I/O.
>
> It was a somehow badly created OSD. I've recreated it

Any details on that? So that people searching for a problem like this
in the future can avoid it.

> , and now I'm hitting the limits of SSD performance, with ~900 IOPS
> (at 99% utilization of the SSD and 23% CPU utilization by ceph-osd).
>
That ratio and performance sound more like it, given your SSD model.

Christian

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
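P.S.: Since the "which SSDs are suitable" question comes up constantly:
what disqualifies consumer SSDs for journals is their synchronous write
latency, not their headline IOPS. fio is the proper tool for this, but
if you only have a stock Python 3 on the box, here is a rough,
self-contained probe (Linux only) that mimics journal-style writes:
4k blocks, O_DSYNC, queue depth 1. The file path is just an example,
put it on the SSD under test. A journal-class SSD with power-loss
protection will typically sustain thousands of these per second, a
consumer drive often only a few hundred.

#!/usr/bin/env python3
# Rough journal-style write probe: 4k synchronous writes, queue depth 1.
# A sketch only; fio --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1
# is the usual test. O_DSYNC means every write must be acknowledged by
# stable storage, which is roughly what Ceph journal writes look like.
import os
import time

path = "/mnt/ssd-under-test/probe.dat"   # example path, adjust it
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
block = b"\0" * 4096
deadline = time.monotonic() + 10
writes = 0
while time.monotonic() < deadline:
    # Wrap the offset so the probe stays within ~100MB of the disk.
    os.pwrite(fd, block, (writes % 25600) * 4096)
    writes += 1
os.close(fd)
print("%d sync 4k IOPS" % (writes // 10))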
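P.P.S.: For future readers wondering how to tell "pure computation"
from waiting on the disk: atop and top already split this out, but you
can also sample /proc/stat directly and compare compute time against
iowait over an interval. A minimal sketch (Python 3, Linux; column
layout per proc(5): user nice system idle iowait ...). Keep in mind
iowait is a system-wide approximation, not a per-process number.

#!/usr/bin/env python3
# Sample /proc/stat twice and report how the interval was spent.
import time

def ticks():
    # First line is "cpu user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        return [int(v) for v in f.readline().split()[1:]]

before = ticks()
time.sleep(5)
after = ticks()
delta = [b - a for a, b in zip(before, after)]
total = float(sum(delta))
busy = delta[0] + delta[1] + delta[2]    # user + nice + system
print("compute: %.1f%%  iowait: %.1f%%  idle: %.1f%%"
      % (100 * busy / total, 100 * delta[4] / total,
         100 * delta[3] / total))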