Re: Re: Does this indicate a "CPU bottleneck"?


 



Have you checked CPU usage on the clients?
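
For example, a quick spot check on a client while fio is running (a minimal sketch, assuming the sysstat package is installed; adjust the interval to taste):

  # per-core CPU utilisation, refreshed every 5 seconds
  mpstat -P ALL 5
  # or the overall utilisation picture
  sar -u 5

If a single fio process pins one core near 100%, the client can be the limit rather than the OSDs.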


Also, when you increase the number of OSDs, do you also increase pg_num?
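
If pg_num stays constant while OSDs are added, the new OSDs may not get an even share of the load. Something along these lines could be used to check and adjust it (a minimal sketch; the pool name "rbd" and the value 1024 are placeholders, not a recommendation for your cluster):

  # current value for the pool under test
  ceph osd pool get rbd pg_num
  # raise pg_num and pgp_num together after adding OSDs
  ceph osd pool set rbd pg_num 1024
  ceph osd pool set rbd pgp_num 1024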


Can you provide your fio job config?
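
For reference, a typical small-block random-write run would look something like this (a minimal sketch only, assuming fio's rbd ioengine; the pool and image names are placeholders for whatever your job file actually uses):

  fio --name=4k-randwrite --ioengine=rbd --pool=rbd --rbdname=testimg \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
      --direct=1 --time_based --runtime=60

Block size, iodepth and numjobs change the result a lot, which is why the exact job config matters here.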

----- Original Message -----
From: "许雪寒" <xuxuehan@xxxxxx>
To: "John Spray" <jspray@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, 20 January 2017 07:25:35
Subject: Re: Does this indicate a "CPU bottleneck"?

The network is only about 10% utilized. We also tested the performance with different numbers of clients, and no matter how many clients we added, the result stayed the same.

-----Original Message-----
From: John Spray [mailto:jspray@xxxxxxxxxx]
Sent: 19 January 2017 16:11
To: 许雪寒
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Does this indicate a "CPU bottleneck"?

On Thu, Jan 19, 2017 at 8:51 AM, 许雪寒 <xuxuehan@xxxxxx> wrote: 
> Hi, everyone. 
> 
> 
> 
> Recently, we did some stress tests on Ceph using three machines. We
> measured the IOPS of the whole small cluster with 1 to 8 OSDs per
> machine, and the results are as follows:
> 
> 
> 
> OSD num per machine | fio IOPS
> --------------------+---------
>                   1 | 10k
>                   2 | 16.5k
>                   3 | 22k
>                   4 | 23.5k
>                   5 | 26k
>                   6 | 27k
>                   7 | 27k
>                   8 | 28k
> 
> 
> 
> As shown above, there seems to be some kind of bottleneck once there
> are more than 4 OSDs per machine. Meanwhile, we observed that the
> CPU %idle during the test, shown below, also correlates with the
> number of OSDs per machine.
> 
> 
> 
> OSD num per machine | CPU %idle
> --------------------+----------
>                   1 | 74%
>                   2 | 52%
>                   3 | 30%
>                   4 | 25%
>                   5 | 24%
>                   6 | 17%
>                   7 | 14%
>                   8 | 11%
> 
> 
> 
> It seems that as the number of OSDs per machine increases, the CPU
> idle time decreases, and the rate of decrease is also slowing. Can
> we conclude that the CPU is the performance bottleneck in this test?

Impossible to say without looking at what else was bottlenecked, such as the network or the client. 
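
A minimal sketch of how the network side could be checked on the OSD and client nodes during a run (assuming the sysstat tools are available; interface names will differ per host):

  # per-interface rx/tx throughput every 5 seconds
  sar -n DEV 5
  # compare rxkB/s + txkB/s against the NIC line rate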

John 

> 
> 
> Thank you :)
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



