Re: Ceph 0.94 (and lower) performance on >1 hosts ??


 



Hi,

Well, I think the journaling would still appear in the dstat output, since those are still IOs: even if the user-side bandwidth is indeed cut in half, that should not be the case for the disk IO.
For instance, I just tried a replicated pool for the test and got around 1300MiB/s in dstat for about 600MiB/s in rados bench. I take it that, with replication size=2, there are 2 copies in total, so 1 user IO turns into 2 * [1 replica write + 1 journal write] / number of hosts => 600*2*2/2 = 1200MiB/s of disk IO per host (give or take the approximations)...
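
Just to make that arithmetic explicit, here is the same back-of-the-envelope check as a shell one-liner (the variable names are only illustrative, and the journal factor of 2 assumes filestore journals co-located on the OSD disks):

# client_mibs=600; replicas=2; journal_factor=2; hosts=2
# echo $(( client_mibs * replicas * journal_factor / hosts ))
1200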

Using the dd flag "oflag=sync" indeed lowers the dstat values down to 1100-1300MiB/s, which is still above what Ceph drives with EC pools.
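
For reference, the sort of dd run I mean is something along these lines, pointed at each OSD data disk in parallel (the path and sizes here are just an example):

# dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=4M count=2048 oflag=sync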

I have tried to identify interrupt issues (watching them with the watch command), but I have to say I have failed so far.
The Broadcom card is indeed spreading the load across the CPUs:
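
What I was running was essentially the following, to spot which counters are incrementing and on which CPUs (the 1s interval is arbitrary):

# watch -n1 -d 'egrep "CPU|p2p" /proc/interrupts'
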

# egrep 'CPU|p2p' /proc/interrupts
            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7       CPU8       CPU9       CPU10      CPU11      CPU12      CPU13      CPU14      CPU15
  80:         88    1646372       1508         30      97328          0      10459        270       2715       8753          0      12765       5100       9148       9420          0   PCI-MSI-edge      p2p1
  82:     179710     165107      94684     334842     210219      47403     270330     166877       3516     229043  709844660      16512       5088       2456    1111312      12302   PCI-MSI-edge      p2p1-fp-0
  83:      12454      14073       5571      15196       5282      22301      11522      21299     409258    1302069       1303      79810  705953243       1836      15190     883683   PCI-MSI-edge      p2p1-fp-1
  84:       6463      13994      57006      16200      16778     374815     558398      11902  695554360      94228       1252      18649     825684       7555     731875     190402   PCI-MSI-edge      p2p1-fp-2
  85:     163228     259899     143625     121326     107509     798435     168027     144088      75321      89962      55297  715175665     784356      53961      92153      92959   PCI-MSI-edge      p2p1-fp-3
  86:    2332674    5322679    2070827    2207971    2254005    1748938    3949283    1684674     650085    1409887    2704778     140711     160954     591037    2981286  672487805   PCI-MSI-edge      p2p1-fp-4
  87:      33772     233318     136341      58163     506773     183451   18269706      52425     226509      22150      17026     176203       5942  681346619     270341      87435   PCI-MSI-edge      p2p1-fp-5
  88:   65103573  105514146   51193688   51330824   41771147   61202946   41053735   49301547     181380   73028922      39525     172439     155778     108065  154750931   26348797   PCI-MSI-edge      p2p1-fp-6
  89:   59287698  120778879   43446789   47063897   39634087   39463210   46582805   48786230     342778   82670325     135397     438041     318995    3642955  179107495     833932   PCI-MSI-edge      p2p1-fp-7
  90:       1804       4453       2434      19885      11527       9771      12724       2392        840      12721        439       1166       3354        560      69386       9233   PCI-MSI-edge      p2p2
  92:    6455149    4330072    5820324    5273513   11564571    1838476    2220049    4039978     977482   15351931     949451    1685983     772531    2718101    7531235    1954224   PCI-MSI-edge      p2p2-fp-0

I don't know yet how to check whether there are memory bandwidth/latency (or similar) issues...
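
If anyone has a pointer there, I suppose something like perf stat on the OSD hosts, or the per-node counters from numastat, would be a starting point (just a guess on my side):

# perf stat -a sleep 10
# numastat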

Regards
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


