Re: Tuning

Thanks for this common-sense reply from the "Of course, if you put it that way" department. Would it even make sense to cut back on the number of ports per initiator to reduce congestion, or would a 10 GbE switch upgrade make more sense?


On Wednesday, 12 April 2017, 17:39, Ed Cashin <ed@xxxxxxxxxxxxxxx> wrote:


You can echo into the module parameter files under /sys to change the settings on the fly.
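For example, a sketch of changing aoe_maxout on an initiator (whether a given parameter is writable at runtime depends on the driver version, so check the permissions first; a change may also only apply to newly discovered targets):

```shell
# Sketch: adjust aoe_maxout on the fly, without reloading the aoe module.
param=/sys/module/aoe/parameters/aoe_maxout

if [ -w "$param" ]; then
    cat "$param"            # show the current value first
    echo 16 > "$param"      # try a lower outstanding-command cap
else
    echo "aoe_maxout is not writable at runtime on this kernel" >&2
fi
```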

If you have more total bandwidth on the sum of the initiator ports (16 Gbps) than for the total bandwidth of the target ports (8 Gbps), heavy write workloads will cause congestion on the target side of the data path.
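Concretely, with the port counts from the original message (assuming 1 Gb links throughout):

```shell
# Aggregate bandwidth on each side of the data path, in Gbps.
initiators=4; ports_per_initiator=4; target_ports=8; link_gbps=1

initiator_total=$((initiators * ports_per_initiator * link_gbps))
target_total=$((target_ports * link_gbps))

echo "initiators can offer: ${initiator_total} Gbps"   # 16 Gbps
echo "target can absorb:    ${target_total} Gbps"      # 8 Gbps
# Sustained writes anywhere near 16 Gbps must queue, and eventually
# drop, on the target side.
```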

There's only so much the initiator can do to compensate. 

"Kickme" means that the driver wanted to send more commands to the target but found that the maximum number of commands was already waiting for a response. It will try again later, after being kicked by the periodic timer handler routine.

On Mar 31, 2017, at 1:24 PM, ray klassen <julius_ahenobarbus@xxxxxxxxxxx> wrote:

After several years of use on an AoE installation, we seem to be running into performance issues. My theory is that we've hit some kind of ceiling that my rudimentary tuning at install time is not sufficient to deal with.

Description: a dedicated 1 Gb switch, with the target connected via 8 ports and 4 initiators (Proxmox nodes) connected via 4 ports each.
The target runs ggaoed; the initiators are stock Debian installs with aoe-tools. (I may reduce the target to 7 ports.)

On the target, ifconfig shows a slowly growing number of dropped packets and overruns, about 0.01% of RX packets. That doesn't look like much as a percentage, but it probably translates into iowait on the initiators, since some fraction of writes has to be repeated.
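A quick way to recompute that drop percentage from the kernel's own counters (the interface name eth0 is an assumption; substitute the AoE-facing NIC):

```shell
iface=eth0   # assumption: replace with the target's AoE-facing interface
stats=/sys/class/net/$iface/statistics

rx=$(cat "$stats/rx_packets")
drop=$(cat "$stats/rx_dropped")

# Percentage of received packets dropped; the +1 avoids division by
# zero on an idle interface.
awk -v rx="$rx" -v d="$drop" \
    'BEGIN { printf "%.4f%% dropped\n", 100 * d / (rx + 1) }'
```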

Recently I increased the ring buffer on ggaoed, which seems to have improved things somewhat. Earlier on this mailing list someone referenced the aoe_maxout parameter, which I have never set, but /sys/module/aoe/parameters/aoe_maxout reads 128 on all initiators: https://www.spinics.net/lists/aoetools/msg00058.html



The recommendation in that earlier message was to set it to 8 instead of 16. 128 is much more than either of those, so I was wondering whether it's chosen automatically by the module when it loads.

Questions: In /sys/block/etherd\!e0.0/debug, what does the kicked value mean?
                Can aoe kernel module parameters such as aoe_maxout be altered on the fly, using sysctl or something similar?

                (It's tough doing tuning on a live system.)


Thank you for any response

------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
_______________________________________________
Aoetools-discuss mailing list
Aoetools-discuss@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/aoetools-discuss


