Re: iser_alloc_fmr_pool: FMR allocation failed, err -12

On 12/13/2018 11:55 PM, Sagi Grimberg wrote:
>> Hi,

> Hi Michal,

>> If I log in to too many iSER targets (above 250?), logging in fails with an FMR allocation error. Then I'm unable to log any target in or out.
>>
>> The bug is similar to https://www.spinics.net/lists/linux-rdma/msg51639.html, but in my case I'm using Mellanox OFED 4.2-1.2.0.0 on XenServer 7.5 (kernel 4.4.52), and most of the configuration files are missing on my system.

> I cannot provide any feedback on Mellanox OFED, I would advise you to
> contact Mellanox support for that...


It's not a Mellanox OFED issue. From the description (err -12 is -ENOMEM), it's a resource issue.



>> * 3 iSCSI/iSER initiators
>> * each providing multiple (300? 500?) separate targets

> OK.

>> [ 697.303260] scsi host73: iSCSI Initiator over iSER
>> [ 697.327380] fmr_pool: fmr_create failed for FMR 168
>> [ 697.335201] iser: iser_alloc_fmr_pool: FMR allocation failed, err -12
>> [ 697.335216] iser: iser_alloc_rx_descriptors: failed allocating rx descriptors / data buffers
>> [ 727.121794] iser: iser_disconnected_handler: iscsi_iser connection isn't bound

> What device are you using?

> It's just that I have a patch set I've been meaning to send out soon that
> gets rid of FMRs altogether. Is it ConnectX-3, or an older generation?


Sagi,

please send the patch. I don't see a real need to use FMR, and we don't use it in NVMf/SRP, so let's remove it from iSER as well.
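
For reference, a minimal sketch of the direction I mean (my own illustrative code, not the actual patch set; alloc_fr_mrs, nr_cmds and max_sge are made-up names, only ib_alloc_mr()/ib_dereg_mr() are the real verbs):

#include <linux/err.h>
#include <rdma/ib_verbs.h>

/*
 * Illustrative only: allocate one fast-registration MR per inflight
 * command instead of an FMR pool. The footprint still scales with the
 * queue depth, but it needs no FMR support from the device.
 */
static int alloc_fr_mrs(struct ib_pd *pd, struct ib_mr **mrs,
			int nr_cmds, int max_sge)
{
	int i, ret;

	for (i = 0; i < nr_cmds; i++) {
		struct ib_mr *mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG,
					       max_sge);

		if (IS_ERR(mr)) {
			ret = PTR_ERR(mr);
			goto err;
		}
		mrs[i] = mr;
	}
	return 0;

err:
	while (--i >= 0)
		ib_dereg_mr(mrs[i]);
	return ret;
}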



> Anyway, regardless, this is a generic problem that is not specific to
> iSER, I think. iSER allocates a finite set of resources per session, and
> each of them has HW resources and memory associated with it.
>
> Now, given that the amount of memory varies between device providers, it's
> rather impossible to calculate it in a deterministic way.
>
> It's clear that iSER cannot establish an infinite number of sessions. Do
> you have a target number of sessions you want to reach?
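
To put rough numbers on it (back-of-envelope, assuming the per-session FMR pool is sized on the order of cmds_max): 250 sessions x 128 commands is already ~32,000 FMRs, each with its own page list and HW context, plus the rx descriptors per session. Total usage grows linearly in both the session count and the queue depth.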

> Perhaps you can lower your queue depth? That would introduce nice memory
> savings. The default is:
>
> node.session.cmds_max = 128
>
> I assume you don't need 300 sessions with a queue depth of 128... Perhaps
> you can settle for 64 or 32 instead?
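
Michal, in case it helps: the setting lives in /etc/iscsi/iscsid.conf (picked up by nodes discovered afterwards), or you can update an existing node record with iscsiadm. The IQN and portal below are placeholders for your setup:

  # /etc/iscsi/iscsid.conf
  node.session.cmds_max = 32

  # or per existing node record:
  iscsiadm -m node -T iqn.2018-12.com.example:tgt0 -p 192.168.1.10 \
      -o update -n node.session.cmds_max -v 32

The new value takes effect on the next login for that session.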


