Re: SQ overflow seen running isert traffic

I tried out this change and it works fine with iWARP. I don't see SQ
overflow. Apparently we have made the SQ big enough not to overflow. I am
going to let it run with higher workloads for a longer time, to see if it
holds up.

Actually, on second thought, this patch is overkill. Effectively we
now set:

MAX_CMD=266
and max_rdma_ctx=128, so together we take 394, which seems too much.

If we go by the scheme of 1 rdma + 1 send for each IO we need:
- 128 sends
- 128 rdmas
- 10 miscs

so this gives 266.

Perhaps this is due to the fact that iWARP needs to register memory for
rdma reads as well? (and also rdma writes > 128k for Chelsio HW, right?)

What is the workload you are running? With ImmediateData enabled, you
should issue reg+rdma_read+send only for writes > 8k.

Does this happen when you run only reads for example?

I guess it's time to get the SQ accounting into shape...
--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
