On 2020/03/24 9:02, Keith Busch wrote:
> On Tue, Mar 24, 2020 at 08:09:19AM +0900, Tokunori Ikegami wrote:
>> Hi,
>>
>>> The change looks okay, but why do we need such a large data length?
>>> Do you have a use-case or performance numbers?
>>
>> We use the large data length to get a log page via the NVMe admin
>> command. In the past it was possible to read it with the same length,
>> but currently it fails, so it seems to depend on the kernel version;
>> the behavior changed with a kernel update.
> We didn't have 32-bit max segments before, though. Why were 16 bits
> enough in older kernels? In which kernel did this stop working?
I am now asking the reporter for the detailed information, so let me
update later.
The same command script with the large data length worked in the past.
Also, I have confirmed that it currently fails with the length
0x10000000 (256 MiB).
> If you're hitting the max segment limit before any other limit, you
> should be able to do larger transfers with more physically contiguous
> memory. Huge pages can get the same data length in fewer segments, if
> you want to try that.
>
> But wouldn't it be better if your application split the transfer into
> smaller chunks across multiple commands? The NVMe log page command
> supports offsets for this reason.
Yes, we are actually using the offset parameter now to split the data.
For future usage, it also seems better to support the large size.