On 9/2/22 07:52, Bean Huo wrote:
> On Wed, 2022-08-03 at 09:23 -0700, Bart Van Assche wrote:
>> On 8/2/22 16:40, yohan.joung@xxxxxx wrote:
>>> Is it possible by adding only max_sectors to increase the data
>>> buffer size?
>>
>> Yes.
>>
>>> I think the data buffer will be split at 512 KiB, because the
>>> sg_table size is SG_ALL
>>
>> I don't think so. With this patch applied, the limits supported by
>> the UFS driver are as follows:
>>
>>     .sg_tablesize     = SG_ALL,                      /* 128 */
>>     .max_segment_size = PRDT_DATA_BYTE_COUNT_MAX,    /* 256 KiB */
>>     .max_sectors      = (1 << 20) / SECTOR_SIZE,     /* 1 MiB */
>>
>> So the maximum data buffer size is min(max_sectors * 512,
>> sg_tablesize * max_segment_size) = min(1 MiB, 128 * 256 KiB) = 1 MiB.
>> On a system with 4 KiB pages, the data buffer size will be
>> 128 * 4 KiB = 512 KiB if none of the pages involved in the I/O are
>> contiguous.
>
> This change just increases the shost->max_sectors limit from 512 KiB
> to 1 MiB, but the final value will be overridden by the optimal
> transfer length defined in the VPD, right?

Hi Bean,
It seems to me that the block layer only uses the optimal transfer size
(io_opt) to determine how much data to read ahead during sequential
reads? See also disk_update_readahead().
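Roughly, what that function does is the following (a paraphrased sketch,
not the verbatim upstream code):

    /* Read ahead at least twice the optimal I/O size, but never less
     * than the VM default; max_sectors is not touched here.
     */
    disk->bdi->ra_pages = max(queue_io_opt(q) * 2 / PAGE_SIZE,
                              VM_READAHEAD_PAGES);
    disk->bdi->io_pages = queue_max_sectors(q) >> (PAGE_SHIFT - 9);

In other words, io_opt only feeds the read-ahead window; it does not cap
the maximum transfer size.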
The above patch increases max_sectors but is not sufficient to increase
the maximum transfer size: SG_ALL (128) * 4 KiB (dma_boundary + 1) = 512
KiB. To increase the maximum transfer size, the dma_boundary parameter
would have to be modified. I have not yet submitted a patch that
modifies that parameter since on my test setup (Exynos host controller)
the current value is the largest value supported.
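To make the arithmetic explicit, here is a back-of-the-envelope sketch
(not driver code; the numbers are the ones quoted above):

    /* Worst case: no two pages of the buffer are physically contiguous,
     * so each of the SG_ALL entries maps at most
     * min(max_segment_size, dma_boundary + 1) bytes.
     */
    static unsigned long max_transfer_bytes(void)
    {
        unsigned long max_sectors_bytes = 1UL << 20;   /* max_sectors * 512 */
        unsigned long seg_bytes = 4096;      /* dma_boundary + 1 = PAGE_SIZE */
        unsigned long sg_bytes  = 128 * seg_bytes;     /* SG_ALL entries */

        return max_sectors_bytes < sg_bytes ? max_sectors_bytes : sg_bytes;
        /* 512 KiB with the current dma_boundary; 1 MiB if it were raised. */
    }
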
Thanks,
Bart.