On 7/22/22 01:30, Avri Altman wrote:
>> Measurements have shown that for some UFS devices the maximum
>> sequential I/O throughput is achieved with a transfer size above 512
>> KiB. Hence increase the maximum size of the data buffer associated
>> with a single request from SCSI_DEFAULT_MAX_SECTORS (1024) * 512
>> bytes = 512 KiB into 1 GiB.
>
> Did you choose 1GB to align with BLK_DEF_MAX_SECTORS?
No particular reason.
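
As a side note, whichever limit is actually in effect on a given setup
can be read back from the request queue's sysfs attributes. A minimal
sketch (illustration only, not part of the patch; "sda" matches the
device used in the fio run below):

$ cat /sys/block/sda/queue/max_hw_sectors_kb  # limit reported by the driver, in KiB
$ cat /sys/block/sda/queue/max_sectors_kb     # soft limit applied to regular I/O, in KiB
$ # As root, the soft limit can be raised up to the hardware limit so that
$ # large --bs values are not split into smaller requests by the block layer:
$ echo "$(cat /sys/block/sda/queue/max_hw_sectors_kb)" > /sys/block/sda/queue/max_sectors_kb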
> Can you share those performance measurements? For some reason, I
> always thought that SR performance is saturated somewhere around 1MB.
That's also what I see on my test setup (I only tried one UFS device brand; results may differ for other brands):
$ i=12; while ((i<=30)); do ./fio --rw=read --ioengine=psync --direct=1 --ioscheduler=none --size=100% --time_based=1 --runtime=30 --filename=/dev/block/sda --name=ufs --gtod_reduce=1 --bs=$((1<<i)); ((i++)); done 2>&1 | grep read:
   read: IOPS=3714, BW=14.5MiB/s (15.2MB/s)(435MiB/30017msec)
   read: IOPS=2659, BW=20.8MiB/s (21.8MB/s)(623MiB/30003msec)
   read: IOPS=2488, BW=38.9MiB/s (40.8MB/s)(1167MiB/30016msec)
   read: IOPS=2102, BW=65.7MiB/s (68.9MB/s)(1972MiB/30006msec)
   read: IOPS=1635, BW=102MiB/s (107MB/s)(3068MiB/30019msec)
   read: IOPS=1630, BW=204MiB/s (214MB/s)(6120MiB/30035msec)
   read: IOPS=1228, BW=307MiB/s (322MB/s)(9232MiB/30061msec)
   read: IOPS=752, BW=376MiB/s (395MB/s)(11.0GiB/30008msec)
   read: IOPS=472, BW=473MiB/s (496MB/s)(13.9GiB/30043msec)
   read: IOPS=107, BW=216MiB/s (226MB/s)(6524MiB/30249msec)
   read: IOPS=66, BW=267MiB/s (280MB/s)(8184MiB/30666msec)
   read: IOPS=38, BW=305MiB/s (319MB/s)(9200MiB/30210msec)
   read: IOPS=18, BW=292MiB/s (306MB/s)(9184MiB/31454msec)
   read: IOPS=9, BW=302MiB/s (316MB/s)(9.94GiB/33731msec)
   read: IOPS=5, BW=326MiB/s (342MB/s)(11.9GiB/37278msec)
   read: IOPS=2, BW=277MiB/s (290MB/s)(15.8GiB/58277msec)
   read: IOPS=1, BW=302MiB/s (316MB/s)(15.5GiB/52626msec)
   read: IOPS=0, BW=284MiB/s (298MB/s)(31.0GiB/111736msec)

Bart.