Re: lib/scatterlist.c : sgl_alloc_order promises more than it delivers

On 2020-09-24 21:55, Douglas Gilbert wrote:
> My code steps down from 1024 KiB elements on failure to 512 KiB and, if that
> fails, it tries 256 KiB. Then it gives up. The log output is consistent with
> my analysis. So your stated equality is an inequality when length >= 4 GiB.
> There is no promotion of unsigned int nent to uint64_t.
> 
> You can write your own test harness if you don't believe me. The test machine
> doesn't need much RAM. Without the call to sgl_free() corrected, if it really
> had tried to get that much RAM, failed toward the end, and then (partially)
> freed what it had obtained, you would see a huge memory leak ...
> 
> Now your intention seems to be that a 4 GiB sgl should be valid. Correct?
> Can that check just be dropped?

Hi Doug,

When I wrote that code, I did not expect that anyone would try to allocate
4 GiB or more as a single scatterlist. Are there any use cases for which a
4 GiB scatterlist works better than two or more smaller scatterlists?
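For what it's worth, the wrap you describe is easy to reproduce in user space.
Below is a minimal sketch; it assumes the overflow check still reads
length > (nent << (PAGE_SHIFT + order)) with nent declared as unsigned int,
so the shift is evaluated in 32 bits:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

int main(void)
{
        unsigned long long length = 1ULL << 32; /* 4 GiB */
        unsigned int order = 8;                 /* 1 MiB elements */
        unsigned int nent;

        /* mirrors: nent = round_up(length, PAGE_SIZE << order) >>
         *                 (PAGE_SHIFT + order); */
        nent = (length + (PAGE_SIZE << order) - 1) >> (PAGE_SHIFT + order);

        /* nent is 32 bits wide, so the shift wraps instead of reaching 4 GiB */
        unsigned int wrapped = nent << (PAGE_SHIFT + order);
        unsigned long long promoted =
                (unsigned long long)nent << (PAGE_SHIFT + order);

        printf("nent=%u wrapped=%u promoted=%llu reject=%d\n",
               nent, wrapped, promoted, length > wrapped);
        return 0;
}

With a 4 KiB page size this prints nent=4096 wrapped=0 promoted=4294967296
reject=1, i.e. the 4 GiB request is rejected even though 4096 one-MiB
elements would cover it exactly; casting nent to a 64-bit type before the
shift (or dropping the check) avoids the wrap.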

Do you agree that many hardware DMA engines do not support transferring
4 GiB or more at once?

Thanks,

Bart.


