Re: lib/scatterlist.c : sgl_alloc_order promises more than it delivers

On 2020-09-24 21:55, Douglas Gilbert wrote:
> On failure my code steps down from 1024 KiB elements to 512 KiB, and if that
> fails it tries 256 KiB. Then it gives up. The log output is consistent with
> my analysis. So your stated equality is an inequality when length >= 4 GiB.
> There is no promotion of unsigned int nent to uint64_t.
> 
> You can write your own test harness if you don't believe me. The test machine
> doesn't need much RAM. Without the call to sgl_free() corrected, if it really
> did try to get that much RAM, failed toward the end, and then only partially
> freed what it had obtained, you would see a huge memory leak ...
> 
> Now your intention seems to be that a 4 GiB sgl should be valid. Correct?
> Can that check just be dropped?

Hi Doug,

When I wrote that code, I did not expect that anyone would try to allocate
4 GiB or more as a single scatterlist. Are there any use cases for which a
4 GiB scatterlist works better than two or more smaller scatterlists?

Do you agree that many hardware DMA engines do not support transferring
4 GiB or more at once?
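
As far as I can tell from your analysis, the overflow check itself is what
fails. Here is a minimal sketch of what I believe happens (the exact code in
lib/scatterlist.c may differ slightly; PAGE_SHIFT of 12 is assumed):

        unsigned long long length = 4ULL << 30; /* 4 GiB request */
        unsigned int order = 8;                 /* 1024 KiB elements */
        unsigned int nent;

        nent = round_up(length, PAGE_SIZE << order) >> (PAGE_SHIFT + order);
        /*
         * nent is unsigned int, so nent << (PAGE_SHIFT + order) is evaluated
         * in 32-bit arithmetic: here 4096 << 20 wraps to 0 and the check
         * below rejects every length >= 4 GiB.
         */
        if (length > (nent << (PAGE_SHIFT + order)))
                return NULL;

If we do want to allow such lengths, casting nent to u64 before the shift, or
dropping the check as you suggest, would avoid the wrap.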

Thanks,

Bart.


