Re: [PATCH] swiotlb: Add swiotlb=off to disable SWIOTLB

On 2021-03-18 21:31, Florian Fainelli wrote:
> On 3/18/2021 12:53 PM, Robin Murphy wrote:
>> On 2021-03-18 19:43, Florian Fainelli wrote:
>>> On 3/18/2021 12:34 PM, Robin Murphy wrote:
>>>> On 2021-03-18 19:22, Florian Fainelli wrote:
>>>>> On 3/18/2021 12:18 PM, Florian Fainelli wrote:
>>>>>> It may be useful to disable the SWIOTLB completely for testing, or
>>>>>> when a platform is known not to have any DRAM addressing
>>>>>> limitations whatsoever.

>>>> Isn't that what "swiotlb=noforce" is for? If you're confident that
>>>> we've really ironed out *all* the awkward corners that used to blow
>>>> up if various internal bits were left uninitialised, then it would
>>>> make sense to just tweak the implementation of what we already have.

>>> swiotlb=noforce does prevent dma_direct_map_page() from resorting to
>>> the swiotlb; however, what I am also after is reclaiming the 64MB of
>>> default SWIOTLB bounce buffering memory, because my systems run with
>>> large amounts of reserved memory in ZONE_MOVABLE and everything in
>>> ZONE_NORMAL is precious at that point.
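
For reference, the bypass being discussed is the pair of checks in
dma_direct_map_page() - paraphrasing from memory of ~v5.11
kernel/dma/direct.h, so treat this as a sketch rather than a verbatim
quote:

	/* dma_direct_map_page(), approximately (~v5.11): */
	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
		return swiotlb_map(dev, phys, size, dir, attrs);

	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
		if (swiotlb_force != SWIOTLB_NO_FORCE)
			return swiotlb_map(dev, phys, size, dir, attrs);
		/* ...otherwise warn and return DMA_MAPPING_ERROR */
	}

With noforce, neither branch can bounce, so the only remaining cost is
whatever the bounce buffer pool itself occupies.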

>> It also forces io_tlb_nslabs to the minimum, so it should be claiming
>> considerably less than 64MB. IIRC the original proposal *did* skip
>> initialisation completely, but that turned up the aforementioned
>> issues.

> AFAICT in that case we will have io_tlb_nslabs set to 1, which will
> still make us allocate io_tlb_nslabs << IO_TLB_SHIFT bytes in
> swiotlb_init(), which still gives us 64MB.

Eh? When did 2KB become 64MB? IO_TLB_SHIFT is 11, so that's at most one page in anyone's money...
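
To put numbers on it (back-of-the-envelope only, assuming the ~v5.11
constants of 2KB slabs, a 32768-slab default pool and 4KB pages - not
the kernel code itself):

	#include <stdio.h>

	#define IO_TLB_SHIFT	11		/* each slab is 2KB */
	#define DEFAULT_NSLABS	(32UL * 1024)	/* default pool: 32768 slabs */
	#define PAGE_SZ		4096UL

	int main(void)
	{
		unsigned long def_bytes = DEFAULT_NSLABS << IO_TLB_SHIFT;
		unsigned long noforce_bytes = 1UL << IO_TLB_SHIFT;

		/* 64 MB for the default pool... */
		printf("default: %lu MB\n", def_bytes >> 20);
		/* ...versus 2048 bytes, i.e. a single page once rounded up */
		printf("noforce: %lu bytes (one %lu-byte page)\n",
		       noforce_bytes, PAGE_SZ);
		return 0;
	}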

>>>> I wouldn't necessarily disagree with adding "off" as an additional
>>>> alias for "noforce", though, since it does come across as a bit
>>>> wacky for general use.
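
As an alias it would presumably amount to little more than an extra
strcmp(). The below is only a sketch from memory of the ~v5.11
setup_io_tlb_npages() parser with a hypothetical "off" branch added,
not a literal diff against kernel/dma/swiotlb.c:

	static int __init
	setup_io_tlb_npages(char *str)
	{
		if (isdigit(*str)) {
			io_tlb_nslabs = simple_strtoul(str, &str, 0);
			/* avoid tail segment of size < IO_TLB_SEGSIZE */
			io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
		}
		if (*str == ',')
			++str;
		if (!strcmp(str, "force")) {
			swiotlb_force = SWIOTLB_FORCE;
		} else if (!strcmp(str, "noforce") || !strcmp(str, "off")) {
			/* hypothetical "off" alias: same semantics as noforce */
			swiotlb_force = SWIOTLB_NO_FORCE;
			io_tlb_nslabs = 1;
		}

		return 0;
	}
	early_param("swiotlb", setup_io_tlb_npages);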

>>>>>> Signed-off-by: Florian Fainelli <f.fainelli@xxxxxxxxx>

>>>>> Christoph, in addition to this change, how would you feel if we
>>>>> qualified the swiotlb_init() in arch/arm/mm/init.c with a:
>>>>>
>>>>> 	if (memblock_end_of_DRAM() >= SZ_4G)
>>>>> 		swiotlb_init(1);

Modulo "swiotlb=force", of course ;)

>>> Indeed, we would need to handle that case as well. Does it sound
>>> reasonable to do that to you as well?

>> I wouldn't like it done to me personally, but for arm64, observe what
>> mem_init() in arch/arm64/mm/init.c already does.

In fact I should have looked more closely at that myself - checking
debugfs on my 4GB arm64 board actually shows io_tlb_nslabs = 0, and
indeed we are bypassing initialisation completely and (ab)using
SWIOTLB_NO_FORCE to cover it up, so I guess it probably *is* safe now
for the noforce option to do the same for itself and save even that one
page.
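
For completeness, the arm64 check in question is, going from memory of
arch/arm64/mm/init.c around v5.11 (so possibly not letter-perfect),
roughly:

	/* mem_init(), approximately: */
	if (swiotlb_force == SWIOTLB_FORCE ||
	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
		swiotlb_init(1);
	else
		swiotlb_force = SWIOTLB_NO_FORCE;

Something along those lines, with the swiotlb=force case handled, is
what the proposed arch/arm/mm/init.c check would want to mirror.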

Robin.


