On Wed, Apr 05, 2023 at 07:50:34AM +0200, Petr Tesařík wrote:
> On Wed, 5 Apr 2023 07:32:06 +0200
> Petr Tesařík <petr@xxxxxxxxxxx> wrote:
>
> > On Wed, 5 Apr 2023 05:11:42 +0000
> > Dexuan Cui <decui@xxxxxxxxxxxxx> wrote:
> >
> > > > From: Petr Tesařík <petr@xxxxxxxxxxx>
> > > > Sent: Tuesday, April 4, 2023 9:40 PM
> > > ...
> > > > > > Hi Petr, this patch has gone into the mainline:
> > > > > > 0eee5ae10256 ("swiotlb: fix slot alignment checks")
> > > > > >
> > > > > > Somehow it breaks Linux VMs on Hyper-V: a regular VM with
> > > > > > swiotlb=force or a confidential VM (which uses swiotlb) fails to boot.
> > > > > > If I revert this patch, everything works fine.
> > > > >
> > > > > The log is pasted below. Looks like the SCSI driver hv_storvsc fails to
> > > > > detect the disk capacity:
> > > >
> > > > The first thing I can imagine is that there are in fact no (free) slots
> > > > in the SWIOTLB which match the alignment constraints, so the map
> > > > operation fails. However, this would result in a "swiotlb buffer is
> > > > full" message in the log, and I can see no such message in the log
> > > > excerpt you have posted.
> > > >
> > > > Please, can you check if there are any "swiotlb" messages preceding the
> > > > first error message?
> > > >
> > > > Petr T
> > >
> > > There is no "swiotlb buffer is full" error.
> > >
> > > The hv_storvsc driver (drivers/scsi/storvsc_drv.c) calls scsi_dma_map(),
> > > which doesn't return -ENOMEM when the failure happens.
> >
> > I see...
> >
> > Argh, you're right. This is a braino. The alignment mask is in fact an
> > INVERTED mask, i.e. it masks off bits that are not relevant for the
> > alignment. The more strict the alignment needed, the more bits must be
> > set, so the individual alignment constraints must be combined with an OR
> > instead of an AND.
> >
> > Can you apply the following change and check if it fixes the issue?
>
> Actually, this will not work either.
> The mask is used to mask off both high address bits and low address bits
> (below swiotlb slot granularity).
>
> What should help is this:

Hi Petr,

The suggested fix on this patch boots for me and initially looks OK, but
when I start to use git commands I get flooded with "swiotlb buffer is
full" messages and my session becomes unusable. This is on WSL, which
uses Hyper-V.

I noticed today that these same warnings appear when I build kernels
while running a 6.1 kernel (i.e. 6.1.21). I couldn't reproduce these
messages on a 5.15 kernel. Before applying this patch, I had only been
able to trigger the "swiotlb buffer is full" messages during kernel
builds, and they caused a slight delay. I haven't had a chance to bisect
yet to find out more.

Should a working version of this patch help resolve the warnings rather
than add more, or should I be looking elsewhere? I've included a small
chunk of my log below. Please let me know if there's anything else I can
supply to help out. I appreciate your time and help!
-Kelsey

[  123.951630] hv_storvsc fd1d2cbd-ce7c-535c-966b-eb5f811c95f0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
[  128.451717] swiotlb_tbl_map_single: 74 callbacks suppressed
[  128.451723] hv_storvsc fd1d2cbd-ce7c-535c-966b-eb5f811c95f0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
[  128.511736] hv_storvsc fd1d2cbd-ce7c-535c-966b-eb5f811c95f0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
[  128.571704] hv_storvsc fd1d2cbd-ce7c-535c-966b-eb5f811c95f0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
[  128.631713] hv_storvsc fd1d2cbd-ce7c-535c-966b-eb5f811c95f0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
[  128.691625] hv_storvsc fd1d2cbd-ce7c-535c-966b-eb5f811c95f0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)

> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 5b919ef832b6..c924e53d679e 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -622,8 +622,7 @@ static int swiotlb_do_find_slots(struct device *dev, int area_index,
>  	dma_addr_t tbl_dma_addr =
>  		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
>  	unsigned long max_slots = get_max_slots(boundary_mask);
> -	unsigned int iotlb_align_mask =
> -		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
> +	unsigned int iotlb_align_mask;
>  	unsigned int nslots = nr_slots(alloc_size), stride;
>  	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>  	unsigned int index, slots_checked, count = 0, i;
> @@ -639,8 +638,9 @@ static int swiotlb_do_find_slots(struct device *dev, int area_index,
>  	 * allocations.
>  	 */
>  	if (alloc_size >= PAGE_SIZE)
> -		iotlb_align_mask &= PAGE_MASK;
> -	iotlb_align_mask &= alloc_align_mask;
> +		iotlb_align_mask |= ~PAGE_MASK;
> +	iotlb_align_mask |= alloc_align_mask | dma_get_min_align_mask(dev);
> +	iotlb_align_mask &= ~(IO_TLB_SIZE - 1);
>
>  	/*
>  	 * For mappings with an alignment requirement don't bother looping to
>
> Petr T