On 29.01.25 19:46, Harshvardhan Jha wrote:
> On 30/01/25 12:13 AM, Jürgen Groß wrote:
>> On 29.01.25 19:35, Harshvardhan Jha wrote:
>>> On 29/01/25 4:52 PM, Juergen Gross wrote:
>>>> On 29.01.25 10:15, Harshvardhan Jha wrote:
>>>>> On 29/01/25 2:34 PM, Greg KH wrote:
>>>>>> On Wed, Jan 29, 2025 at 02:29:48PM +0530, Harshvardhan Jha wrote:
>>>>>>> Hi Greg,
>>>>>>>
>>>>>>> On 29/01/25 2:18 PM, Greg KH wrote:
>>>>>>>> On Wed, Jan 29, 2025 at 02:13:34PM +0530, Harshvardhan Jha wrote:
>>>>>>>>> Hi there,
>>>>>>>>>
>>>>>>>>> On 29/01/25 2:05 PM, Greg KH wrote:
>>>>>>>>>> On Wed, Jan 29, 2025 at 02:03:51PM +0530, Harshvardhan Jha wrote:
>>>>>>>>>>> Hi All, +stable
>>>>>>>>>>>
>>>>>>>>>>> There seem to be some formatting issues in my log output. I have
>>>>>>>>>>> attached it as a file.
>>>>>>>>>>
>>>>>>>>>> Confused, what are you wanting us to do here in the stable tree?
>>>>>>>>>>
>>>>>>>>>> thanks,
>>>>>>>>>>
>>>>>>>>>> greg k-h
>>>>>>>>>
>>>>>>>>> Since this is reproducible on 5.4.y, I have added stable. The culprit
>>>>>>>>> commit, which fixes this issue when reverted, is also present in
>>>>>>>>> 5.4.y stable.
>>>>>>>>
>>>>>>>> What culprit commit? I see no information here :(
>>>>>>>>
>>>>>>>> Remember, top-posting is evil...
>>>>>>>
>>>>>>> My apologies.
>>>>>>>
>>>>>>> The stable tag v5.4.289 seems to fail to boot with the following prompt
>>>>>>> in an infinite loop:
>>>>>>>
>>>>>>> [ 24.427217] megaraid_sas 0000:65:00.0: megasas_build_io_fusion 3273
>>>>>>> sge_count (-12) is out of range. Range is: 0-256
>>>>>>>
>>>>>>> Reverting the following patch seems to fix the issue:
>>>>>>>
>>>>>>> stable-5.4 : v5.4.285 - 5df29a445f3a xen/swiotlb: add alignment check
>>>>>>> for dma buffers
>>>>>>>
>>>>>>> I tried changing the swiotlb grub command line arguments, but that
>>>>>>> unfortunately didn't seem to help much and the error was seen again.
>>>>>>
>>>>>> Ok, can you submit this revert with the information about why it should
>>>>>> not be included in the 5.4.y tree, and cc: everyone involved, and then
>>>>>> we will be glad to queue it up.
>>>>>>
>>>>>> thanks,
>>>>>>
>>>>>> greg k-h
>>>>>
>>>>> This might be reproducible on other stable trees and mainline as well,
>>>>> so we will get it fixed there and I will submit the necessary fix to
>>>>> stable when everything is sorted out on mainline.
>>>>
>>>> Right. Just reverting my patch will trade one error for another one (the
>>>> one which triggered me to write the patch).
>>>>
>>>> There are two possible ways to fix the issue:
>>>>
>>>> - allow larger DMA buffers in xen/swiotlb (today 2MB is the max. supported
>>>>   size, while the megaraid_sas driver seems to effectively request 4MB)
>>>
>>> This seems relatively simpler to implement, but I'm not sure whether it's
>>> the most optimal approach.
>>
>> Just making the static array used to hold the frame numbers for the buffer
>> larger seems to be a waste of memory for most configurations.
>
> Yep, definitely not required in most cases.
>
>> I'm thinking of an allocated array using the max needed size (replacing a
>> former buffer with a larger one if needed).
>
> This seems like the right way to go.
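(Aside, for reference: the arithmetic behind the 2MB vs. 4MB figures above, as
a standalone userspace sketch rather than kernel code. order_for_size() is a
made-up stand-in for the kernel's get_order(), 4KiB pages are assumed, and the
~2.3MB figure is the megaraid_sas allocation size mentioned in the patch
below.)

/* Standalone sketch (not kernel code): why a ~2.3MB DMA buffer ends up as an
 * order-10 (4MB) contiguous region with 4KiB pages, needing 1024 frame
 * entries while the static frame array described below holds only 512
 * (order 9, 2MB). */
#include <stdio.h>

#define PAGE_SHIFT 12			/* assumed 4KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Mirrors what the kernel's get_order() computes: the smallest order such
 * that (PAGE_SIZE << order) >= size (size must be non-zero). */
static unsigned int order_for_size(unsigned long size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long size = 2300000UL;	/* ~2.3MB, the reported megaraid_sas buffer */
	unsigned int order = order_for_size(size);

	printf("%lu bytes -> order %u -> %lu KiB region, %lu frames (static array: %u)\n",
	       size, order, (PAGE_SIZE << order) >> 10, 1UL << order, 1U << 9);
	return 0;
}

An order-10 region needs 1024 frame entries, which is why the 512-entry
(order-9, 2MB) static array is too small for this request.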
Can you try the attached patch, please? I don't have a system at hand showing
the problem.

Juergen
From cff43e997f79a95dc44e02debaeafe5f127f40bb Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@xxxxxxxx>
Date: Thu, 30 Jan 2025 09:56:57 +0100
Subject: [PATCH] x86/xen: allow larger contiguous memory regions in PV guests

Today a PV guest (including dom0) can create 2MB contiguous memory
regions for DMA buffers at max. This has led to problems at least with
the megaraid_sas driver, which wants to allocate a 2.3MB DMA buffer.

The limiting factor is the frame array used to do the hypercall for
making the memory contiguous, which has 512 entries and is just a
static array in mmu_pv.c.

In case a contiguous memory area larger than the initially supported
2MB is requested, allocate a larger buffer for the frame list. Note
that such an allocation is tried only after memory management has been
initialized properly, which is tested via the early_boot_irqs_disabled
flag.

Fixes: 9f40ec84a797 ("xen/swiotlb: add alignment check for dma buffers")
Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
---
Note that the "Fixes:" tag is not really correct, as that patch didn't
introduce the problem, but rather made it visible. OTOH it is the best
indicator we have to identify kernel versions this patch should be
backported to.
---
 arch/x86/xen/mmu_pv.c | 44 ++++++++++++++++++++++++++++++++++++-------
 1 file changed, 37 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 55a4996d0c04..62aec29b8174 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2200,8 +2200,10 @@ void __init xen_init_mmu_ops(void)
 }
 
 /* Protected by xen_reservation_lock. */
-#define MAX_CONTIG_ORDER 9 /* 2MB */
-static unsigned long discontig_frames[1<<MAX_CONTIG_ORDER];
+#define MIN_CONTIG_ORDER 9 /* 2MB */
+static unsigned int discontig_frames_order = MIN_CONTIG_ORDER;
+static unsigned long discontig_frames_early[1UL << MIN_CONTIG_ORDER];
+static unsigned long *discontig_frames = discontig_frames_early;
 
 #define VOID_PTE (mfn_pte(0, __pgprot(0)))
 static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
@@ -2319,18 +2321,44 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
 				 unsigned int address_bits,
 				 dma_addr_t *dma_handle)
 {
-	unsigned long *in_frames = discontig_frames, out_frame;
+	unsigned long *in_frames, out_frame;
+	unsigned long *new_array, *old_array;
 	unsigned long flags;
 	int success;
 	unsigned long vstart = (unsigned long)phys_to_virt(pstart);
 
-	if (unlikely(order > MAX_CONTIG_ORDER))
-		return -ENOMEM;
+	if (unlikely(order > discontig_frames_order)) {
+		if (early_boot_irqs_disabled)
+			return -ENOMEM;
+
+		new_array = vmalloc(sizeof(unsigned long) * (1UL << order));
+
+		if (!new_array)
+			return -ENOMEM;
+
+		spin_lock_irqsave(&xen_reservation_lock, flags);
+
+		if (order > discontig_frames_order) {
+			if (discontig_frames == discontig_frames_early)
+				old_array = NULL;
+			else
+				old_array = discontig_frames;
+			discontig_frames = new_array;
+			discontig_frames_order = order;
+		} else
+			old_array = new_array;
+
+		spin_unlock_irqrestore(&xen_reservation_lock, flags);
+
+		vfree(old_array);
+	}
 
 	memset((void *) vstart, 0, PAGE_SIZE << order);
 
 	spin_lock_irqsave(&xen_reservation_lock, flags);
 
+	in_frames = discontig_frames;
+
 	/* 1. Zap current PTEs, remembering MFNs. */
 	xen_zap_pfn_range(vstart, order, in_frames, NULL);
 
@@ -2354,12 +2382,12 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
 
 void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 {
-	unsigned long *out_frames = discontig_frames, in_frame;
+	unsigned long *out_frames, in_frame;
 	unsigned long flags;
 	int success;
 	unsigned long vstart;
 
-	if (unlikely(order > MAX_CONTIG_ORDER))
+	if (unlikely(order > discontig_frames_order))
 		return;
 
 	vstart = (unsigned long)phys_to_virt(pstart);
@@ -2367,6 +2395,8 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 
 	spin_lock_irqsave(&xen_reservation_lock, flags);
 
+	out_frames = discontig_frames;
+
 	/* 1. Find start MFN of contiguous extent. */
 	in_frame = virt_to_mfn((void *)vstart);
 
-- 
2.43.0
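(Aside, for reference: a rough userspace model of the allocation pattern the
patch uses, with a pthread mutex standing in for xen_reservation_lock and
malloc()/free() for vmalloc()/vfree(). The names ensure_frames(), frames and
frames_early are made up for illustration and are not the kernel identifiers.)

/* Userspace model (illustration only, not the kernel code) of the
 * "grow a lock-protected frame array on demand" pattern from the patch. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MIN_ORDER 9	/* 512 frames == 2MB with 4KiB pages */

static unsigned long frames_early[1UL << MIN_ORDER];	/* initial static array */
static unsigned long *frames = frames_early;
static unsigned int frames_order = MIN_ORDER;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;	/* lock stand-in */

/* Make sure the shared frame array can hold 1 << order entries. */
static int ensure_frames(unsigned int order)
{
	unsigned long *new_array, *old_array;

	if (order <= frames_order)
		return 0;

	/* Allocate outside the lock (vmalloc() may sleep in the kernel). */
	new_array = malloc(sizeof(unsigned long) * (1UL << order));
	if (!new_array)
		return -1;

	pthread_mutex_lock(&lock);
	if (order > frames_order) {
		/* Re-check under the lock: someone else may have grown the
		 * array meanwhile. The initial static array is never freed. */
		old_array = (frames == frames_early) ? NULL : frames;
		frames = new_array;
		frames_order = order;
	} else {
		/* Lost the race against an equal or larger resize. */
		old_array = new_array;
	}
	pthread_mutex_unlock(&lock);

	/* Free outside the lock, like vfree() after spin_unlock_irqrestore(). */
	free(old_array);
	return 0;
}

int main(void)
{
	if (ensure_frames(10) == 0)	/* order 10 == 4MB == 1024 frames */
		printf("frame array now holds %lu entries\n", 1UL << frames_order);
	return 0;
}

The initial static array keeps early-boot callers working before vmalloc() is
usable (the early_boot_irqs_disabled check in the patch), the size check is
repeated under the lock in case another caller grew the array in the meantime,
and a replaced buffer is freed only after the lock has been dropped.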