Re: irq_build_affinity_masks() allocates improper affinity if num_possible_cpus() > num_present_cpus()?

On Tue, Oct 06 2020 at 09:37, David Woodhouse wrote:
> On Tue, 2020-10-06 at 06:47 +0000, Dexuan Cui wrote:
>> PS2, the latest Hyper-V provides only one ACPI MADT entry to a 1-CPU VM,
>> so the issue described above can not reproduce there.
>
> It seems fairly easy to reproduce in qemu with -smp 1,maxcpus=128 and a
> virtio-blk drive, having commented out the 'desc->pre_vectors++' around
> line 130 of virtio_pci_common.c so that it does actually spread them.
>
> [    0.836252] i=0, affi = 0,65-127
> [    0.836672] i=1, affi = 1-64
> [    0.837905] virtio_blk virtio1: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
> [    0.839080] vda: detected capacity change from 0 to 21474836480
>
> In my build I had to add 'nox2apic' because I think I actually already
> fixed this for the x2apic + no-irq-remapping case with the max_affinity
> patch series¹. But mostly by accident.
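
For reference, the tweak described above lands roughly here in
drivers/virtio/virtio_pci_common.c; this is an approximate excerpt of
vp_request_msix_vectors() from kernels of that era, so treat the exact
surrounding context as an assumption:

	/*
	 * Approximate excerpt, vp_request_msix_vectors() in
	 * drivers/virtio/virtio_pci_common.c. The config-change interrupt
	 * is normally reserved as a pre_vector and therefore excluded from
	 * the managed spreading; commenting out the increment is what makes
	 * the spread in the log above cover all queue vectors.
	 */
	if (desc) {
		flags |= PCI_IRQ_AFFINITY;
		desc->pre_vectors++;	/* <- the line commented out for the repro */
	}

	err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors,
					     nvectors, flags, desc);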

There is nothing to fix. It's intentional behaviour. Managed interrupts
and their spreading (aside from the rather odd spread here) work that way.

And virtio-blk works perfectly fine with that.
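
What "work that way" means in practice: a driver opts into managed
spreading by handing a struct irq_affinity to the vector allocation, and
the resulting masks are built over the possible CPUs, not only the present
ones. A minimal sketch under those assumptions follows;
pci_alloc_irq_vectors_affinity() and struct irq_affinity are the real
kernel API, while the driver name and queue count are invented for
illustration:

	#include <linux/interrupt.h>
	#include <linux/pci.h>

	#define MYDRV_MAX_QUEUES 8	/* invented queue count, illustration only */

	/* Hypothetical probe helper showing managed-affinity allocation. */
	static int mydrv_setup_irqs(struct pci_dev *pdev)
	{
		/*
		 * Reserve vector 0 for non-queue work (config/admin); the
		 * remaining vectors become managed and are spread across the
		 * *possible* CPUs, which is why a mask can name CPUs that are
		 * not present yet.
		 */
		struct irq_affinity affd = {
			.pre_vectors = 1,
		};
		int nvecs;

		nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, MYDRV_MAX_QUEUES + 1,
						       PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
						       &affd);
		if (nvecs < 0)
			return nvecs;

		/*
		 * A managed vector whose mask contains only offline or
		 * not-present CPUs is kept shut down by the core and started
		 * automatically when one of those CPUs comes online; the
		 * driver does not need to handle that case.
		 */
		return 0;
	}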

Thanks,

        tglx



