> Actually, we are hotplugging not only endpoints, but nested PCIe
> switches as well: those are PCIe JBOD chassis (with NVMes and SAS
> drives).

From what I remember of playing with PCI hot-plug hardware and CardBus
extenders to PCI chassis many years ago, bus numbers were actually a big
problem. A surprising number of I/O cards contain a bridge, so they need
a bus number if hot-plugged. (In spite of the marketing hype, hot-plug
basically let you remove a working card and replace it with an identical
one! Modern drivers and OSes are likely to handle the errors from faulty
cards better.)

The initial allocation of bus numbers could easily spread out the unused
bus numbers. Doing the same for memory resources may have other problems
(you probably don't want to allocate the entire range to the root hub?).

Are the bus numbers exposed as keys/filenames in /sys? In that case
changing them after boot is problematic; you'd need a map of virtual bus
numbers to physical ones.

As well as your 'suspend/resume' sequence, it should be possible to send
a card-remove/insert sequence to an idle driver.

There is another case where BARs might need moving. The PCIe spec
doesn't allow very long (200ms?) between reset removal (which might be
close to power-on) and the point at which endpoints must answer config
cycles. If you have to load a large FPGA from a serial EEPROM this is
actually a real problem. So if the OS does a full probe of the PCI
devices it may find endpoints (or even bridges) that weren't actually
there when the BIOS (or equivalent) did its earlier probe. Finding space
in the middle of the PCI devices for an endpoint that wants two 1MB BARs
is unlikely to succeed!

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
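[On the /sys question above: Linux does bake the bus number into the
sysfs device directory name, which is DDDD:BB:DD.F
(domain:bus:device.function in hex) under /sys/bus/pci/devices/. A
minimal sketch of pulling the pieces apart; the helper name parse_bdf
is made up for illustration:]

```python
import re

# Linux names PCI devices under /sys/bus/pci/devices/ as
# DDDD:BB:DD.F (domain:bus:device.function, hex digits).
BDF_RE = re.compile(r"^([0-9a-f]{4}):([0-9a-f]{2}):([0-9a-f]{2})\.([0-7])$")

def parse_bdf(name):
    """Split a sysfs PCI device name into (domain, bus, dev, fn)."""
    m = BDF_RE.match(name)
    if m is None:
        raise ValueError("not a PCI device name: %s" % name)
    return tuple(int(x, 16) for x in m.groups())

# The bus number (0x3c here) is part of the directory name itself,
# so renumbering buses after boot would invalidate existing /sys paths.
print(parse_bdf("0000:3c:00.1"))  # -> (0, 60, 0, 1)
```

[So any post-boot renumbering scheme would indeed need the
virtual-to-physical bus map mentioned above, since userspace holds
these names open.]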