On Wed, May 26, 2021 at 08:11:41PM +0800, Kai-Heng Feng wrote:
> On Wed, May 26, 2021 at 10:49 AM Keith Busch <kbusch@xxxxxxxxxx> wrote:
> >
> > On Wed, May 26, 2021 at 10:02:27AM +0800, Koba Ko wrote:
> > > On Tue, May 25, 2021 at 3:44 PM Christoph Hellwig <hch@xxxxxx> wrote:
> > > >
> > > > On Thu, May 20, 2021 at 11:33:15AM +0800, Koba Ko wrote:
> > > > > After resume, host can't change power state of the closed controller
> > > > > from D3cold to D0.
> > > >
> > > > Why?
> > >
> > > As per Kai-Heng said, it's a regression introduced by commit
> > > b97120b15ebd ("nvme-pci: use simple suspend when a HMB is enabled").
> > > The affected NVMe is using HMB.
> >
> > That really doesn't add up. The mentioned commit restores the driver
> > behavior for HMB drives that existed prior to d916b1be94b6d from kernel
> > 5.3. Is that NVMe device broken in pre-5.3 kernels, too?
>
> Quite likely. The system in question is a late 2020 Ice Lake laptop,
> so it was released after the 5.3 kernel.

This is just a mess.  We had to disable the sensible power state based
suspend on these systems because Intel broke it by just cutting the
power off.  And now the shutdown based one doesn't work either because
it can't handle d3cold.

Somehow we need to stop Intel and the integrators from doing stupid
things, and I'm not sure how.  But degrading all systems even more is
just a bad idea, so I fear we'll need a quirk again.

Can you figure out by switching the cards if this is the fault of the
platform or the nvme device?

>
> Kai-Heng
---end quoted text---