Re: linux-6.2-rc4+ hangs on poweroff/reboot: Bisected

On Sun, 19 Feb 2023 at 04:55, Chris Clayton <chris2553@xxxxxxxxxxxxxx> wrote:
>
>
>
> On 18/02/2023 15:19, Chris Clayton wrote:
> >
> >
> > On 18/02/2023 12:25, Karol Herbst wrote:
> >> On Sat, Feb 18, 2023 at 1:22 PM Chris Clayton <chris2553@xxxxxxxxxxxxxx> wrote:
> >>>
> >>>
> >>>
> >>> On 15/02/2023 11:09, Karol Herbst wrote:
> >>>> On Wed, Feb 15, 2023 at 11:36 AM Linux regression tracking #update
> >>>> (Thorsten Leemhuis) <regressions@xxxxxxxxxxxxx> wrote:
> >>>>>
> >>>>> On 13.02.23 10:14, Chris Clayton wrote:
> >>>>>> On 13/02/2023 02:57, Dave Airlie wrote:
> >>>>>>> On Sun, 12 Feb 2023 at 00:43, Chris Clayton <chris2553@xxxxxxxxxxxxxx> wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 10/02/2023 19:33, Linux regression tracking (Thorsten Leemhuis) wrote:
> >>>>>>>>> On 10.02.23 20:01, Karol Herbst wrote:
> >>>>>>>>>> On Fri, Feb 10, 2023 at 7:35 PM Linux regression tracking (Thorsten
> >>>>>>>>>> Leemhuis) <regressions@xxxxxxxxxxxxx> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> On 08.02.23 09:48, Chris Clayton wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> I'm assuming  that we are not going to see a fix for this regression before 6.2 is released.
> >>>>>>>>>>>
> >>>>>>>>>>> Yeah, looks like it. That's unfortunate, but happens. But there is still
> >>>>>>>>>>> time to fix it and there is one thing I wonder:
> >>>>>>>>>>>
> >>>>>>>>>>> Did any of the nouveau developers look at the netconsole captures Chris
> >>>>>>>>>>> posted more than a week ago to check if they somehow help to track down
> >>>>>>>>>>> the root of this problem?
> >>>>>>>>>>
> >>>>>>>>>> I did now and I can't spot anything. I think at this point it would
> >>>>>>>>>> make sense to dump the active tasks/threads via sysrq keys to see if
> >>>>>>>>>> any is in a weird state preventing the machine from shutting down.
> >>>>>>>>>
> >>>>>>>>> Many thx for looking into it!
> >>>>>>>>
> >>>>>>>> Yes, thanks Karol.
> >>>>>>>>
> >>>>>>>> Attached is the output from dmesg when this block of code:
> >>>>>>>>
> >>>>>>>>         /bin/mount /dev/sda7 /mnt/sda7
> >>>>>>>>         /bin/mountpoint /proc || /bin/mount /proc
> >>>>>>>>         /bin/dmesg -w > /mnt/sda7/sysrq.dmesg.log &
> >>>>>>>>         /bin/echo t > /proc/sysrq-trigger
> >>>>>>>>         /bin/sleep 1
> >>>>>>>>         /bin/sync
> >>>>>>>>         /bin/sleep 1
> >>>>>>>>         kill $(pidof dmesg)
> >>>>>>>>         /bin/umount /mnt/sda7
> >>>>>>>>
> >>>>>>>> is executed immediately before /sbin/reboot is called as the final step of rebooting my system.
> >>>>>>>>
> >>>>>>>> I hope this is what you were looking for, but if not, please let me know what you need
> >>>>>>
> >>>>>> Thanks Dave. [...]
> >>>>> FWIW, in case anyone strands here in the archives: the msg was
> >>>>> truncated. The full post can be found in a new thread:
> >>>>>
> >>>>> https://lore.kernel.org/lkml/e0b80506-b3cf-315b-4327-1b988d86031e@xxxxxxxxxxxxxx/
> >>>>>
> >>>>> Sadly it seems the info "With runpm=0, both reboot and poweroff work on
> >>>>> my laptop." didn't bring us much further to a solution. :-/ I don't
> >>>>> really like it, but for regression tracking I'm now putting this on the
> >>>>> back-burner, as a fix is not in sight.
> >>>>>
> >>>>> #regzbot monitor:
> >>>>> https://lore.kernel.org/lkml/e0b80506-b3cf-315b-4327-1b988d86031e@xxxxxxxxxxxxxx/
> >>>>> #regzbot backburner: hard to debug and apparently rare
> >>>>> #regzbot ignore-activity
> >>>>>
> >>>>
> >>>> yeah.. this bug looks a little annoying. Sadly the only Turing-based
> >>>> laptop I got doesn't work on Nouveau because of firmware-related
> >>>> issues, and we probably need to get updated ones from Nvidia here :(
> >>>>
> >>>> But it's a bit weird that the kernel doesn't shut down, because I don't
> >>>> see anything in the logs which would prevent that from happening.
> >>>> Unless it's waiting on one of the tasks to complete, but none of them
> >>>> looked in any way nouveau-related.
> >>>>
> >>>> If somebody else has any fancy kernel debugging tips here to figure
> >>>> out why it hangs, that would be very helpful...
> >>>>
> >>>
> >>> I think I've figured this out. It's to do with how my system is configured. I do have an initrd, but the only thing on
> >>> it is the CPU microcode which, it is recommended, should be loaded early. The absence of the NVidia firmware from an
> >>> initrd doesn't matter because the drivers for the hardware that need to load firmware are all built as modules, so by
> >>> the time the devices are configured via udev, the root partition is mounted and the drivers can get at the firmware.
> >>>
> >>> I've found, by turning on nouveau debug and taking a video of the screen as the system shuts down, that nouveau seems to
> >>> be trying to run the scrubber very very late in the shutdown process. The problem is that by this time, I think the root
> >>> partition, and thus the scrubber binary, have become inaccessible.
> >>>
> >>> I seem to have two choices: either make the firmware accessible on an initrd, or unload the module in a shutdown script
> >>> before the scrubber binary becomes inaccessible. The latter is the workaround I have implemented whilst the
> >>> problem I reported has been under investigation. For simplicity, I think I'll promote my workaround to being the
> >>> permanent solution (a sketch of such a script follows this message).
> >>>
> >>> So, apologies (and thanks) to everyone whose time I have taken up with this non-bug.
> >>>
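
For reference, a minimal sketch of the shutdown-script workaround Chris describes above. The paths, the module name handling, and the exact hook point are illustrative and depend on the local init setup; this is not a tested script:

        #!/bin/sh
        # Unload nouveau early in the shutdown sequence, while the root
        # filesystem (and with it /lib/firmware, where nvdec/scrubber.bin
        # lives) is still readable, so the driver's teardown never has to
        # request firmware after the disk has gone away.
        /sbin/modprobe -r nouveau || \
                /bin/echo "nouveau still in use; leaving it loaded"

Note that modprobe -r will refuse to unload a module that is still in use (for example while a display server still has the DRM device open), so the script needs to run after the graphical session is torn down but before filesystems are unmounted.
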
> >>
> >> Well.. nouveau shouldn't prevent the system from shutting down if the
> >> firmware file isn't available. Or at least it should print a
> >> warning/error. Mind messing with the code a little to see if skipping
> >> it kind of works? I probably can also come up with a patch by next
> >> week.
> >>
> > Well, I'd love to, but a quick glance at the code caused me to bump into this obscenity:
> >
> > int
> > gm200_flcn_reset_wait_mem_scrubbing(struct nvkm_falcon *falcon)
> > {
> >         nvkm_falcon_mask(falcon, 0x040, 0x00000000, 0x00000000);
> >
> >         if (nvkm_msec(falcon->owner->device, 10,
> >                 if (!(nvkm_falcon_rd32(falcon, 0x10c) & 0x00000006))
> >                         break;
> >         ) < 0)
> >                 return -ETIMEDOUT;
> >
> >         return 0;
> > }
> >
> > nvkm_msec is #defined in terms of nvkm_usec, which in turn is #defined in terms of nvkm_nsec, where the loop that the
> > break relates to appears (a simplified sketch of the idiom follows after this quote).
>
> I think someone who knows the code needs to look at this. What I can confirm is that after a freeze, I waited for 90
> seconds for a timeout to occur, but it didn't.
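
For anyone puzzling over the construct quoted above, here is a small, self-contained sketch of the general idiom: a statement-expression macro that pastes the caller's statements into a polling loop, so a break inside them ends the wait early, while exhausting the time budget makes the whole expression evaluate to a negative errno. This is an illustration of the pattern only, written as userspace C with made-up names (poll_msec/poll_usec/poll_nsec, and a 10 ms budget mirroring the quoted call); it is not the real nouveau timer macros:

        #include <errno.h>
        #include <stdio.h>
        #include <time.h>

        static long long now_ns(void)
        {
                struct timespec ts;
                clock_gettime(CLOCK_MONOTONIC, &ts);
                return ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        /* The caller's statements ("body") run on every iteration; a break
         * inside them leaves the loop with _result still 0 (success), while
         * running past the time limit sets -ETIMEDOUT instead. */
        #define poll_nsec(limit_ns, body...) ({                               \
                long long _start = now_ns(), _result = 0;                     \
                do {                                                          \
                        if (now_ns() - _start >= (limit_ns)) {                \
                                _result = -ETIMEDOUT;                         \
                                break;                                        \
                        }                                                     \
                        body                                                  \
                } while (1);                                                  \
                _result;                                                      \
        })

        #define poll_usec(limit_us, body...) poll_nsec((limit_us) * 1000LL, ##body)
        #define poll_msec(limit_ms, body...) poll_usec((limit_ms) * 1000LL, ##body)

        int main(void)
        {
                int scrub_done = 0;     /* pretend hardware that never finishes */

                /* Same shape as the quoted wait: poll for 10 ms, break early
                 * once the condition clears, otherwise report a timeout. */
                if (poll_msec(10,
                        if (scrub_done)
                                break;
                ) < 0)
                        printf("timed out waiting for scrubber\n");
                return 0;
        }

If the real macros behave like this sketch, the quoted wait is capped at roughly 10 ms, so the 90-second hang Chris observed would have to come from somewhere other than this loop.
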
Hey,

Are you able to try the attached patch for me please?

Thanks,
Ben.

>
>
> Chris
> >>>
> >>>>> Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
> >>>>> --
> >>>>> Everything you wanna know about Linux kernel regression tracking:
> >>>>> https://linux-regtracking.leemhuis.info/about/#tldr
> >>>>> That page also explains what to do if mails like this annoy you.
> >>>>>
> >>>>> #regzbot ignore-activity
> >>>>>
> >>>>
> >>>
> >>
From 931ace529a73d3b1427b366a70635de9ab3adf0f Mon Sep 17 00:00:00 2001
From: Ben Skeggs <bskeggs@xxxxxxxxxx>
Date: Mon, 20 Feb 2023 14:39:21 +1000
Subject: [PATCH] drm/nouveau/fb/gp102-: cache scrubber binary on first load

Signed-off-by: Ben Skeggs <bskeggs@xxxxxxxxxx>
---
 .../gpu/drm/nouveau/include/nvkm/subdev/fb.h  |  3 +-
 drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c |  8 +++-
 .../gpu/drm/nouveau/nvkm/subdev/fb/ga100.c    |  2 +-
 .../gpu/drm/nouveau/nvkm/subdev/fb/ga102.c    | 21 ++++------
 .../gpu/drm/nouveau/nvkm/subdev/fb/gp102.c    | 41 +++++++------------
 .../gpu/drm/nouveau/nvkm/subdev/fb/gv100.c    |  4 +-
 drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h |  3 +-
 .../gpu/drm/nouveau/nvkm/subdev/fb/tu102.c    |  4 +-
 8 files changed, 36 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h
index c5a4f49ee206..01a22a13b452 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h
@@ -2,6 +2,7 @@
 #ifndef __NVKM_FB_H__
 #define __NVKM_FB_H__
 #include <core/subdev.h>
+#include <core/falcon.h>
 #include <core/mm.h>
 
 /* memory type/access flags, do not match hardware values */
@@ -33,7 +34,7 @@ struct nvkm_fb {
 	const struct nvkm_fb_func *func;
 	struct nvkm_subdev subdev;
 
-	struct nvkm_blob vpr_scrubber;
+	struct nvkm_falcon_fw vpr_scrubber;
 
 	struct {
 		struct page *flush_page;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
index bac7dcc4c2c1..0955340cc421 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
@@ -143,6 +143,10 @@ nvkm_fb_mem_unlock(struct nvkm_fb *fb)
 	if (!fb->func->vpr.scrub_required)
 		return 0;
 
+	ret = nvkm_subdev_oneinit(subdev);
+	if (ret)
+		return ret;
+
 	if (!fb->func->vpr.scrub_required(fb)) {
 		nvkm_debug(subdev, "VPR not locked\n");
 		return 0;
@@ -150,7 +154,7 @@ nvkm_fb_mem_unlock(struct nvkm_fb *fb)
 
 	nvkm_debug(subdev, "VPR locked, running scrubber binary\n");
 
-	if (!fb->vpr_scrubber.size) {
+	if (!fb->vpr_scrubber.fw.img) {
 		nvkm_warn(subdev, "VPR locked, but no scrubber binary!\n");
 		return 0;
 	}
@@ -229,7 +233,7 @@ nvkm_fb_dtor(struct nvkm_subdev *subdev)
 
 	nvkm_ram_del(&fb->ram);
 
-	nvkm_blob_dtor(&fb->vpr_scrubber);
+	nvkm_falcon_fw_dtor(&fb->vpr_scrubber);
 
 	if (fb->sysmem.flush_page) {
 		dma_unmap_page(subdev->device->dev, fb->sysmem.flush_page_addr,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c
index 5098f219e3e6..a7456e786463 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c
@@ -37,5 +37,5 @@ ga100_fb = {
 int
 ga100_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb)
 {
-	return gp102_fb_new_(&ga100_fb, device, type, inst, pfb);
+	return gf100_fb_new_(&ga100_fb, device, type, inst, pfb);
 }
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c
index 5a21b0ae4595..dd476e079fe1 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c
@@ -25,25 +25,20 @@
 #include <engine/nvdec.h>
 
 static int
-ga102_fb_vpr_scrub(struct nvkm_fb *fb)
+ga102_fb_oneinit(struct nvkm_fb *fb)
 {
-	struct nvkm_falcon_fw fw = {};
-	int ret;
+	struct nvkm_subdev *subdev = &fb->subdev;
 
-	ret = nvkm_falcon_fw_ctor_hs_v2(&ga102_flcn_fw, "mem-unlock", &fb->subdev, "nvdec/scrubber",
-					0, &fb->subdev.device->nvdec[0]->falcon, &fw);
-	if (ret)
-		return ret;
+	nvkm_falcon_fw_ctor_hs_v2(&ga102_flcn_fw, "mem-unlock", subdev, "nvdec/scrubber",
+				  0, &subdev->device->nvdec[0]->falcon, &fb->vpr_scrubber);
 
-	ret = nvkm_falcon_fw_boot(&fw, &fb->subdev, true, NULL, NULL, 0, 0);
-	nvkm_falcon_fw_dtor(&fw);
-	return ret;
+	return gf100_fb_oneinit(fb);
 }
 
 static const struct nvkm_fb_func
 ga102_fb = {
 	.dtor = gf100_fb_dtor,
-	.oneinit = gf100_fb_oneinit,
+	.oneinit = ga102_fb_oneinit,
 	.init = gm200_fb_init,
 	.init_page = gv100_fb_init_page,
 	.init_unkn = gp100_fb_init_unkn,
@@ -51,13 +46,13 @@ ga102_fb = {
 	.ram_new = ga102_ram_new,
 	.default_bigpage = 16,
 	.vpr.scrub_required = tu102_fb_vpr_scrub_required,
-	.vpr.scrub = ga102_fb_vpr_scrub,
+	.vpr.scrub = gp102_fb_vpr_scrub,
 };
 
 int
 ga102_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb)
 {
-	return gp102_fb_new_(&ga102_fb, device, type, inst, pfb);
+	return gf100_fb_new_(&ga102_fb, device, type, inst, pfb);
 }
 
 MODULE_FIRMWARE("nvidia/ga102/nvdec/scrubber.bin");
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c
index 2658481d575b..14d942e8b857 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c
@@ -29,18 +29,7 @@
 int
 gp102_fb_vpr_scrub(struct nvkm_fb *fb)
 {
-	struct nvkm_subdev *subdev = &fb->subdev;
-	struct nvkm_falcon_fw fw = {};
-	int ret;
-
-	ret = nvkm_falcon_fw_ctor_hs(&gm200_flcn_fw, "mem-unlock", subdev, NULL,
-				     "nvdec/scrubber", 0, &subdev->device->nvdec[0]->falcon, &fw);
-	if (ret)
-		return ret;
-
-	ret = nvkm_falcon_fw_boot(&fw, subdev, true, NULL, NULL, 0, 0x00000000);
-	nvkm_falcon_fw_dtor(&fw);
-	return ret;
+	return nvkm_falcon_fw_boot(&fb->vpr_scrubber, &fb->subdev, true, NULL, NULL, 0, 0x00000000);
 }
 
 bool
@@ -51,10 +40,21 @@ gp102_fb_vpr_scrub_required(struct nvkm_fb *fb)
 	return (nvkm_rd32(device, 0x100cd0) & 0x00000010) != 0;
 }
 
+int
+gp102_fb_oneinit(struct nvkm_fb *fb)
+{
+	struct nvkm_subdev *subdev = &fb->subdev;
+
+	nvkm_falcon_fw_ctor_hs(&gm200_flcn_fw, "mem-unlock", subdev, NULL, "nvdec/scrubber",
+			       0, &subdev->device->nvdec[0]->falcon, &fb->vpr_scrubber);
+
+	return gf100_fb_oneinit(fb);
+}
+
 static const struct nvkm_fb_func
 gp102_fb = {
 	.dtor = gf100_fb_dtor,
-	.oneinit = gf100_fb_oneinit,
+	.oneinit = gp102_fb_oneinit,
 	.init = gm200_fb_init,
 	.init_remapper = gp100_fb_init_remapper,
 	.init_page = gm200_fb_init_page,
@@ -64,23 +64,10 @@ gp102_fb = {
 	.ram_new = gp100_ram_new,
 };
 
-int
-gp102_fb_new_(const struct nvkm_fb_func *func, struct nvkm_device *device,
-	      enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb)
-{
-	int ret = gf100_fb_new_(func, device, type, inst, pfb);
-	if (ret)
-		return ret;
-
-	nvkm_firmware_load_blob(&(*pfb)->subdev, "nvdec/scrubber", "", 0,
-				&(*pfb)->vpr_scrubber);
-	return 0;
-}
-
 int
 gp102_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb)
 {
-	return gp102_fb_new_(&gp102_fb, device, type, inst, pfb);
+	return gf100_fb_new_(&gp102_fb, device, type, inst, pfb);
 }
 
 MODULE_FIRMWARE("nvidia/gp102/nvdec/scrubber.bin");
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c
index 0e3c0a8f5d71..4d8a286a7a34 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c
@@ -31,7 +31,7 @@ gv100_fb_init_page(struct nvkm_fb *fb)
 static const struct nvkm_fb_func
 gv100_fb = {
 	.dtor = gf100_fb_dtor,
-	.oneinit = gf100_fb_oneinit,
+	.oneinit = gp102_fb_oneinit,
 	.init = gm200_fb_init,
 	.init_page = gv100_fb_init_page,
 	.init_unkn = gp100_fb_init_unkn,
@@ -45,7 +45,7 @@ gv100_fb = {
 int
 gv100_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb)
 {
-	return gp102_fb_new_(&gv100_fb, device, type, inst, pfb);
+	return gf100_fb_new_(&gv100_fb, device, type, inst, pfb);
 }
 
 MODULE_FIRMWARE("nvidia/gv100/nvdec/scrubber.bin");
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
index f517751f94ac..726c30c8bf95 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
@@ -83,8 +83,7 @@ int gm200_fb_init_page(struct nvkm_fb *);
 void gp100_fb_init_remapper(struct nvkm_fb *);
 void gp100_fb_init_unkn(struct nvkm_fb *);
 
-int gp102_fb_new_(const struct nvkm_fb_func *, struct nvkm_device *, enum nvkm_subdev_type, int,
-		  struct nvkm_fb **);
+int gp102_fb_oneinit(struct nvkm_fb *);
 bool gp102_fb_vpr_scrub_required(struct nvkm_fb *);
 int gp102_fb_vpr_scrub(struct nvkm_fb *);
 
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c
index be82af0364ee..b8803c124c3b 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c
@@ -31,7 +31,7 @@ tu102_fb_vpr_scrub_required(struct nvkm_fb *fb)
 static const struct nvkm_fb_func
 tu102_fb = {
 	.dtor = gf100_fb_dtor,
-	.oneinit = gf100_fb_oneinit,
+	.oneinit = gp102_fb_oneinit,
 	.init = gm200_fb_init,
 	.init_page = gv100_fb_init_page,
 	.init_unkn = gp100_fb_init_unkn,
@@ -45,7 +45,7 @@ tu102_fb = {
 int
 tu102_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb)
 {
-	return gp102_fb_new_(&tu102_fb, device, type, inst, pfb);
+	return gf100_fb_new_(&tu102_fb, device, type, inst, pfb);
 }
 
 MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin");
-- 
2.35.1

