On 04.02.25 22:06, Asahi Lina wrote:
> On 2/5/25 5:10 AM, David Hildenbrand wrote:
>> On 04.02.25 18:59, Asahi Lina wrote:
>>> On 2/4/25 11:38 PM, David Hildenbrand wrote:
>>>>> If the answer is "no" then that's fine. It's still an unsafe
>>>>> function, and we need to document in the safety section that it
>>>>> should only be used for memory that is either known to be allocated
>>>>> and pinned and will not be freed while the `struct page` is
>>>>> borrowed, or memory that is reserved and not owned by the buddy
>>>>> allocator. In practice, correct use would therefore not race with
>>>>> memory hot-remove anyway.
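To make that contract concrete, here is a minimal sketch of how such a
binding and its safety section could look. The names (`BorrowedPage`,
`from_pfn`) are made up for illustration and are not the actual
interface from this series:

    use core::marker::PhantomData;

    /// Illustrative stand-in for a borrowed `struct page`.
    pub struct BorrowedPage<'a> {
        // In-kernel this would wrap a `*mut bindings::page`.
        _page: *mut core::ffi::c_void,
        _lifetime: PhantomData<&'a ()>,
    }

    impl<'a> BorrowedPage<'a> {
        /// Borrows the `struct page` backing a page frame number.
        ///
        /// # Safety
        ///
        /// For the whole lifetime `'a`, the memory at `pfn` must be
        /// either:
        /// - allocated and pinned, so that it cannot be freed (or
        ///   hot-removed) while the `struct page` is borrowed, or
        /// - reserved memory that is not owned by the buddy allocator.
        pub unsafe fn from_pfn(pfn: u64) -> Self {
            // A real implementation would do the pfn_to_page() dance
            // here; this placeholder only exists to carry the contract.
            Self {
                _page: pfn as usize as *mut core::ffi::c_void,
                _lifetime: PhantomData,
            }
        }
    }

The point is simply that the obligations listed above end up in the
`# Safety` section, where every caller has to discharge them.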
>>>>> This is already the case for the drm/asahi use case, where the pfns
>>>>> looked up will only ever be one of:
>>>>> - GEM objects that are mapped to the GPU and whose physical pages
>>>>>   are therefore pinned (and the VM is locked while this happens so
>>>>>   the objects cannot become unpinned out from under the running
>>>>>   code),
>>>> How exactly are these pages pinned/obtained?
>>> Under the hood it's shmem. For pinning, it winds up at
>>> `drm_gem_get_pages()`, which I think does a `shmem_read_folio_gfp()`
>>> on a mapping set as unevictable.
>> Thanks. So we grab another folio reference via
>> shmem_read_folio_gfp()->shmem_get_folio_gfp().
>> Hm, I wonder if we might end up holding folios residing in
>> ZONE_MOVABLE/MIGRATE_CMA longer than we should.
>> Compared to memfd_pin_folios(), which simulates FOLL_LONGTERM and makes
>> sure to migrate pages out of ZONE_MOVABLE/MIGRATE_CMA.
>> But that's a different discussion, just pointing it out, maybe I'm
>> missing something :)
> I think this is a little over my head. Though I only just realized that
> we seem to be keeping the GEM objects pinned forever, even after unmap,
> in the drm-shmem core API (I see no drm-shmem entry point that would
> allow the sgt to be freed and its corresponding pages ref to be dropped,
> other than a purge of purgeable objects or final destruction of the
> object). I'll poke around, since this feels wrong; I thought we were
> supposed to be able to have shrinker support for swapping out whole GPU
> VMs in the modern GPU MM model, but I guess there's no implementation of
> that for gem-shmem drivers yet...?
I recall that shrinker as well, ... or at least a discussion around it.
[...]
>> If it's only for crash dumps etc., which might even be opt-in, it makes
>> the whole thing a lot less scary. Maybe this could be opt-in somewhere,
>> to "unlock" this interface? Just an idea.
> Just to make sure we're on the same page, I don't think there's anything
> to unlock on the Rust abstraction side (this series). At the end of the
> day, if nothing else, the unchecked interface (which the regular
> non-crash page table management code uses for performance) will let you
> use any pfn you want; it's up to documentation and human review to
> specify how it should be used by drivers. What Rust gives us here is the
> mandatory `unsafe {}`, so any attempt to use this API will necessarily
> stick out during review as potentially dangerous code that needs extra
> scrutiny.
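To illustrate, this is how such a call site reads, again using the
made-up `BorrowedPage::from_pfn` from the sketch further up rather than
the real API; the `unsafe` block plus its SAFETY comment is exactly the
part that jumps out in review:

    /// Hypothetical crash-dump helper in a driver.
    fn dump_one_page(pfn: u64, out: &mut [u8]) {
        // SAFETY: `pfn` was read from the page tables of a GPU VM that
        // is locked, and the GEM objects backing that VM are pinned, so
        // the page cannot be freed or migrated while the borrow is live.
        let _page = unsafe { BorrowedPage::from_pfn(pfn) };

        // ... copy the page contents into `out` ...
        let _ = out;
    }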
> For the client driver itself, I could gate the devcoredump stuff behind
> a module parameter or something... but I don't think it's really worth
> it. We don't have a way to reboot the firmware or recover from this
> condition (platform limitations), so end users are stuck rebooting to
> get back a usable machine anyway. If something goes wrong in the
> crashdump code and the machine oopses or locks up worse... it doesn't
> really make much of a difference for normal end users. I don't think
> this will ever really happen given the constraints I described, but if
> somehow it does (some other bug somewhere?), well... the machine was
> already in an unrecoverable state anyway.
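If an opt-in were wanted after all, a build-time gate would be roughly
as cheap as a module parameter. A rough sketch, with a made-up Kconfig
symbol (in-tree, the `#[cfg(CONFIG_...)]` flags come from the kernel
build system):

    #[cfg(CONFIG_DRM_ASAHI_COREDUMP)]
    fn generate_crashdump() {
        // ... walk the crashed GPU VM and hand the dump to devcoredump ...
    }

    #[cfg(not(CONFIG_DRM_ASAHI_COREDUMP))]
    fn generate_crashdump() {
        // Compiled out: a firmware crash is still fatal, we simply don't
        // capture anything for debugging.
    }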
> It would be nice to have userspace tooling deployed by default that
> saves off the devcoredump somewhere, so we can have a chance at
> debugging hard-to-hit firmware crashes... if it's opt-in, it would only
> really be useful for developers and CI machines.
>> Is this something that possibly kdump can save or analyze? Because that
>> is our default "oops, the kernel crashed, let's dump the old memory
>> contents so we can analyze them" mechanism on production systems.
> kdump does not work on Apple ARM systems because kexec is broken and
> cannot be fully fixed, due to multiple platform/firmware limitations. A
> very limited version of kexec might work well enough for kdump, but I
> don't think anyone has looked into making that work yet...
>> but ... I am not familiar with devcoredump. So I don't know when/how it
>> runs, and whether the source system is still alive (and remains alive
>> -- in contrast to a kernel crash).
> Devcoredump just makes the dump available via /sys so it can be
> collected by the user. The system is still alive; the GPU is just dead,
> and all future GPU job submissions fail. You can still SSH in or (at
> least in theory, if enough moving parts are graceful about it) VT-switch
> to a TTY. The display controller is not part of the GPU; it is separate
> hardware.
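For reference, collecting a pending dump from userspace is just a sysfs
copy. A minimal sketch (the /sys/class/devcoredump layout and its
per-dump "data" file are the standard devcoredump interface; the
/var/crash target is arbitrary):

    use std::{fs, io, path::Path};

    fn main() -> io::Result<()> {
        let class = Path::new("/sys/class/devcoredump");
        if !class.exists() {
            return Ok(()); // devcoredump not built in, or nothing pending
        }
        for entry in fs::read_dir(class)? {
            let dev = entry?.path();
            let data = dev.join("data");
            if !data.exists() {
                continue;
            }
            let name = dev.file_name().unwrap().to_string_lossy();
            let out = format!("/var/crash/{}.devcoredump", name);
            // Reading "data" streams the dump; writing to it would
            // dismiss it instead.
            fs::copy(&data, &out)?;
            println!("saved {} to {}", dev.display(), out);
        }
        Ok(())
    }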
Thanks for all the details (and sorry for the delay, I'm on PTO until
Monday ... :)
(Regarding the other mail:) Adding that stuff to Rust just so we have a
devcoredump that ideally wouldn't exist is a bit unfortunate.
So I'm curious: we do have /proc/kcore, where we already do all of the
required filtering, only allowing reads of memory that is online, not
hwpoisoned, etc.
makedumpfile already supports /proc/kcore.
Would it be possible to avoid Devcoredump completely either by dumping
/proc/kcore directly or by having a user-space script that walks the
page tables to dump the content purely based on /proc/kcore?
If relevant memory ranges are inaccessible from /proc/kcore, we could
look into exposing them.
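To get a feel for what /proc/kcore exposes, here is a small user-space
sketch that just lists its PT_LOAD segments; a real dumper would then
read the ranges it cares about at the corresponding p_offset. It parses
the ELF64 header by hand and assumes a little-endian kernel:

    use std::fs::File;
    use std::io::{self, Read, Seek, SeekFrom};

    fn u16le(b: &[u8]) -> u16 { u16::from_le_bytes([b[0], b[1]]) }
    fn u64le(b: &[u8]) -> u64 {
        u64::from_le_bytes([b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7]])
    }

    fn main() -> io::Result<()> {
        // /proc/kcore is an ELF64 core file; opening it needs root.
        let mut f = File::open("/proc/kcore")?;
        let mut ehdr = [0u8; 64];
        f.read_exact(&mut ehdr)?;
        assert_eq!(&ehdr[0..4], &b"\x7fELF"[..]);

        let e_phoff = u64le(&ehdr[32..40]);
        let e_phentsize = u16le(&ehdr[54..56]) as u64;
        let e_phnum = u16le(&ehdr[56..58]) as u64;

        for i in 0..e_phnum {
            f.seek(SeekFrom::Start(e_phoff + i * e_phentsize))?;
            let mut ph = [0u8; 56];
            f.read_exact(&mut ph)?;
            let p_type = u32::from_le_bytes([ph[0], ph[1], ph[2], ph[3]]);
            if p_type != 1 {
                continue; // only PT_LOAD segments are interesting here
            }
            let p_offset = u64le(&ph[8..16]);
            let p_vaddr = u64le(&ph[16..24]);
            let p_filesz = u64le(&ph[32..40]);
            println!(
                "load: vaddr {:#018x} size {:#x} at file offset {:#x}",
                p_vaddr, p_filesz, p_offset
            );
        }
        Ok(())
    }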
--
Cheers,
David / dhildenb