On Sun, May 3, 2020 at 5:57 AM David Laight <David.Laight@xxxxxxxxxx> wrote:
>
> From: Linus Torvalds
> > Sent: 01 May 2020 19:29
> ...
> > And as DavidL pointed out - if you ever have "iomem" as a source or
> > destination, you need yet another case. Not because they can take
> > another kind of fault (although on some platforms you have the machine
> > checks for that too), but because they have *very* different
> > performance profiles (and the ERMS "rep movsb" sucks baby donkeys
> > through a straw).
>
> I was actually thinking that the nvdimm accesses need to be treated
> much more like (cached) memory mapped io space than normal system
> memory.
> So treating them the same as "iomem" and then having access functions
> that report access failures (which the current readq() doesn't)
> might make sense.

While I agree that something like copy_mc_iomem_to_{user,kernel} could
have users, nvdimm is not one of them.

> If you are using memory that 'might fail' for kernel code or data
> you really get what you deserve.

nvdimms are no less "might fail" than DRAM; recall that some nvdimms
are just DRAM with a platform promise that their contents are battery
backed.

> OTOH system response to PCIe errors is currently rather problematic.
> Mostly reads time out and return ~0u.
> This can be checked for and, if possibly valid, a second location read.

Yes, the ambiguous ~0u return needs careful handling.