> Date: Fri, 21 Oct 2022 21:32:48 +0200
> From: Ondřej Jirman <megi@xxxxxx>
>
> On Fri, Oct 21, 2022 at 12:48:15PM -0400, Peter Geis wrote:
> > On Fri, Oct 21, 2022 at 11:39 AM Ondřej Jirman <megi@xxxxxx> wrote:
> > >
> > > On Fri, Oct 21, 2022 at 09:07:50AM -0400, Peter Geis wrote:
> > > > Good Morning Heiko,
> > > >
> > > > Apologies for just getting to this, I'm still in the middle of moving
> > > > and just got my lab set back up.
> > > >
> > > > I've tested this patch series and it leads to the same regression with
> > > > NVMe drives. A loop of md5sum on two identical 4GB random files
> > > > produces the following results:
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand.img
> > > > fad97e91da8d4fd554c895cafa89809b  test-rand2.img
> > > > 2d56a7baa05c38535f4c19a2b371f90a  test-rand.img
> > > > 74e8e6f93d7c3dc3ad250e91176f5901  test-rand2.img
> > > > 25cfcfecf4dd529e4e9fbbe2be482053  test-rand.img
> > > > 74e8e6f93d7c3dc3ad250e91176f5901  test-rand2.img
> > > > b9637505bf88ed725f6d03deb7065dab  test-rand.img
> > > > f7437e88d524ea92e097db51dce1c60d  test-rand2.img
> > > >
> > > > Before this patch series:
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand.img
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand2.img
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand.img
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand2.img
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand.img
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand2.img
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand.img
> > > > d11cf0caa541b72551ca22dc5bef2de0  test-rand2.img
> > > >
> > > > Though I do love where this patch is going and would like to see if it
> > > > can be made to work, in its current form it does not.
> > >
> > > Thanks for the test. Can you please also test v1? Also please share
> > > lspci -vvv of your nvme drive, so that we can see allocated address
> > > ranges, etc.
> >
> > Good catch, with your patch as is, the following issue crops up:
> > Region 0: Memory at 300000000 (64-bit, non-prefetchable) [size=16K]
> > Region 2: I/O ports at 1000 [disabled] [size=256]
> >
> > However, with a simple fix, we can get this:
> > Region 0: Memory at 300000000 (64-bit, non-prefetchable) [virtual] [size=16K]
> > Region 2: I/O ports at 1000 [virtual] [size=256]
> >
> > and with it a working NVMe drive.
> >
> > Change the following range:
> > 0x02000000 0x0 0x40000000 0x3 0x00000000 0x0 0x40000000>;
> > to
> > 0x02000000 0x0 0x00000000 0x3 0x00000000 0x0 0x40000000>;
>
> I've already tried this, but this unfortunately breaks the wifi cards
> (those only use the I/O space), maybe because the I/O and memory address
> spaces now overlap, I don't know. That's why I used the 1 GiB offset for
> the memory space.

Meanwhile, I have an NVMe drive that only works if mmio is completely
untranslated. This is an ADATA SX8000NP drive, which uses a Silicon
Motion SM2260 controller.

So for me, a working configuration has the following "ranges":

	ranges = <0x01000000 0x0 0x00000000 0x3 0x3fff0000 0x0 0x00010000>,
		 <0x02000000 0x0 0xf4000000 0x0 0xf4000000 0x0 0x02000000>,
		 <0x03000000 0x3 0x10000000 0x3 0x10000000 0x0 0x2fff0000>;

This also needs changes to the "reg" property:

	reg = <0x3 0xc0000000 0x0 0x00400000>,
	      <0x0 0xfe260000 0x0 0x00010000>,
	      <0x3 0x00000000 0x0 0x10000000>;

Now admittedly, this is with OpenBSD running on EDK2 UEFI firmware from

  https://github.com/jaredmcneill/quartz64_uefi

that I modified to pass through the device tree and modify the ranges
as above.
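For reference, each "ranges" entry above follows the standard PCI
host-bridge binding: <flags, 64-bit PCI address, 64-bit CPU address,
64-bit size>, where a flags cell of 0x01000000 means I/O space,
0x02000000 a 32-bit memory window, and 0x03000000 a 64-bit memory
window (all non-prefetchable here). Annotated, the working
configuration reads like this (a sketch; the comments only restate
what the numbers already say):

	ranges = /* 64 KiB of I/O space, PCI 0x0 -> CPU 0x3_3fff0000 */
		 <0x01000000 0x0 0x00000000 0x3 0x3fff0000 0x0 0x00010000>,
		 /* 32 MiB of 32-bit mmio, identity-mapped at
		    0xf4000000, i.e. completely untranslated */
		 <0x02000000 0x0 0xf4000000 0x0 0xf4000000 0x0 0x02000000>,
		 /* 768 MiB minus 64 KiB of 64-bit mmio, identity-mapped
		    at 0x3_10000000 */
		 <0x03000000 0x3 0x10000000 0x3 0x10000000 0x0 0x2fff0000>;

Note that in both memory windows the PCI and CPU addresses are equal,
so no address translation takes place on the bus. The "reg" change
grows the third entry, which is the config space window if one assumes
the usual dbi/apb/config ordering of the Rockchip DWC PCIe binding, to
256 MiB at CPU address 0x3_00000000.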
But the way my OpenBSD driver sets up the address translation windows
matches what the mainline Linux driver does. I picked the ranges above
to match the EDK2 configuration. But it is a setup that maximizes the
32-bit mmio window.

Cheers,

Mark

> > I still haven't tested this with other cards yet, and another patch
> > that does similar work I've tested successfully as well with NVMe
> > drives. I'll have to get back to you on the results of greater
> > testing.
> >
> > Very Respectfully,
> > Peter Geis
> > >
> > > kind regards,
> > > o.
> > >
> > > > Very Respectfully,
> > > > Peter Geis
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
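For completeness, the one-line "ranges" change Peter suggests in the
thread decodes as follows, shown side by side (an annotated sketch;
only the relevant 32-bit memory entry of the rk356x node is shown, and
the comments merely restate the thread's observations):

	/* Original: the memory window is translated, PCI address
	   0x40000000 -> CPU address 0x3_00000000, 1 GiB */
	ranges = <0x02000000 0x0 0x40000000 0x3 0x00000000 0x0 0x40000000>;

	/* Peter's fix: PCI address 0x00000000 -> CPU 0x3_00000000, so
	   the BARs show up as [virtual] and the NVMe drive works; but
	   the PCI memory addresses now start at 0, alongside the I/O
	   addresses, which Ondřej suspects is why wifi cards break */
	ranges = <0x02000000 0x0 0x00000000 0x3 0x00000000 0x0 0x40000000>;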