On Tue, 2024-05-21 at 00:01 -0300, Wedson Almeida Filho wrote:
> On Mon, 20 May 2024 at 23:07, Dave Airlie <airlied@xxxxxxxxx> wrote:
> >
> > >
> > > Wedson wrote a similar abstraction in the past
> > > (`rust/kernel/io_mem.rs` in the old `rust` branch), with a
> > > compile-time `SIZE` -- it is probably worth taking a look.
> > >
> >
> > Just on this point, we can't know in advance what size the IO BARs
> > are at compile time.
> >
> > The old method just isn't useful for real devices with runtime IO
> > BAR sizes.
>
> The compile-time `SIZE` in my implementation is a minimum size.
>
> Attempts to read/write with constants within that size (offset +
> size) were checked at compile time, that is, they would have zero
> additional runtime cost when compared to C. Reads/writes beyond the
> minimum would have to be checked at runtime.

We looked at this implementation. Its disadvantage is that it moves
the responsibility for setting that minimum size to the driver
programmer. Andreas Hindborg is currently using it for rnvme [1].

I believe that the driver programmer in Rust should not be
responsible for controlling such sensitive parameters (one could far
too easily provide invalid values); the subsystem (e.g. PCI) should
control them, because it knows the exact resource lengths.

The only way to set the actual, real value is through subsystem code.
And since we (i.e., currently, the driver programmer) have to go
through that code anyway, we can just use it from the very beginning
and obtain the exact, valid parameters there.

That way, driver programmers would always get a correct IoMem, would
have a harder time breaking things, and wouldn't have to think about
minimum lengths at all.

P.

[1] https://github.com/metaspace/linux/blob/rnvme/drivers/block/rnvme.rs#L580
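
P.S.: To make the trade-off concrete, here is a rough userspace
sketch of both halves of the argument. This is *not* the old
io_mem.rs code (that predates inline const blocks and used the
kernel's build_assert! machinery, and this sketch needs Rust 1.79+);
the names, the pci module, and iomap_bar() are all made up for
illustration.

/// Carries Wedson's compile-time minimum plus the real,
/// runtime-known resource length.
pub struct IoMem<const MIN_SIZE: usize> {
    base: *mut u8, // the ioremap'ed base in the real kernel
    len: usize,    // actual resource length, e.g. the BAR size
}

impl<const MIN_SIZE: usize> IoMem<MIN_SIZE> {
    /// An access at a constant offset inside the minimum: rejected
    /// at build time if out of bounds, and needs no runtime bounds
    /// check otherwise. (Offsets assumed 4-byte aligned.)
    pub fn readl<const OFFSET: usize>(&self) -> u32 {
        const { assert!(OFFSET + 4 <= MIN_SIZE) };
        // SAFETY: OFFSET + 4 <= MIN_SIZE <= self.len, the latter
        // enforced by the subsystem constructor below.
        unsafe { self.base.add(OFFSET).cast::<u32>().read_volatile() }
    }

    /// An access beyond the minimum: can only be checked against
    /// the real length at runtime.
    pub fn try_readl(&self, offset: usize) -> Option<u32> {
        if offset.checked_add(4)? > self.len {
            return None;
        }
        // SAFETY: offset + 4 <= self.len was checked above.
        Some(unsafe { self.base.add(offset).cast::<u32>().read_volatile() })
    }
}

/// The part I am arguing for: only subsystem code, which knows the
/// real resource length, can hand out an IoMem. The driver never
/// supplies a length itself, so it cannot supply a wrong one.
pub mod pci {
    pub struct Device; // stand-in for the PCI core's device type

    impl Device {
        /// Stand-in for pci_iomap()/pci_resource_len().
        fn map_resource(&self, _bar: u32) -> Option<(*mut u8, usize)> {
            None
        }

        pub fn iomap_bar<const MIN_SIZE: usize>(
            &self,
            bar: u32,
        ) -> Result<super::IoMem<MIN_SIZE>, ()> {
            let (base, len) = self.map_resource(bar).ok_or(())?;
            if len < MIN_SIZE {
                return Err(()); // BAR smaller than the promised minimum
            }
            Ok(super::IoMem { base, len })
        }
    }
}

The point of the second half is that the subsystem validates the
driver's compile-time minimum against the real BAR length once, at
map time. After that, every constant-offset access is free and every
dynamic access is checked against the true length, and a driver can
never end up holding an IoMem whose claimed size is wrong.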