On Tue, Sep 04, 2018 at 09:26:00AM +0100, Richard W.M. Jones wrote:
> On Tue, Sep 04, 2018 at 10:49:40AM +1000, Dave Chinner wrote:
> > On Mon, Sep 03, 2018 at 11:49:19PM +0100, Richard W.M. Jones wrote:
> > > [This is silly and has no real purpose except to explore the limits.
> > > If that offends you, don't read the rest of this email.]
> >
> > We do this quite frequently ourselves, even if it is just to remind
> > ourselves how long it takes to wait for millions of IOs to be done.
> >
> > > I am trying to create an XFS filesystem in a partition of approx
> > > 2^63 - 1 bytes to see what happens.
> >
> > Should just work. You might find problems with the underlying
> > storage, but the XFS side of things should just work.
>
> Great!  How do you test this normally?

The usual: it's turtles all the way down.

> I'm assuming you must use a
> virtual device and don't have actual 2^6x storage systems around?

Right. I use XFS on XFS configurations. i.e. XFS is the storage pool
on physical storage (SSDs in RAID0 in this case). The disk images are
sparse files w/ extent size hints to minimise fragmentation and
allocation overhead. And the QEMU config uses AIO/DIO so it can do
concurrent, deeply queued async read/write IO from the guest to the
host - the guest block device behaves exactly like it is hosted on
real disks.

Apart from reflink and extent size hints, I'm using the defaults for
everything.

> > > I guess this indicates a real bug in mkfs.xfs.
> >
> > Did it fail straight away?  Or after a long time?  Can you trap this
> > in gdb and post a back trace so we know where it is coming from?
>
> Yes I think I was far too hasty declaring this a problem with mkfs.xfs
> last night.  It turns out that NBD on the wire can only describe a few
> different errors and maps any other error to -EINVAL, which is likely

Urk. It should map them to -EIO, because then we know it's come from
the IO layer and isn't a problem related to userspace passing the
kernel invalid parameters.
Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx