On 12/14, Christoph Hellwig wrote:
> On Wed, Dec 13, 2023 at 08:41:32AM -0800, Jaegeuk Kim wrote:
> > I don't have any concern about keeping the same ioprio on writes, since
> > handheld devices are mostly sensitive to reads. So, if you have other
> > use-cases for zoned writes which require a different ioprio on writes,
> > I think you can suggest a knob for users to control it.
>
> Get out of your little handheld world. In Linux we need a generally
> usable I/O stack, and any feature exposed by the kernel will be used
> quite differently than you imagine.
>
> Just like people will add reordering to the I/O stack that's not there
> right now (in addition to the ones your testing doesn't hit). That
> doesn't mean we should avoid them - you generally get better performance
> by not reordering without a good reason (like throttling), but especially
> in error handling paths or resource constrained environments they will
> happen all over. We've had this whole discussion with the I/O barriers
> that did not work for exactly the same reasons.
>
> > > > it is essential to place the data per file to get better bandwidth.
> > > > And for NAND-based storage, the filesystem is the right place to do
> > > > more efficient garbage collection, based on the known data
> > > > locations.
> > >
> > > And that is a perfectly fine match for zone append.
> >
> > How does that work if the device gives back random LBAs for adjacent
> > data in a file? And how do we make those LBAs sequential again?
>
> Why would your device pick random LBAs? If you send a zone append to a
> zone it will be written at the write pointer, which is absolutely not
> random. All I/O written in a single write is going to be sequential,
> so just like for all other devices, doing large sequential writes is
> important. Multiple writes can get reordered, but if you heavily hit
> the same zone you'd get the same effect in the filesystem allocator
> too.

How can you guarantee the device does not give back any random LBAs? What
would be the selling point of zone append to end users? Are you sure it can
sustain better write throughput over time? Have you considered how to
implement it on the device side, e.g. the FTL mapping overhead and the
garbage collection that leads to tail latencies?

My takeaway on the two approaches would be:

                     zone_append         zone_write
                     -----------         ----------
  LBA                from FTL            from filesystem
  FTL mapping        Page-map            Zone-map
  SRAM/DRAM needs    Large               Small
  FTL GC             Required            Not required
  Tail latencies     Exist               None
  GC efficiency      Worse               Better
  Longevity          As-is               Longer
  Discard cmd        Required            Not required
  Block complexity   Small               Large
  Failure cases      Fewer               Exist
  Fsck               Don't know          F2FS-TOOLS support
  Filesystem         BTRFS support(?)    F2FS support

Given this, I chose zone_write, especially for mobile devices, since we can
recover unaligned writes in the corner cases with fsck. The biggest benefit
is getting rid of the FTL page-mapping overhead, which significantly
improves random read IOPS on low-end storage that lacks SRAM (see the
back-of-the-envelope sketch below). And the longer device lifetime from
mitigating garbage collection overhead matters even more in the mobile
world.

If there's any flag or knob that we can set, IMO, that'd be enough.
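To make the zone-append semantics quoted above concrete, here is a minimal
in-kernel sketch, assuming the bio API of recent (~v6.x) kernels;
submit_zone_append() and zone_append_end_io() are illustrative names, not
code from any existing driver:

	#include <linux/bio.h>
	#include <linux/blkdev.h>
	#include <linux/printk.h>

	/* Completion handler: only here does the submitter learn the LBA. */
	static void zone_append_end_io(struct bio *bio)
	{
		/*
		 * For REQ_OP_ZONE_APPEND the block layer updates
		 * bi_iter.bi_sector on completion to the sector the data
		 * actually landed at (the zone's write pointer at dispatch).
		 */
		sector_t written = bio->bi_iter.bi_sector;

		pr_info("appended at sector %llu\n",
			(unsigned long long)written);
		bio_put(bio);
	}

	/* Submitter addresses the zone, not a specific LBA. */
	static void submit_zone_append(struct block_device *bdev,
				       sector_t zone_start_sector,
				       struct page *page, unsigned int len)
	{
		struct bio *bio = bio_alloc(bdev, 1, REQ_OP_ZONE_APPEND,
					    GFP_NOIO);

		/* Zone start sector, not the write pointer. */
		bio->bi_iter.bi_sector = zone_start_sector;
		bio_add_zone_append_page(bio, page, len, 0);
		bio->bi_end_io = zone_append_end_io;
		submit_bio(bio);
	}

The flip side is exactly my question above: the LBA only materializes at
completion time, so the filesystem has to tolerate interleaved completions
placing adjacent file data at non-adjacent LBAs.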
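On the "FTL mapping" and "SRAM/DRAM needs" rows, a back-of-the-envelope
sketch; the numbers (1 TiB device, 4 KiB mapping pages, 1 GiB zones, 4-byte
map entries) are illustrative assumptions only, real FTLs vary:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t cap   = 1ULL << 40;   /* 1 TiB capacity */
		uint64_t page  = 4096;         /* page-map granularity */
		uint64_t zone  = 1ULL << 30;   /* zone-map granularity */
		uint64_t entry = 4;            /* bytes per map entry */

		/* Page-mapped FTL: one entry per 4 KiB page. */
		printf("page map: %llu MiB\n",
		       (unsigned long long)(cap / page * entry >> 20));
		/* Zone-mapped FTL: one entry per zone. */
		printf("zone map: %llu KiB\n",
		       (unsigned long long)(cap / zone * entry >> 10));
		return 0;
	}

That prints roughly 1024 MiB of page map versus 4 KiB of zone map, which is
the gap that lets a zone-mapped FTL live in the small SRAM of low-end parts
while a page-mapped one needs DRAM or host memory.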
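And on the "Fsck" row, a rough userspace sketch of the kind of check
involved: read the device write pointer with the BLKREPORTZONE ioctl and
compare it with where the filesystem thinks the zone tail is.
check_zone_wp() and expected_wp are hypothetical names; this is not the
actual f2fs-tools logic:

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/blkzoned.h>

	/* Returns 1 on write-pointer mismatch, 0 if consistent, -1 on error. */
	static int check_zone_wp(int fd, __u64 zone_start, __u64 expected_wp)
	{
		struct blk_zone_report *rep;
		int mismatch;

		rep = calloc(1, sizeof(*rep) + sizeof(struct blk_zone));
		if (!rep)
			return -1;
		rep->sector = zone_start;
		rep->nr_zones = 1;

		if (ioctl(fd, BLKREPORTZONE, rep) < 0 || rep->nr_zones != 1) {
			free(rep);
			return -1;
		}

		/*
		 * If the device wp ran ahead of the filesystem's record, the
		 * tail holds writes that never made it into metadata; fsck
		 * can scan that range and recover or discard it.
		 */
		mismatch = rep->zones[0].wp != expected_wp;
		free(rep);
		return mismatch;
	}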
> > Sorry, I needed to stop reading here, as you're totally biased. This is
> > not the case in JEDEC, as Bart spent multiple years synchronizing the
> > technical benefits that we've seen across UFS vendors as well as OEMs.
>
> *lol* There is no more fucked up corporate pressure standard committee
> than the storage standards in JEDEC. That's why no one actually takes
> them seriously.