On Tue, Sep 8, 2020 at 8:48 PM hch@xxxxxxxxxxxxx <hch@xxxxxxxxxxxxx> wrote:
>
> On Mon, Sep 07, 2020 at 12:31:42PM +0530, Kanchan Joshi wrote:
> > But there are use-cases which benefit from supporting zone-append on
> > raw block-dev path.
> > Certain user-space log-structured/cow FS/DB will use the device that
> > way. Aerospike is one example.
> > Pass-through is synchronous, and we lose the ability to use io-uring.
>
> So use zonefs, which is designed exactly for that use case.

Not specific to zone-append, but in general it may not be good to lock
new features/interfaces to ZoneFS alone, given that the direct block
interface has merits of its own. Mapping one file to one zone is good
for some use cases but limiting for others. Some user-space FS/DBs
would be more efficient (less metadata, less indirection) with the
freedom to decide file-to-zone mapping/placement themselves.

- RocksDB and other LSM-style DBs would map an SSTable to a zone, but
  an SSTable may be too small (initially) and may grow too large
  (after compaction) for a single zone.

- The internal parallelism of a single zone is a design choice and
  depends on the drive. Writing multiple zones in parallel (striped/
  RAID fashion) can give better performance than writing to a single
  zone. In that case one would want a file that seamlessly combines
  multiple zones in a striped fashion (rough sketches below).

Also, the simple-copy TP seems harder to fit into ZoneFS than into the
block device. The new command needs one NVMe drive, a list of source
LBAs, and one destination LBA. In ZoneFS we would be dealing with N+1
file descriptors (N source zone files plus one destination zone file)
for that, while with the block interface a single file descriptor
represents the entire device. With more zone files we also pay
open/close overhead (see the last sketch below).
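To make the block-device merit concrete: a minimal sketch (error
handling elided, /dev/nvme0n1 assumed) of zone discovery through the
existing BLKREPORTZONE ioctl, which is all an application needs in
order to decide its own file-to-zone placement:

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

/*
 * Fill 'zones' with up to 'nr' zones starting at sector 0 of the
 * device behind 'fd'. Returns the number of zones reported, or -1
 * on error. blk_zone start/len/wp values are in 512B sectors.
 */
static int report_zones(int fd, struct blk_zone *zones, unsigned int nr)
{
	struct blk_zone_report *rep;
	int ret = -1;

	rep = calloc(1, sizeof(*rep) + nr * sizeof(struct blk_zone));
	if (!rep)
		return -1;
	rep->sector = 0;
	rep->nr_zones = nr;
	if (ioctl(fd, BLKREPORTZONE, rep) == 0) {
		for (unsigned int i = 0; i < rep->nr_zones; i++)
			zones[i] = rep->zones[i];
		ret = rep->nr_zones;
	}
	free(rep);
	return ret;
}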
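Continuing that sketch, the striped write from the second bullet: the
application stripes units round-robin across a set of zones and tracks
every write pointer by hand, which is exactly the bookkeeping that
zone-append on the raw device would remove. stripe_write() and its
parameters are made up for illustration, not a proposal:

/*
 * Stripe 'len' bytes of 'buf' round-robin across 'nzones' sequential
 * zones, one 'unit' per zone per pass. 'fd' is the whole-device fd,
 * and 'unit' must meet any O_DIRECT alignment. Each write must land
 * on the zone's write pointer, so the cached wp is advanced by hand.
 */
static void stripe_write(int fd, struct blk_zone *z, int nzones,
			 const char *buf, size_t len, size_t unit)
{
	for (size_t off = 0; off < len; off += unit) {
		struct blk_zone *zone = &z[(off / unit) % nzones];

		pwrite(fd, buf + off, unit, zone->wp << 9); /* sectors to bytes */
		zone->wp += unit >> 9;
	}
}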
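And the fd-count point: simple-copy has no kernel interface yet, so
the submission itself is omitted here, but merely naming the zones
involved already differs between the two interfaces (the mount point
and zone numbers below are made up; needs <stdio.h> in addition to
the includes above):

/* Block interface: a single fd addresses the N source zones and the
 * destination zone as LBA ranges on one device. */
int bfd = open("/dev/nvme0n1", O_RDWR);

/* ZoneFS: one file per zone, so N source fds plus one destination fd,
 * each paying an open()/close() round trip. Returns the destination
 * fd and fills 'sfd' with the N source fds. */
static int open_zonefs_fds(const unsigned int *src_zone, int n, int *sfd)
{
	char path[64];

	for (int i = 0; i < n; i++) {
		snprintf(path, sizeof(path), "/mnt/zonefs/seq/%u", src_zone[i]);
		sfd[i] = open(path, O_RDONLY);
	}
	return open("/mnt/zonefs/seq/42", O_WRONLY); /* destination zone file */
}

--
Joshi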