On Thu, Mar 03, 2022 at 09:33:06PM +0000, Matias Bjørling wrote:
> > -----Original Message-----
> > From: Adam Manzanares <a.manzanares@xxxxxxxxxxx>
> >
> > > However, an end-user application should not (in my opinion) have to
> > > deal with this. It should use helper functions from a library that
> > > provides the appropriate abstraction to the application, such that
> > > applications don't have to care about either the specific zone
> > > capacity/size or multiple resets. This is similar to how file
> > > systems work with file system semantics. For example, a file can
> > > span multiple extents on disk, but all an application sees is the
> > > file semantics.
> >
> > I don't want to go so far as to say what the end user application
> > should and should not do.
>
> Consider it as a best practice example. Another typical example is
> that one should avoid extensive flushes to disk if the application
> doesn't need persistence for each I/O it issues.

Although I was sad to see that there is no raw access to a zoned block
device, the above makes me kind of happy that this is the case today.
Why? Because zoned storage devices carry implicit data management
requirements that regular SSDs do not, and if those are not considered
and *very well documented*, in agreement with us all, we can end up
with folks surprised by these requirements.

An application can't directly manage these objects today, so that's not
even possible. And in fact it's not even clear if / how we'll get
there. So in the meantime, if an application wants something as close
as possible to the block layer, the only way to access zones directly
is through the VFS, via zonefs. I can hear people cringing even if you
are miles away.

If we want an improvement upon this, whatever API we come up with
*must* clearly embrace and document the requirements / responsibilities
above.
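
To make the zonefs point concrete, below is a rough sketch of what an
application has to do just to append to and reset a single zone through
zonefs today. The mount point is made up, error handling is trimmed,
and I'm going from the documented zonefs semantics: sequential zone
files live under seq/, writes must be O_DIRECT appends at the write
pointer (which is the file size), and truncating the file to zero
resets the zone.

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	void *buf;
	int fd;

	/* Hypothetical mount point; seq/0 is the first sequential zone. */
	fd = open("/mnt/zonefs/seq/0", O_WRONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT wants an aligned buffer; 4K alignment assumed here. */
	if (posix_memalign(&buf, 4096, 4096)) {
		close(fd);
		return 1;
	}
	memset(buf, 0xab, 4096);

	/*
	 * Sequential zone files only take append writes: the file size
	 * is the zone's write pointer, so that is the only valid offset.
	 */
	if (fstat(fd, &st) == 0 &&
	    pwrite(fd, buf, 4096, st.st_size) < 0)
		perror("pwrite");

	/* Resetting the zone is spelled "truncate to zero". */
	if (ftruncate(fd, 0) < 0)
		perror("ftruncate");

	free(buf);
	close(fd);
	return 0;
}

That's a lot of ceremony for a single write compared to a plain block
device, and every bit of it comes from the medium, not from zonefs.
Whatever raw-access API comes next inherits all of it, which is exactly
why it has to be spelled out up front.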