Re: [LSF/MM/BPF BoF] BoF for Zoned Storage

On 04.03.2022 14:55, Luis Chamberlain wrote:
On Sat, Mar 05, 2022 at 09:42:57AM +1100, Dave Chinner wrote:
On Fri, Mar 04, 2022 at 02:10:08PM -0800, Luis Chamberlain wrote:
> On Fri, Mar 04, 2022 at 11:10:22AM +1100, Dave Chinner wrote:
> > On Wed, Mar 02, 2022 at 04:56:54PM -0800, Luis Chamberlain wrote:
> > > Thinking proactively about LSFMM, regarding just Zone storage..
> > >
> > > I'd like to propose a BoF for Zoned Storage. The point of it is
> > > to address the existing pain points we have; by taking advantage of
> > > having folks in the room we can likely settle on things faster than
> > > we otherwise would, which could take years.
> > >
> > > I'll throw at least one topic out:
> > >
> > >   * Raw access for zone append for microbenchmarks:
> > >   	- are we really happy with the status quo?
> > > 	- if not, what outlets do we have?
> > >
> > > I think the nvme passthrough stuff deserves its own shared
> > > discussion though, and should not be part of the BoF.
> >
> > Reading through the discussion on this thread, perhaps this session
> > should be used to educate application developers about how to use
> > ZoneFS so they never need to manage low level details of zone
> > storage such as enumerating zones, controlling write pointers
> > safely for concurrent IO, performing zone resets, etc.
>
> I'm not even sure users are really aware that, given that the zone
> capacity can be different from the zone size and btrfs uses the zone
> size to compute the filesystem size, the reported size is a flat out
> lie.

Sorry, I don't see how what btrfs does with zone management has
anything to do with using Zonefs to get direct, raw IO access to
individual zones.

You are right for direct raw access. My point was that even for
filesystem design I don't think the communication on expectations is
clear. Similar computations need to be managed by the filesystem
design, for instance.

Dave,

I understand that you point to ZoneFS for this. It is true that it was
presented at the time as the way to do raw zone access from
user-space.
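
For reference, this is roughly what raw zone access through zonefs
looks like from an application. This is only a minimal sketch: the
mount point and zone index are made up, and I am assuming a zonefs
mount created with mkzonefs, a 4K logical block size, and the usual
zonefs rule that sequential zone files take direct I/O appended at the
current file size.

/* Minimal zonefs append sketch (hypothetical mount point and zone index). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/zns/seq/0", O_WRONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* For a sequential zone file, st_size is the write pointer offset. */
	struct stat st;
	if (fstat(fd, &st) < 0) {
		perror("fstat");
		return 1;
	}

	/* Direct I/O: buffer and size aligned to the logical block size. */
	void *buf;
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0xab, 4096);

	/* Append one block at the current write pointer. */
	if (pwrite(fd, buf, 4096, st.st_size) != 4096) {
		perror("pwrite");
		return 1;
	}

	/* Resetting the zone is just truncating the file back to zero:
	 * ftruncate(fd, 0);
	 */

	free(buf);
	close(fd);
	return 0;
}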

However, there are no users of ZoneFS for ZNS devices that I am aware
of (for SMR this may be a different story). The main open-source
implementations for RocksDB that are being used in production (ZenFS
and xZTL) rely on either raw zoned block access or the generic NVMe
char device (/dev/ngXnY). This is because being able to do zone
management from applications that already work with objects fits much
better.
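
For comparison, doing that zone management directly on the raw zoned
block device only needs the existing blkzoned ioctls. A rough sketch
(the device path is made up and error handling is trimmed):

/* Report the first zone and reset it on a raw zoned block device. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/blkzoned.h>

int main(void)
{
	int fd = open("/dev/nvme0n2", O_RDWR);	/* hypothetical ZNS namespace */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct blk_zone_report *rep =
		calloc(1, sizeof(*rep) + sizeof(struct blk_zone));
	rep->sector = 0;
	rep->nr_zones = 1;
	if (ioctl(fd, BLKREPORTZONE, rep) < 0) {
		perror("BLKREPORTZONE");
		return 1;
	}

	/* Zone capacity is only reported when BLK_ZONE_REP_CAPACITY is set. */
	__u64 cap = (rep->flags & BLK_ZONE_REP_CAPACITY) ?
		    rep->zones[0].capacity : rep->zones[0].len;
	printf("zone 0: len %llu sectors, cap %llu sectors, wp %llu\n",
	       (unsigned long long)rep->zones[0].len,
	       (unsigned long long)cap,
	       (unsigned long long)rep->zones[0].wp);

	/* Reset the zone write pointer. */
	struct blk_zone_range range = {
		.sector = rep->zones[0].start,
		.nr_sectors = rep->zones[0].len,
	};
	if (ioctl(fd, BLKRESETZONE, &range) < 0)
		perror("BLKRESETZONE");

	free(rep);
	close(fd);
	return 0;
}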

My point is that there is space for both ZoneFS and the raw zoned
block device. And regarding !PO2 zone sizes, these can be leveraged
both by btrfs and by the raw zoned block device.
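
To make the cap vs. size part concrete, here is the kind of offset
math software carries around today when the zone capacity is smaller
than a power-of-2 zone size; the sizes below are made-up examples:

/* Made-up example: 128 MiB power-of-2 zone size, 96 MiB zone capacity. */
#include <stdint.h>
#include <stdio.h>

#define ZONE_SIZE	(128ULL << 20)	/* LBA space consumed per zone */
#define ZONE_CAP	(96ULL << 20)	/* bytes actually writable per zone */

int main(void)
{
	uint64_t nr_zones = 1000;

	/* Computing the size from the zone size over-reports the space... */
	printf("from zone size: %llu MiB\n",
	       (unsigned long long)((nr_zones * ZONE_SIZE) >> 20));
	/* ...while only this much can ever be written. */
	printf("from zone cap:  %llu MiB\n",
	       (unsigned long long)((nr_zones * ZONE_CAP) >> 20));

	/* Mapping a usable byte offset to a device offset has to skip the
	 * per-zone hole; with zone size == capacity (!PO2 zone sizes) this
	 * collapses to a plain multiplication. */
	uint64_t usable_off = 5ULL << 30;		/* 5 GiB written */
	uint64_t zone = usable_off / ZONE_CAP;
	uint64_t dev_off = zone * ZONE_SIZE + usable_off % ZONE_CAP;
	printf("usable offset %llu -> device offset %llu\n",
	       (unsigned long long)usable_off,
	       (unsigned long long)dev_off);
	return 0;
}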


