Re: [LSF/MM/BPF BoF]: A host FTL for zoned block devices using UBLK

On Tue, Feb 07, 2023 at 04:01:41PM +0530, Nitesh Shetty wrote:
> On Mon, Feb 06, 2023 at 08:49:15PM +0800, Ming Lei wrote:
> > On Mon, Feb 06, 2023 at 10:00:20AM +0000, Hans Holmberg wrote:
> > > I think we're missing a flexible way of routing random-ish
> > > write workloads onto zoned storage devices. Implementing a UBLK
> > > target for this would be a great way to provide zoned storage
> > > benefits to a range of use cases. Creating a UBLK target would
> > > enable us to experiment and move fast, and once we arrive
> > > at a common, reasonably stable solution we could move it into
> > > the kernel.
> > 
> > Yeah, UBLK provides an easy way for fast prototyping.
> > 
> > > 
> > > We do have dm-zoned [3] in the kernel, but it requires bouncing
> > > non-sequential writes through conventional zones, resulting in a write
> > > amplification of 2x (which is not optimal for flash).
> > > 
> > > Fully random workloads make little sense to store on ZBDs, as a
> > > host FTL could not be expected to do better than what conventional block
> > > devices do today. Fully sequential writes are also well taken care of
> > > by conventional block devices.
> > > 
> > > The interesting stuff is what lies in between those extremes.
> > > 
> > > I would like to discuss how we could use UBLK to implement a
> > > common FTL with the right knobs to cater to a wide range of workloads
> > > that utilize raw block devices. We had some knobs in the now-dead pblk,
> > > an FTL for open-channel devices, but I think we could do way better than that.
> > > 
> > > Pblk did not require bouncing writes, and its knobs for over-provisioning
> > > and workload isolation could be implemented here as well. We could also add
> > > options for different garbage collection policies. In userspace it would also
> > > be easy to support different block indirection sizes, reducing logical-to-physical
> > > translation table memory overhead, as illustrated below.
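> > > 
> > > As a back-of-the-envelope illustration (the numbers are assumed for
> > > the example, not measured): with a 4 KiB mapping unit and 4-byte
> > > table entries, a 4 TiB device needs (4 TiB / 4 KiB) * 4 B = 4 GiB of
> > > translation table, while a 64 KiB indirection unit shrinks that to
> > > (4 TiB / 64 KiB) * 4 B = 256 MiB.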
> > > 
> > > Use cases for such an FTL include SSD caching stores such as Apache
> > > Traffic Server [1] and CacheLib [2]. CacheLib's block cache and the Apache
> > > Traffic Server storage workloads are *almost* zoned-block-device compatible
> > > and would need little translation overhead to perform very well on e.g.
> > > ZNS SSDs.
> > > 
> > > There are probably more use cases that would benefit.
> > > 
> > > It would also be a great research vehicle for academia. We've used dm-zap
> > > for this purpose [4] over the last couple of years, but it is not
> > > production-ready and is cumbersome to improve and maintain, as it is
> > > implemented as an out-of-tree device-mapper target.
> > 
> > Maybe it is the beginning of a generic open-source userspace SSD FTL,
> > which could be useful for people curious about SSD internals. I have
> > googled several times for such a toolkit to see if it could be ported to
> > UBLK easily. SSD simulators aren't great: a simulator isn't a disk and
> > can't handle real data & workloads. With such a project, SSD simulators
> > could become less useful, IMO.
> > 
> > > 
> > > ublk adds a bit of latency overhead, but I think this is acceptable, at
> > > least until we have a great, proven solution, which could then be turned
> > > into an in-kernel FTL.
> > 
> > We will keep improving the ublk IO path, and I am working on ublk
> > zero-copy. Once it is done, big-chunk IO latency could be reduced a lot.
> > 
> 
> Just curious: will this also involve running do_splice_direct*() in an async
> style like normal async read/write, instead of offloading to the io-wq context?

The idea is as follows:

- Add a new type of buffer (a splice buffer) to io_uring; this
buffer is populated into a bvec table (reusing io_mapped_ubuf) by
passing (splice_fd, offset, len) from the SQE.

- The buffer is filled from ublk's ->splice_read() with the help of
splice_direct_to_actor() over a direct pipe; we could probably add one
private splice flag so that ublk's ->splice_read() is only available
in-kernel (io_uring) & over a direct pipe.

- It requires that pipe buffer ownership is not transferred, so a
nop_pipe_buf_ops is needed for such usage; this works fine for ublk & fuse.

- The buffer can be allocated & populated from ->prep() of io_uring rw/net
requests, then handled just like READ[WRITE]_FIXED.

So it is like normal async read/write, but two rounds of page pinning are
avoided and one copy of the IO data is saved.

This approach is also flexible enough to allow read/write over any part of
the buffer.
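
A rough userspace-side sketch of how the flow could look (everything below
is hypothetical and only for illustration: the IORING_OP_SPLICE_BUF opcode
and the splice-buffer semantics do not exist in io_uring today, and the
names are invented; ring/splice_fd/backing_fd/len/offset/dest_off are
assumed to be set up elsewhere):

	/* Hypothetical: an SQE carrying (splice_fd, offset, len) asks the
	 * kernel to populate fixed-buffer slot 0 via ->splice_read() over
	 * a direct pipe, building the bvec table in place. */
	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
	io_uring_prep_rw(IORING_OP_SPLICE_BUF /* hypothetical opcode */,
			 sqe, splice_fd, NULL, len, offset);
	sqe->buf_index = 0;		/* fixed buffer slot to populate */
	sqe->flags |= IOSQE_IO_LINK;	/* order the consumer after it */

	/* Consume the populated slot just like WRITE_FIXED: the bvec
	 * table built above is reused directly, so the pages are neither
	 * pinned a second time nor copied. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write_fixed(sqe, backing_fd,
				  NULL /* addr unused in this sketch */,
				  len, dest_off, 0 /* buf_index */);

	io_uring_submit(&ring);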

Thanks,
Ming



