[LSF/MM ATTEND] Persistent Memory, SMR drives, blk-mq, O_ATOMIC

Hi,

I'd like to attend LSF/MM this year to discuss current work on
preparing the I/O stack (or, in places, getting it out of the way) for
upcoming hardware such as Shingled Magnetic Recording (SMR) drives and
Persistent Memory (PM).  I currently
have access to some prototype NVDIMMs, for which I've written a basic
block driver for testing purposes.  I'm also a member of SNIA's
Non-Volatile Memory Programming Technical Working Group (NVMP-TWG), so I
can provide insight into the discussions happening there.  I've also
consumed a large body of research on storage class memory programming
models.

On the SMR front, I'm interested in discussing the problems these
devices pose and the right place to implement solutions.  I think we
can expect strict-mode (ZBC) devices in the near future, and we need a
plan for how to access them.  We could implement a device-mapper
target, for example, that hides the SMR properties of the device.  Or
we could modify each file system to cope with the devices.  Or we
could create yet another file system (yuck!).  Discussion of the
intended use cases would also be welcome, as I've heard mixed messages
on archival versus general-purpose usage.

For blk-mq, I'm primarily interested in hearing from the folks who are
converting their drivers to the framework (NVMe and SCSI,
specifically): the problems they've encountered and the performance
gains to be had (if any).

If there's interest from Chris, I'd like to discuss his proposed
O_ATOMIC support and the various ways we might expose its limitations
(on granularity, and in the presence of stacking) to userspace.  Also,
as Dave Chinner mentioned, O_ATOMIC could be made available for all
devices and file systems via a dm target or an in-kernel library that
implements some form of logging (which could very well get rid of said
limitations, at the cost of performance).  A snippet of Dave's
suggestion is included below.

Cheers,
Jeff

Quoth Dave:

"Indeed, what I'd really like to be able to do from a filesystem
perspective is to be able to issue a group of related metadata IO as
an atomic write rather than marshaling it through a journal and then
issuing them as unrelated IO. If we have a special dm-target
underneath that can either issue it as an atomic write (if the
hardware supports it) or emulate it via a journal to maintain
multi-device atomicity requirements then we end up with a general
atomic write solution that filesystems can then depend on.

Once we have guaranteed support for atomic writes, then we can 
completely remove journalling from filesystem transaction engines
as the atomicity requirements can be met with atomic writes. And then
we can optimise things like fsync() for atomic writes.

IOWs, generic support for atomic writes will make a major difference
to filesystem algorithms. Hence, from my perspective, at this early
point in the development lifecycle having guaranteed atomic write
support via emulation is far more important than actually having
hardware that supports it... :)"



