Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

Neil Brown wrote:
> On Monday May 28, davidsen@xxxxxxx wrote:
>> There are two things I'm not sure you covered.
>>
>> First, disks which don't support flush but do have a "cache dirty"
>> status bit you can poll at times like shutdown. If there are no
>> drivers which support these, it can be ignored.
>
> There are really devices like that?  So to implement a flush, you have
> to stop sending writes and wait and poll - maybe poll every
> millisecond?

Yes, there really are (or were). But I don't think there are any drivers for them, so it's not an issue.
> That wouldn't be very good for performance....  maybe you just
> wouldn't bother with barriers on that sort of device?

That is why there are no drivers...
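
Just to make the polling case concrete, a driver-level "flush" on such a device would have to look something like the sketch below. The device struct and the cache-dirty hook are invented names for illustration, not any real driver interface:

#include <linux/delay.h>
#include <linux/errno.h>

struct my_dev;                                  /* placeholder device type */
int read_cache_dirty_bit(struct my_dev *dev);   /* invented status-bit hook */

/*
 * Hypothetical poll-based "flush" for a device with no flush command,
 * only a cache-dirty status bit: stop issuing writes, then poll until
 * the bit clears or we give up.
 */
static int flush_by_polling(struct my_dev *dev)
{
        int tries = 10000;              /* ~10 seconds of 1ms polls */

        while (read_cache_dirty_bit(dev)) {
                if (--tries == 0)
                        return -ETIMEDOUT;
                msleep(1);              /* poll every millisecond */
        }
        return 0;
}

Not pretty, and it only works if the caller really has stopped sending writes first - which is exactly why nobody bothered writing such a driver.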
> Which reminds me:  What is the best way to turn off barriers?
> Several filesystems have "-o nobarrier" or "-o barrier=0",
> or the inverse.

If a filesystem can function usefully without barriers, the admin gets to make that choice.
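
(For the record, the spellings I've seen are per-filesystem: ext3 takes barrier=0/barrier=1 and XFS takes nobarrier. A toy mount(2) call showing the ext3 form - the device and mount point are placeholders:)

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* Mount an ext3 filesystem with write barriers disabled;
         * "barrier=0" is the ext3 spelling of the option. */
        if (mount("/dev/sdX1", "/mnt", "ext3", 0, "barrier=0") != 0) {
                perror("mount");
                return 1;
        }
        return 0;
}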
> md/raid currently uses barriers to write metadata, and there is no
> way to turn that off.  I'm beginning to wonder if that is best.

I don't see how you can have reliable operation without it, particularly WRT the bitmap.
> Maybe barrier support should be a function of the device.  i.e. the
> filesystem or whatever always sends barrier requests where it thinks
> it is appropriate, and the block device tries to honour them to the
> best of its ability, but if you run
>    blockdev --enforce-barriers=no /dev/sda
> then you lose some reliability guarantees, but gain some throughput (a
> bit like the 'async' export option for nfsd).

Since this is device dependent, it really should be in the device driver, and requests should return a status of success, failure, or feature-unavailable.
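
Roughly what I mean, as a sketch only - the helpers and the metadata type are made up, but the three-way status (with -EOPNOTSUPP meaning "feature unavailable") is the point:

#include <linux/errno.h>

struct md_super;                                /* stand-in for "the metadata being written" */
int submit_barrier_write(struct md_super *sb);  /* hypothetical helper */
int submit_plain_write(struct md_super *sb);    /* hypothetical helper */

static int write_metadata(struct md_super *sb)
{
        int err = submit_barrier_write(sb);

        switch (err) {
        case 0:
                return 0;               /* barrier honoured */
        case -EOPNOTSUPP:
                /*
                 * The device (or the admin, via something like the
                 * proposed "blockdev --enforce-barriers=no") can't or
                 * won't do barriers: fall back to a plain write and
                 * accept the weaker guarantee.
                 */
                return submit_plain_write(sb);
        default:
                return err;             /* real I/O error */
        }
}

The caller, not the driver, decides whether falling back is acceptable.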


>> Second, NAS (including nbd?). Is there enough information to handle
>> this "really right"?

> NAS means lots of things, including NFS and CIFS where this doesn't
> apply.

Well, we're really talking about network attached devices rather than network filesystems. I guess people do lump them together.

> For 'nbd', it is entirely up to the protocol.  If the protocol allows
> a barrier flag to be sent to the server, then barriers should just
> work.  If it doesn't, then either the server disables write-back
> caching, or flushes every request, or you lose all barrier
> guarantees.

That pretty much agrees with what I said above: it's handled at a level closer to the device, and status should come back from the physical I/O request.
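
To make the nbd case concrete, something like the request layout below would be enough on the wire. This is an invented extension for illustration (hence the NBDX_ prefix), not the actual nbd request format:

#include <stdint.h>

#define NBDX_CMD_READ     0
#define NBDX_CMD_WRITE    1
#define NBDX_FLAG_BARRIER (1u << 0)     /* hypothetical per-request flag */

/*
 * If the server understands NBDX_FLAG_BARRIER it flushes its cache
 * before acknowledging the request; if it doesn't, the client has to
 * fall back to running the server with write-back caching off (or
 * flushing after every request), exactly as described above.
 */
struct nbdx_request {
        uint32_t magic;
        uint16_t type;          /* NBDX_CMD_* */
        uint16_t flags;         /* NBDX_FLAG_* */
        uint64_t handle;        /* echoed back in the reply */
        uint64_t offset;
        uint32_t length;
} __attribute__((packed));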
> For 'iscsi', I guess it works just the same as SCSI...

Hopefully.

--
bill davidsen <davidsen@xxxxxxx>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979

