Re: ordered I/O with multipath

On Wed, Apr 8, 2009 at 10:30 PM, Jamie Lokier <jamie@xxxxxxxxxxxxx> wrote:
> 谢纲 wrote:
>> Some journaling filesystems use barrier I/O to ensure the ordering of
>> committed data. But if the filesystem sits on top of a volume manager
>> that supports RAID and multipath, the barrier I/O might not be handled
>> correctly. How do journaling filesystems deal with this?
>
> For software RAID and multipath, I think it isn't handled at all.
>
> Even if you disable write-caching in the underlying storage, ordered
> requests may not retain their order, so the common database advice to
> disable write-cache and use SCSI or SATA-NCQ may not work either.
>
> If the RAID code is changed to handle barriers, that would still have
> possible "scattershot" corruption on RAID-5, because writing a single
> sector on the logical device affects more than one visible sector if
> it is interrupted.  In other words, the "radius of corruption" is
> bigger than one sector for RAID-5, and it's not contiguous either.
If there is a volume manager that controls the RAID and understands the
multipath topology, I think barriers could be handled, because it has all
the information about where those I/Os go. But handling all of this is
very complicated.
It's said that the Veritas volume manager can handle this. I don't know
whether that's true, but given how the Linux block layer works, it's
really hard to implement.
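
For reference, here is a minimal userspace sketch of the ordering a journal
commit needs. It is illustrative only (not how any real filesystem's journal
is implemented): fdatasync() stands in for the barrier, and the ordering it
provides only holds if every layer underneath (md, dm, multipath, the drive's
write cache) actually honours the flush.

/*
 * Illustrative sketch only: a journal's commit record must not reach
 * stable storage before the journal blocks it describes.  fdatasync()
 * is used here as the ordering point; if a RAID/multipath layer below
 * drops or reorders the flush, this ordering is not guaranteed.
 */
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define JBLOCK 4096

int journal_commit(int fd, off_t journal_off, const char *data, size_t len)
{
	char commit[JBLOCK];

	/* 1. Write the journal (data) blocks. */
	if (pwrite(fd, data, len, journal_off) != (ssize_t)len)
		return -1;

	/* 2. Ordering point: the data blocks must be durable first. */
	if (fdatasync(fd) != 0)
		return -1;

	/* 3. Only now write the commit record... */
	memset(commit, 0, sizeof(commit));
	memcpy(commit, "COMMIT", 6);
	if (pwrite(fd, commit, sizeof(commit), journal_off + len) !=
	    (ssize_t)sizeof(commit))
		return -1;

	/* 4. ...and make it durable too. */
	return fdatasync(fd);
}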
>
> In principle, journalling filesystems need to know the "radius of
> corruption" to provide robust journalling.  If individual sector
> writes are atomic, this isn't an issue.  Some people think sector
> writes are atomic on modern hard drives (but I wouldn't count on it).
> But it is definitely not atomic when writing to a RAID or multipath if
> the write affects more than one device.
>
> -- Jamie
>
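
To make the "radius of corruption" point concrete, here is a small sketch
using a simplified rotating-parity layout (not md's actual default mapping):
a single logical sector write on RAID-5 touches two member disks, data plus
parity, so an interruption between the two leaves the whole stripe's parity
inconsistent and a later rebuild can corrupt other sectors in that stripe.

/*
 * Minimal sketch, not md's real mapping code.  Assumes a simplified
 * rotating-parity RAID-5 layout over NDISKS members.  A one-sector
 * logical write maps to a data disk and a parity disk; if the write is
 * interrupted between the two, the stripe no longer has valid parity,
 * so the potential corruption spans the whole stripe, not one sector.
 */
#include <stdio.h>

#define NDISKS            5
#define SECTORS_PER_CHUNK 128	/* 64 KiB chunks, 512-byte sectors */

int main(void)
{
	unsigned long long lsec = 1000000;	/* logical sector to write */

	unsigned long long chunk  = lsec / SECTORS_PER_CHUNK;
	unsigned long long stripe = chunk / (NDISKS - 1);
	unsigned int parity_disk  = stripe % NDISKS;
	unsigned int data_index   = chunk % (NDISKS - 1);
	unsigned int data_disk    = (data_index >= parity_disk) ?
				     data_index + 1 : data_index;

	printf("logical sector %llu -> data on disk %u, parity on disk %u\n",
	       lsec, data_disk, parity_disk);
	printf("an interrupted write leaves %u-disk stripe %llu inconsistent\n",
	       NDISKS, stripe);
	return 0;
}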



-- 
Xie Gang
