Re: raid5-cache I/O path improvements

On Tue, Sep 08, 2015 at 08:12:15AM +0200, Christoph Hellwig wrote:
> On Mon, Sep 07, 2015 at 05:28:55PM -0700, Shaohua Li wrote:
> > Hi Christoph,
> > Thanks for this work. Yes, I/O error handling is in the plan. We could
> > simply panic (people here like this option) or report an error and bypass
> > the log. Either way, having an option is good.
> 
> I think the sensible thing in general is to fail the I/O.  Once we have
> a cache device the assumption is that a) write holes are properly handled,
> and b) we do all kinds of optimizations based on the presence of the
> log device, like not passing through flush requests or skipping resync.
> 
> Having the cache device suddenly disappear will always break a) and
> require a lot of hairy code only used in failure cases to undo the
> rest.

Failing the I/O is ok too.
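
To make "fail the I/O" concrete, it could look something like this (a
minimal sketch only, assuming the current bio_endio(bio, error)
signature; the function and variable names are illustrative, not the
actual raid5-cache code):

#include <linux/bio.h>

static void r5l_fail_queued_writes(struct bio_list *queued)
{
        struct bio *bio;

        /*
         * Complete every write queued for the failed log device with
         * -EIO instead of quietly bypassing the log, so the write-hole
         * guarantee is never silently dropped.
         */
        while ((bio = bio_list_pop(queued)))
                bio_endio(bio, -EIO);
}

The attraction is that the error path stays tiny: we never have to undo
the flush/resync optimizations mid-flight.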
 
> > For the patches: the FUA write does simplify things a lot. However, I
> > tried it before, and the performance on SSDs was quite bad. FUA is off
> > in SATA by default, and the emulation is fairly slow because a FLUSH
> > request isn't an NCQ command. I tried enabling FUA in SATA too; FUA
> > writes were still slow on the SSD I tested. Other than this one, the
> > other patches look good:
> 
> Pretty much every SSD (and modern disk drive) supports FUA.  Please
> benchmark with libata.fua=Y, as I think the simplification is absolutely
> worth it.  On my SSDs using it gives far lower latency for writes, never
> mind NVDIMMs, where it's also essential as the flush state machine
> increases the write latency by an order of magnitude.
> 
> Tejun, do you have any updates on libata vs FUA?  We enabled it
> by default for a while in 2012, but then Jeff reverted it with a rather
> non-descriptive commit message.
> 
> Also, NVMe and SAS SSDs will benefit heavily from the FUA bit.

I agree on the benefit of FUA. In the system I'm testing, an Intel SSD
supports FUA, but a SanDisk SSD doesn't (and that is the SSD we will
deploy for the log). This is AHCI with libata.fua=1. FUA isn't supported
by every SSD, so if the log uses FUA by default we will issue a lot of
FUA writes and performance will suffer.
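
For reference, the two I/O patterns being compared look roughly like
this (a sketch against the submit_bio(rw, bio) block API; log_append,
log_bdev and log_bio are made-up names, not the raid5-cache code):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/fs.h>

/*
 * Append one block to the log, either with a single FUA write or with
 * a plain write followed by a full cache flush.  Real code would wait
 * for the write to complete before issuing the flush; that is omitted
 * here for brevity.
 */
static int log_append(struct block_device *log_bdev, struct bio *log_bio,
                      bool use_fua)
{
        if (use_fua) {
                /* Durable once this single bio completes. */
                submit_bio(WRITE_FUA, log_bio);
                return 0;
        }

        /*
         * Plain write plus FLUSH -- much like what a SATA drive
         * without native FUA ends up doing anyway, except that FLUSH
         * is not an NCQ command, so the whole queue drains first.
         */
        submit_bio(WRITE, log_bio);
        return blkdev_issue_flush(log_bdev, GFP_NOIO, NULL);
}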

I'll benchmark on an SSD from another vendor that supports FUA, though
FUA writes performed poorly in my last test.
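
For anyone reproducing this: libata only honors FUA when both the
module parameter is set and the drive's IDENTIFY data advertises the
capability, roughly like the following (a paraphrased sketch, not the
verbatim libata-scsi.c code; libata_fua is internal to libata):

#include <linux/ata.h>

/*
 * libata_fua mirrors the libata.fua=1 parameter; ata_id_has_fua()
 * tests word 84 bit 6 of the drive's IDENTIFY data.  Both must be
 * true before libata reports FUA support upward.
 */
static bool dev_can_fua(const u16 *id)
{
        return libata_fua && ata_id_has_fua(id);
}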

Thanks,
Shaohua


