On Tue, Sep 15, 2015 at 09:23:44AM +0200, Neil Brown wrote:
> Shaohua Li <shli@xxxxxx> writes:
>
> > On 9/11/15, 11:17 PM, "Christoph Hellwig" <hch@xxxxxx> wrote:
> >
> >> Hi Shaohua, hi Neil,
> >>
> >> this series contains a few updates to the raid5-cache feature.
> >>
> >> The first patch just ports it to the post-4.2 block layer. As part of
> >> that I noticed that it currently doesn't handle I/O errors - fixes for
> >> that will follow.
> >>
> >> The second and third patch simplify the I/O unit state machine and
> >> reduce latency and memory usage for the I/O units. The remainder are
> >> just a couple of cleanups in this area that I stumbled upon.
> >>
> >> Changes since V1:
> >>  - only use REQ_FUA if supported natively by the log device
> >
> > Hi Christoph,
> >
> > I finally got some data with a Samsung SSD, which supports FUA. The
> > controller is AHCI. The test is a simple fio run with all full-stripe
> > writes.
> >
> > libata.fua=0, throughput 247MB/s
> > libata.fua=1, throughput 74MB/s
>
> Eek! That's a big price to pay!
>
> >
> > FUA is significantly slower. I think we need a sysfs config to enable
> > FUA.
>
> I don't want a sysfs config if we can possibly avoid it.
>
> Christoph's code sets FUA on every block written to the log, both data
> and metadata. Is that really what we want?
>
> I don't know much of the hardware details, but wouldn't setting FUA and
> FLUSH on the last block written be just as effective and possibly faster
> (by giving more flexibility to lower layers)?

How is that different from not using FUA at all, e.g., doing a flush
after several bios?

Thanks,
Shaohua
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html