Re: Is it possible that certain physical disk doesn't implement flush correctly?

On 2019/3/31 9:36 PM, Hannes Reinecke wrote:
> On 3/31/19 2:00 PM, Qu Wenruo wrote:
>>
>>
>> On 2019/3/31 7:27 PM, Alberto Bursi wrote:
>>>
>>> On 30/03/19 13:31, Qu Wenruo wrote:
>>>> Hi,
>>>>
>>>> I'm wondering if it's possible that a certain physical device doesn't
>>>> handle flush correctly.
>>>>
>>>> E.g. some vendor does some complex logic in their HDD controller to
>>>> skip certain flush requests (but not all, obviously) to improve
>>>> performance?
>>>>
>>>> Has anyone seen such reports?
>>>>
>>>> And if it proves to have happened before, how do we users detect
>>>> such a problem?
>>>>
>>>> Can we just check the flush time against the writes before the flush
>>>> call? E.g. write X random blocks into that device, call fsync() on
>>>> it, and check the execution time. Repeat Y times, and compare the
>>>> avg/std. Then change X to 2X/4X/... and repeat the above check.
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>
>>>
>>> Afaik HDDs and SSDs do lie to fsync()
>>
>> fsync() on a block device is translated into a FLUSH bio.
>>
>> If all/most consumer-level SATA HDD/SSD devices are lying, then there
>> is no power-loss safety at all for any fs, as most filesystems rely on
>> the FLUSH bio to implement barriers.
>>
>> And filesystems with generation checks should all report metadata from
>> the future every time a crash happens, or even worse, gracefully
>> unmounting the fs would cause corruption.
>>
> Please, stop making assumptions.

I'm not.

> 
> Disks don't 'lie' about anything, they report things according to the
> (SCSI) standard.
> And the SCSI standard has two ways of ensuring that things are written
> to disk: the SYNCHRONIZE_CACHE command and the FUA (force unit access)
> bit in the command.

I understand FLUSH and FUA.

> The latter provides a way of ensuring that a single command made it to
> disk, and the former instructs the drive to:
> 
> "a) perform a write medium operation to the LBA using the logical block
> data in volatile cache; or
> b) write the logical block to the non-volatile cache, if any."
> 
> which means it's perfectly fine to treat the write-cache as a
> _non-volatile_ cache if the RAID HBA is battery backed, and thus can
> make sure that outstanding I/O can be written back even in the case of a
> power failure.
> 
> The FUA handling, OTOH, is another matter, and indeed is causing some
> raised eyebrows when comparing it to the spec. But that's another story.

I don't care about FUA as much, since libata still doesn't support FUA
by default and interprets it as FLUSH/WRITE/FLUSH, so it doesn't make
things worse.

What I'm more interested in is: do all SATA/NVMe disks follow this
FLUSH behavior?

For most cases, I believe they do; otherwise, whatever the fs is,
either CoW based or journal based, we're going to see tons of problems,
and even a gracefully unmounted fs can have corruption if FLUSH is not
implemented properly.

What I'm interested in is whether there is some device that doesn't
completely follow the regular FLUSH requirement, but instead does some
tricks tuned for certain tested filesystems.

E.g. the disk is only tested against certain filesystems, and those
filesystems always issue something like flush, write, flush, FUA write.
In that case, if the controller decides to skip the 2nd flush and only
honor the first flush and the FUA write, and the 2nd write is very
small (e.g. a journal write), the chance of corruption is pretty low
due to the small window.

In that case, the disk could perform a little better, at the cost of
an increased corruption possibility.

I just want to rule out this case.
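
Just to make the idea from my first mail more concrete, below is a
rough, untested sketch of the timing check: write X blocks at random
offsets on the device under test, call fsync() so the block layer
issues a FLUSH, measure how long that takes, repeat, then rerun with
2X/4X blocks and compare. The device path, block size and counts are
only placeholders, and it needs a dedicated scratch device, since it
writes to the device directly.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Write @nr_blocks blocks at random offsets, then time the fsync(). */
static double one_round(int fd, off_t dev_size, long nr_blocks, char *buf)
{
	long nr_total = dev_size / BLOCK_SIZE;
	double start;
	long i;

	for (i = 0; i < nr_blocks; i++) {
		off_t off = (off_t)(rand() % nr_total) * BLOCK_SIZE;

		if (pwrite(fd, buf, BLOCK_SIZE, off) != BLOCK_SIZE) {
			perror("pwrite");
			exit(1);
		}
	}

	start = now_sec();
	/* On a block device this ends up issuing a FLUSH to the disk. */
	if (fsync(fd)) {
		perror("fsync");
		exit(1);
	}
	return now_sec() - start;
}

int main(int argc, char **argv)
{
	char buf[BLOCK_SIZE];
	off_t dev_size;
	long nr_blocks;
	int fd, i, rounds;

	if (argc != 4) {
		fprintf(stderr, "usage: %s <scratch device> <nr blocks> <rounds>\n",
			argv[0]);
		return 1;
	}
	nr_blocks = atol(argv[2]);
	rounds = atoi(argv[3]);

	fd = open(argv[1], O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	dev_size = lseek(fd, 0, SEEK_END);
	if (dev_size < BLOCK_SIZE) {
		fprintf(stderr, "device too small\n");
		return 1;
	}
	memset(buf, 0x5a, sizeof(buf));
	srand(time(NULL));

	for (i = 0; i < rounds; i++)
		printf("X=%ld blocks: fsync/flush took %.6f s\n",
		       nr_blocks, one_round(fd, dev_size, nr_blocks, buf));

	close(fd);
	return 0;
}

The numbers are only a heuristic, of course, since the fsync() time
also includes writing back the dirty pages themselves, not just the
cache flush.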

Thanks,
Qu

> 
> Cheers,
> 
> Hannes

