On 10/22/22 06:02, Maciej S. Szmigiero wrote:
> On 21.10.2022 07:38, Damien Le Moal wrote:
>> These patches clean up and improve libata support for the FUA device
>> feature. Patch 3 enables FUA support by default for any drive that
>> reports supporting the feature.
>>
>> Damien Le Moal (2):
>>   ata: libata: cleanup fua handling
>>   ata: libata: Enable fua support by default
>>
>> Maciej S. Szmigiero (1):
>>   ata: libata: allow toggling fua parameter at runtime
>>
>
> Thanks Damien for the series!
>
> I've looked at the code changes and have basically two points:
> 1) There seems to be no way to revalidate the FUA setting for an existing
> disk, since it is now only taken into account in ata_dev_config_fua().
>
> As far as I can see, this function is only called on probe paths
> (and during exception handling), so if the "libata.fua" parameter is
> toggled the new setting would only affect newly (re-)attached disks.

Yes, indeed. Forcing an ATA revalidation needs some more trickery, as
the regular sd_revalidate() does not lead to ata_dev_configure() being
called again (a rough sketch of this probe-time path is appended at the
end of this mail).

> Previously, this parameter was read directly in ata_scsiop_mode_sense()
> (specifically in ata_dev_supports_fua() called from there), which could
> be called to re-compute the FUA setting for an existing disk by
> re-writing the "cache_type" sysfs attribute (as described in my commit
> message).
>
> If that's indeed the case this severely limits the usefulness of having
> this parameter runtime-writable, and I agree with your discussion with
> Hannes that it probably isn't needed now (after all, probably nobody
> has an explicit "libata.fua=0" in their kernel command line, since this
> was the default setting anyway).

OK. Then I will drop your patch. Safer that way.

> 2) It would be good to collect known-broken disks from the similar FUA
> enabling attempt in 2012 [1] and add them to the blacklist upfront, so
> these users won't have to report them again.

The code only had one Maxtor drive blacklisted for FUA. Patch one adds
it to the horkage table.

>
> The problematic disks reported in that thread were:
>> ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
>> ata1.00: ATA-7: WDC WD2500JS-41MVB1, 10.02E01, max UDMA/133
>> ata1.00: 488397168 sectors, multi 16: LBA48
>> ata1.00: configured for UDMA/133
>> scsi 0:0:0:0: Direct-Access ATA WDC WD2500JS-41M 10.0 PQ: 0 ANSI: 5
>
>> [ 2.845750] ata1.00: ATA-9: OCZ-VERTEX3 MI, 2.06, max UDMA/133
>> [ 2.845754] ata1.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
>> [ 2.865726] ata1.00: configured for UDMA/133
>> [ 2.865955] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX3 MI 2.06 PQ: 0 ANSI: 5
>> [ 2.866722] sd 0:0:0:0: [sda] 234441648 512-byte logical blocks: (120 GB/111 GiB)
>
>> [ 3.934157] ata1.00: ATA-9: INTEL SSDSC2CT120A3, 300i, max UDMA/133
>> [ 3.934266] ata1.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 0/32)
>> [ 3.954145] ata1.00: configured for UDMA/133
>> [ 3.954441] scsi 0:0:0:0: Direct-Access ATA INTEL SSDSC2CT12 300i PQ: 0 ANSI: 5
>> [ 3.955233] sd 0:0:0:0: [sda] 234441648 512-byte logical blocks: (120 GB/111 GiB)

OK. I will check that thread and add these drives to the horkage list
(a sketch of possible blacklist entries is appended below). Thanks!

> Thanks,
> Maciej
>
> [1]: https://lore.kernel.org/lkml/CA+6av4=uxu_q5U_46HtpUt=FSgbh3pZuAEY54J5_xK=MKWq-YQ@xxxxxxxxxxxxxx/

--
Damien Le Moal
Western Digital Research
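
For reference, a rough, simplified sketch of the probe-time path
discussed above, assuming the shape of patch 3 of this series.
libata_fua, ata_id_has_fua() and ATA_DFLAG_LBA48 are existing libata
names; ATA_DFLAG_FUA and ATA_HORKAGE_NO_FUA are assumed here to be the
flag names the series introduces:

/* Sketch only, not the exact patch code. */
static void ata_dev_config_fua(struct ata_device *dev)
{
	/*
	 * The libata.fua module parameter is sampled here, at device
	 * configuration time, which is why toggling it at runtime does
	 * not affect disks that are already attached.
	 */
	if (!libata_fua)
		goto nofua;

	/*
	 * WRITE DMA FUA EXT is an LBA48 command, so the device must
	 * support LBA48 and report FUA in its IDENTIFY data.
	 */
	if (!(dev->flags & ATA_DFLAG_LBA48) || !ata_id_has_fua(dev->id))
		goto nofua;

	/* Honor the horkage (blacklist) table for known-bad drives. */
	if (dev->horkage & ATA_HORKAGE_NO_FUA)
		goto nofua;

	dev->flags |= ATA_DFLAG_FUA;
	return;

nofua:
	dev->flags &= ~ATA_DFLAG_FUA;
}

Since this is only reached from ata_dev_configure(), rewriting the
"cache_type" sysfs attribute of an attached disk no longer re-evaluates
FUA support, unlike the old ata_dev_supports_fua() path.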
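
And a sketch of what the additional blacklist entries could look like,
following the format of the existing ata_device_blacklist[] table in
drivers/ata/libata-core.c. The model glob strings are illustrative
guesses derived from the log lines above, not verified values:

/*
 * Sketch only: candidate entries for the drives reported in [1].
 * Fields are { model_num glob, model_rev glob, horkage flags }.
 */
static const struct ata_blacklist_entry ata_device_blacklist[] = {
	/* ... existing entries ... */

	/* Drives with reportedly broken FUA handling */
	{ "WDC WD2500JS*",	NULL,	ATA_HORKAGE_NO_FUA },
	{ "OCZ-VERTEX3*",	NULL,	ATA_HORKAGE_NO_FUA },
	{ "INTEL SSDSC2CT*",	NULL,	ATA_HORKAGE_NO_FUA },
};

Matching uses the same glob semantics as the rest of the table, so the
patterns can be widened or narrowed once the affected models are
confirmed from the 2012 thread.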