Hi,
I saw that v2 (?) of this patch has made it into stable, which
is quite reasonable given the number of bug reports.
Are there any plans to "enhance" this patch once sufficient data
on controller/drive combinations has been collected?
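For context, my understanding is that the patch works through libata's
blacklist table in drivers/ata/libata-core.c, where per-model horkage
flags force TRIM to be sent unqueued -- roughly along these lines
(an illustrative sketch, not the exact hunk from the patch):

static const struct ata_blacklist_entry ata_device_blacklist[] = {
	/* ... */
	/* Devices that report queued TRIM but apparently mishandle it */
	{ "Samsung SSD 860*",	NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
					ATA_HORKAGE_ZERO_AFTER_TRIM },
	/* ... */
};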
I didn't run any benchmarks to see whether performance has changed,
but I now have this on 5.14.6:
/sys/class/ata_device/dev3.0/trim:forced_unqueued
/sys/class/ata_device/dev4.0/trim:forced_unqueued
Before:
/sys/class/ata_device/dev3.0/trim:queued
/sys/class/ata_device/dev4.0/trim:queued
These correspond to an 860 Pro and an 860 Evo, connected to an X570
mainboard (AMD FCH controller).
Note that I had no problems with these drives either before or after
this commit.
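In case it is useful to others, here is a quick userspace sketch that
dumps the trim mode for every ATA device (it only assumes the
/sys/class/ata_device/<dev>/trim layout shown above):

#include <glob.h>
#include <stdio.h>

int main(void)
{
	glob_t g;
	size_t i;

	if (glob("/sys/class/ata_device/*/trim", 0, NULL, &g) != 0)
		return 1;
	for (i = 0; i < g.gl_pathc; i++) {
		char buf[64] = "";
		FILE *f = fopen(g.gl_pathv[i], "r");

		if (!f)
			continue;
		/* Value is e.g. "queued", "unqueued" or "forced_unqueued" */
		if (fgets(buf, sizeof(buf), f))
			printf("%s: %s", g.gl_pathv[i], buf);
		fclose(f);
	}
	globfree(&g);
	return 0;
}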
On 03.09.21 03:21, Martin K. Petersen wrote:
Hans,
I just realized that all newer Samsung models are non-SATA...
Still, I consider it likely that some of the other vendors also
implement queued trim support, and there are no reports of issues
with the other vendors' SSDs.
When I originally worked on this, the only other drives that supported
queued trim were a specific controller generation from Crucial/Micron.
Since performance-sensitive workloads quickly moved to NVMe, I don't
know if implementing queued trim has been very high on the SSD
manufacturers' todo lists. FWIW, I just checked and none of the more
recent SATA SSD drives I happen to have support queued trim.
Purely anecdotal: I have a Samsung 863 which I believe is
architecturally very similar to the 860. That drive clocked over 40K
hours as my main git repo/build drive until it was retired last
fall. And it ran a queued fstrim every night.
Anyway. I am not against disabling queued trim for these drives. As far
as I'm concerned it was a feature that didn't quite get enough industry
momentum. It just irks me that we don't have a good understanding of why
it works for some and not for others...