Re: XFS and nobarrier with SSDs

Christoph Hellwig wrote:
The rule of thumb is: if nobarrier makes your workload run faster you
should not be using it, aka: don't use it.
----
	So what is the purpose of the switch if it is only to be
used when it makes no difference?

I.e., my RAID controller does write-through if its internal
battery needs replacing; otherwise it does write-back.

On top of that, my system is on a UPS that is good for an hour or more
of running.
So, I used to use nobarrier on "work" disks where there were likely
to be a lot of writes.  Those disks are also backed up daily via
xfsdump/restore.  I figured those would benefit most, and at worst
I could restore to the previous morning's backup.
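
Roughly, that setup looked like the sketch below (the device name, mount
point, and dump destination are placeholders here, not my actual paths):

	# /etc/fstab entry for a "work" filesystem mounted without barriers
	# (device and mount point are placeholders)
	/dev/sdb1   /work   xfs   defaults,nobarrier   0  0

	# nightly level-0 dump of that filesystem (dump target is a placeholder)
	xfsdump -l 0 -f /backups/work.dump /work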

Eventually I stopped using the option, as for the most part I couldn't
really measure any reliable difference in performance (which means
I should use it?!?).
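
For anyone wanting to compare on their own hardware, an fsync-heavy run
with and without the option is one simple way to check; something along
these lines, where the file name and sizes are arbitrary examples:

	# synchronous 4k random writes; run once with barriers enabled and
	# once with nobarrier and compare the results (path and sizes are
	# just examples)
	fio --name=barrier-test --filename=/work/fio.test \
	    --rw=randwrite --bs=4k --size=1g --fsync=1 \
	    --runtime=60 --time_based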

Hmmm...

The only times I have experienced disk corruption on a single
disk were either back before I ever tried the option, or during the
several months to a year when I tried to use software RAID5 (several
to ten-plus years ago, before it was possible to use multiple cores
for some RAID operations).

I doubt I'm going to try it again soon, but being told that
it's only "ok" to use an option when it makes no difference
in performance *sounds* more than a little confusing.


