Re: mount options question

On Fri, Aug 29, 2014 at 7:45 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Fri, Aug 29, 2014 at 07:26:59AM -0400, Greg Freemyer wrote:
>>
>>
>> On August 29, 2014 4:37:38 AM EDT, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> >On Fri, Aug 29, 2014 at 08:31:43AM +0200, Stefan Ring wrote:
>> >> On Thu, Aug 28, 2014 at 1:07 AM, Dave Chinner <david@xxxxxxxxxxxxx>
>> >wrote:
>> >> > On Wed, Aug 27, 2014 at 12:14:21PM +0200, Marko Weber|8000 wrote:
>> >> >>
>> >> >> sorry dave and all other,
>> >> >>
>> >> >> can you guys recommend me the most stable / best mount options for
>> >> >> my new server with SSDs and an XFS filesystem?
>> >> >>
>> >> >> at the moment i would set:
>> >> >> defaults,nobarrier,discard,logbsize=256k,noikeep
>> >> >> or is just "defaults" the best solution, where xfs detects itself
>> >> >> what's best?
>> >> >>
>> >> >> can you guide me a bit?
>> >> >>
>> >> >> as elevator i set elevator=noop
>> >> >>
>> >> >> i set up the disks with linux softraid raid1. On top of the raid is
>> >> >> LVM (for some data partitions).
>> >> >>
>> >> >>
>> >> >> would be nice to hear some tips from you
>> >> >
>> >> > Unless you have specific requirements or have the knowledge to
>> >> > understand how the different options affect behaviour, then just
>> >> > use the defaults.
>> >>
>> >> Mostly agreed, but using "discard" would be a no-brainer for me. I
>> >> suppose XFS does not automatically switch it on for non-rotational
>> >> storage.
>> >
>> >Yup, you're not using your brain. :P
>> >
>> >mount -o discard *sucks* on so many levels it is not funny. I don't
>> >recommend that anybody *ever* use it, on XFS, ext4 or btrfs.  Just
>> >use fstrim if you ever need to clean up an SSD.
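
(For reference: on systemd-based distros, the easiest way to follow this
advice is the fstrim.timer unit that ships with util-linux; the mount
point below is hypothetical.)

```shell
# Enable weekly batched discards via the timer unit shipped with
# util-linux on systemd-based distros; this replaces the continuous
# -o discard behaviour with a periodic sweep.
systemctl enable --now fstrim.timer

# Or run a one-off trim by hand; -v reports how much was discarded.
# /mnt/data is a hypothetical XFS mount point.
fstrim -v /mnt/data
```

Both commands require root, and fstrim only discards ranges the
filesystem knows are free, so it can be scheduled off-peak.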
>>
>
>> In particular, trim is a synchronous command in many SSDs; I don't
>> know about the impact on the kernel block stack.
>
> blkdev_issue_discard() is synchronous as well, which is a big
> problem for something that needs to iterate (potentially) thousands
> of regions for discard when a journal checkpoint completes....
>
>> For the SSD
>> itself that means the SSDs basically flush their write cache on
>> every trim call.
>
> Oh, it's worse than that, usually. TRIM is one of the slowest
> operations you can run on many drives, so it can take hundreds of
> milliseconds to execute....
>
>> I often tell people to do performance testing with and without it
>> and report back to me if they see no degradation caused by -o
>> discard.  To date no one has ever reported back.  I think -o
>> discard should never have been introduced, and certainly not 5
>> years ago.
>
> It was introduced into XFS as a checkbox feature. We resisted as
> long as we could, but too many people were shouting at us that we
> needed realtime discard because ext4 and btrfs had it. Of course,
> all those people shouting for it realised we were right about how
> badly it sucked the moment they tried to use it and found that
> performance was woeful. Not to mention that SSD trim implementations
> were so bad that they caused random data corruption by trimming the
> wrong regions, drives would simply hang randomly, and in a couple of
> cases too many trims too fast would brick them...
>
> So, yeah, it was implemented because lots of people demanded it, not
> because it was a good idea.
>
>> In theory, SSDs that handle trim as an asynchronous
>> command are now available, but I don't know any specifics.
>
> Requires SATA 3.1 for the queued TRIM, and I'm not sure that there
> is any hardware out there that uses this end-to-end yet. And the
> block layer can't make use of it yet, either...
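
(Whether a given device and stack advertise discard support at all is
visible from userspace; a sketch using util-linux and sysfs, with
/dev/sda as a hypothetical device:)

```shell
# Show discard capabilities as the block layer sees them; all-zero
# DISC-GRAN and DISC-MAX columns mean the device (or a layer above
# it, e.g. md or LVM) does not support or pass through discard.
lsblk --discard /dev/sda

# The same value is exposed in sysfs (0 = discard unsupported).
cat /sys/block/sda/queue/discard_granularity
```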
>
>> In any case, fstrim works for almost all workloads and doesn't
>> have the potential continuous negative impact of -o discard.
>
> Precisely my point - you just gave some more detail. :)
>
Yes, I was only attempting to elaborate on your answer, but thanks for
elaborating on mine.
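
To make the "just use the defaults" advice concrete: for the setup
Marko describes (XFS on LVM on md raid1), the fstab entry would reduce
to something like the sketch below, with the device path and mount
point being hypothetical.

```
# /etc/fstab sketch -- hypothetical LVM volume on md raid1.
# XFS picks sensible values at mount time, so no nobarrier/discard/
# logbsize tuning is needed; run fstrim periodically instead of
# mounting with -o discard.
/dev/vg0/data  /data  xfs  defaults  0 0
```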

Greg

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs




