Re: Optimising ssd setup?




Hi,

On 07.01.2013 17:24, Mike Cloaked wrote:
> but I am unclear if this happens automatically or
> not when setting up GPT partitions with gparted?

These days all of this happens automatically. Personally I'm not using
gparted, but I would be surprised if it still had problems with proper
alignment.

> Also I have been seeing various bits of advice about ensuring that
> excessive writes are avoided by using a non-default IO scheduler

To be honest, I don't see why an I/O scheduler would make much of a
difference here. The scheduler is supposed to decide which block should
be written and/or read next.

The scheduler is of more importance when it comes to reading data.
Schedulers designed for rotational media may reorder the queued blocks
with the rotation of the media in mind. For instance, it probably makes
more sense to read consecutively for as long as possible on rotational
media. SSDs, on the other hand, don't benefit from consecutive reads,
so it could make more sense to read the blocks exactly as requested.

Keep in mind that these days, with AHCI, there is also NCQ, which allows
the controller of the HDD/SSD to rearrange requests itself, so it's not
just up to the scheduler, but also depends on the firmware of the drive
in question.

Furthermore, as with basically every scheduler choice, it very much
depends on your workload. Personally I'm running with the default values
on various setups with SSDs and haven't noticed any problems, especially
since the kernel is smart enough to detect whether the attached drive is
"rotational" or not.
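For what it's worth, you can check what the kernel detected and which
scheduler is active via sysfs, and pin a scheduler per device with a
udev rule. Just a sketch - the device name, the rule path and the choice
of "noop" are examples, not recommendations:

[root@vpcs ~]# cat /sys/block/sda/queue/rotational
[root@vpcs ~]# cat /sys/block/sda/queue/scheduler

# /etc/udev/rules.d/60-ssd-scheduler.rules (example path)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"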

> In addition it is suggested that for a machine with a reasonable RAM (in my
> case 8GiB) then reducing the "swappiness" parameter to 1 via systemctl

With enough RAM (and 8 GiB is probably more than enough for most
workloads) it shouldn't make much of a difference at all, as your system
won't be swapping anything - at least mine isn't:

[root@vpcs ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7853       5288       2564          0        281       2499
-/+ buffers/cache:       2508       5344
Swap:         8191          0       8191
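If you want to lower it anyway, "swappiness" is a sysctl knob (not a
systemctl one). A sketch, with the value 1 taken from the suggestion you
quoted:

[root@vpcs ~]# sysctl vm.swappiness
[root@vpcs ~]# sysctl -w vm.swappiness=1

# to make it persistent, e.g. in /etc/sysctl.d/99-swappiness.conf:
vm.swappiness = 1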


> I have also seen it suggested that TRIM support is important

Well, it depends. TRIM (in theory) only increases the write speed (or
rather, keeps it constant), because the cells will already be erased
when needed - just like when your drive is new. Without TRIM the
controller has to erase the cells right before writing to them, which
obviously slows things down.

However, you should keep in mind that if you use encrypted containers,
TRIM constitutes some sort of information leakage, because it reveals
which sectors are unused, see [1]. Therefore the default in this
scenario is to keep TRIM disabled.

Again, personally I haven't noticed much of an impact with my setups;
then again, I don't care whether the write speed stays as fast as
advertised and/or as fast as when the drive was new. I can live
perfectly fine with it getting (a little bit) slower over time.
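In case you do want TRIM, there are two common ways: the "discard"
mount option for continuous TRIM, or running fstrim from time to time
for batched TRIM. A sketch, assuming an ext4 root on /dev/sda2:

# /etc/fstab - continuous TRIM via the discard option:
/dev/sda2   /   ext4   defaults,discard   0   1

# or batched TRIM, e.g. from a cron job:
[root@vpcs ~]# fstrim -v /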

> Finally I have seen suggested that the "noatime" flag be used for mounting
> SSD drives.

Yeah, that's pretty much the only "optimization" I've done here. To be
honest, I think the "atime" option is quite stupid, and I'm not alone in
this opinion, see [2].
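In practice that just means adding "noatime" to the mount options in
/etc/fstab; the device and filesystem here are examples:

/dev/sda2   /   ext4   defaults,noatime   0   1

[root@vpcs ~]# mount -o remount,noatime /

Note that recent kernels already default to "relatime", which avoids
most of the atime writes anyway.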

> and has
> experience of SSD wear issues

I'm pretty sure there isn't a single person with "wear issues" on this
mailing list. The whole topic of "wearing" is pretty much overrated.
Current wear leveling technologies are quite sophisticated, and most of
the horror scenarios you read about are made up.

A reputable magazine here in Germany (called c't) has tested a few SSDs
for their longevity (see [3]). They came to the conclusion that SSDs
easily reach the specified amount of written data; although they might
get slower after that, they still keep working just fine. Furthermore,
there is some sort of "self healing" involved with flash cells: once you
get some read errors and leave the cell alone long enough, chances are
good that due to various effects the data can eventually be retrieved
(see [3]).

Far more critical are firmware bugs in the various controllers of recent
SSDs. There have been quite a few serious issues with complete data loss
as a result, so this should be your greatest concern - by far.

Furthermore you should have backups of important stuff anyway ...

> whether partitioning and installing essentially with
> defaults is going to lead to SSD problems

I haven't noticed any issues and beside the "noatime" mount option
haven't changed anything.

I don't know whether or not you've already read the wiki article, see
[4]. I think it's a good starting point for all of your questions, and
most of them are covered there.

Best regards,
Karol Babioch

[1] http://asalor.blogspot.de/2011/08/trim-dm-crypt-problems.html
[2] http://kerneltrap.org/node/14148
[3] https://www.heise.de/artikel-archiv/ct/2012/03/66
[4] https://wiki.archlinux.org/index.php/SSD


