Re: RAID 1 using SSD and 2 HDD


On 7/28/2011 3:46 PM, Roberto Spadim wrote:
> Hmm, make some benchmarks... About 4 months ago I tested this with
> Korn Andreas and we did not get astronomical speed, with or without
> write-behind.  Do you have some benchmarks to check?

No, not yet, but I'll make some up.  The problem here is that the
perceived speed increase is only really visible in certain situations.
For example, if I'm building code, and repeatedly compiling, tweaking,
compiling, tweaking, etc., then the entire set of code ends up in page
cache and everything is done from memory, in which case the SSD makes no
performance difference whatsoever.  Any benchmark that operates
entirely in cache will be a useless measure here.

The biggest area where the SSD makes a difference is in reading pages
into page cache.  Aka, cold cache reads.  You see this immediately on
bootup for example.  My machine boots fast enough that it makes it to a
login prompt before the ethernet device is done negotiating link speed
and presents a link up to the networking layer (the fact that my
workstation is Fedora 15 and uses systemd helps with this too, but on
another machine without an SSD and with Fedora 15 I don't get to login
prompt before ethernet link is up).  It also affects things like the
startup speed of both Firefox and Thunderbird.  This is where the cold
cache performance of an SSD helps.

However, your question did get me started thinking and raised a few
questions of my own (hence why I Cc:ed Neil).

In order to do writemostly with write-behind, you *must* use a
write-intent bitmap.  If you use an internal bitmap, it exists on all
devices.  Normally, bitmap sets are
done synchronously.  My question is: when one device is an SSD, do we
only wait on the SSD bitmap update before starting the SSD writes, or do
we wait on all the bitmap updates to complete before starting any of the
writes?  If the latter, could that be changed?  And as a general question
about bitmap files instead of internal bitmaps, is it even possible to
use an external bitmap file on the root filesystem given that no other
filesystem is mounted in order to read a bitmap file prior to the root
filesystem going live?  It would be nice to use an external bitmap file
on the SSD itself and skip the internal bitmap I think.
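For concreteness, this kind of array can be created roughly as follows (a
sketch only; the device names and the write-behind depth are assumptions,
not taken from the actual setup described above):

```shell
# Sketch of a 3-way RAID1 with the SSD as the preferred read device.
# /dev/sda1 is assumed to be the SSD; sdb1 and sdc1 the two HDDs.
# --write-behind requires a write-intent bitmap, hence --bitmap=internal.
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
      --bitmap=internal --write-behind=256 \
      /dev/sda1 \
      --write-mostly /dev/sdb1 /dev/sdc1
```

Note that --write-mostly marks only the devices listed after it, so reads
are steered to the SSD while writes to the HDDs can lag behind via the
bitmap.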

As a separate issue, I think a person could probably tweak a few things
for an SSD in the general block layer too.  I haven't played with these
things yet, but I plan to.  Things like changing the elevator to the
noop elevator.
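For what it's worth, the elevator can be switched per device at runtime
through sysfs (a sketch; sda here is a placeholder for the SSD's device
name, and the commands must be run as root):

```shell
# Show the available schedulers; the active one is shown in brackets.
cat /sys/block/sda/queue/scheduler
# Switch the SSD to the noop elevator.
echo noop > /sys/block/sda/queue/scheduler
# Also mark the device non-rotational so the block layer skips
# rotational-disk heuristics.
echo 0 > /sys/block/sda/queue/rotational
```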

Now, as for the differences between using an SSD the way I am and the
two cache-type uses that were brought up:

The cache-type usage has the benefit that it covers all the data on the
drives even if the SSD is smaller than the drives.  Like any cache
though, it can only hold the most commonly/frequently used items.  If
your list of commonly used stuff is too large for the SSD, it will start
to suffer cache misses and lose its benefit.  On writes though it can
be very fast because it can simply buffer the write and then return to
the OS as though the write is complete.  However, unless the caching
implementation waits for at least one drive to acknowledge and complete
the write (and especially if it merely queues the write and flushes it
to the drives only after some delay), the cache itself becomes a single
point of failure that could cause possibly huge amounts of data loss.

The setup I have essentially splits the data on my filesystem according
to what I want cached.  I want applications so I get my performance
boost on startup.  I want my source code repos so I can compile faster
and do things like git checkouts faster.  But I don't need any mp3s or
video files or rarely accessed documents on the SSD, so keeping one
directory in my home directory for everything I want fast access to,
with the rest of my home directory on the hard drives alone, works
perfectly well.  In my usage, if the SSD fails, then I don't have to worry about
any data loss and the machine keeps chugging along.
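As a sketch of how such a split can be wired up (the paths, usernames,
and device names here are illustrative, not the actual layout described
above):

```shell
# /dev/md0: the SSD-backed array; /dev/md1: the HDD-only array.
# Home lives on the HDD array, with the hot data on the fast array.
mount /dev/md1 /home
mkdir -p /mnt/fast
mount /dev/md0 /mnt/fast
# Point the hot directories at the SSD-backed array, e.g. source repos:
ln -s /mnt/fast/src /home/doug/src
```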

Anyway, about the benchmarks, I'll see what I can do over the weekend.
Today, real work beckons ;-)

- -- 
Doug Ledford <dledford@xxxxxxxxxx>
              GPG KeyID: CFBFF194
	      http://people.redhat.com/dledford

Infiniband specific RPMs available at
	      http://people.redhat.com/dledford/Infiniband

