Re: WD Red vs Black drives for RAID1

Hi John,

Quoting John Stoffel <john@xxxxxxxxxxx>:
"Jens-U" == Jens-U Mozdzen <jmozdzen@xxxxxx> writes:

Jens-U> Hi John,
Jens-U> Quoting John Stoffel <john@xxxxxxxxxxx>:
Guys,

I'm starting to get tons of errors on the various mixed 1TB and 2TB
drives I have in a bunch of RAID1 mirrors, generally triple mirrors.
It's time to start replacing them, and I think I want to go with
either the WD Black 4TB or the WD Red 4TB drives, plus a pair of
500GB SSDs to use with lvmcache for speedup.

Any comments?

Jens-U> How are the drives to be attached to the server?

I'm planning on just hooking them into the:

  Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008
  PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

According to WD support, hooking the Reds directly to the SAS adapter should be no problem; it's the SAS expander that is said to cause the trouble.
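
If you want to double-check that nothing sits between the disks and
the HBA, the SAS transport class makes that visible. Just a sanity
check, and the device name below is only an example:

  lsscsi -t                     # transport address of every disk
  ls /sys/class/sas_expander/   # stays empty if no expander is involved
  smartctl -i /dev/sda          # model and interface, per drive

If an expander ever shows up there, that's where WD says the Reds
get picky.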

[...]
Jens-U> We found these WD Reds to be a bit slow, but really liked the
Jens-U> power consumption / heat aspects of the drives and of course
Jens-U> the price per GB. As we paired the disks with SSD caching,
Jens-U> actual disk speed was no issue in our case.

Were you using lvmcache?  How did you like it?  Any problems or
issues?  SSD prices are down enough now to make it really tempting to
just get a pair of big 4TB drives and then the smaller SSDs for
caching, but I'm concerned about reliability and durability, which is
why I tend to triple-mirror my RAID1 drives...
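
For reference, a minimal lvmcache attach along the lines you describe
would look roughly like this - just a sketch, we never ran lvmcache
ourselves, and all names and sizes below are invented:

  vgcreate vg0 /dev/md/hdd /dev/md/ssd
  lvcreate -n data -L 3.5T vg0 /dev/md/hdd      # big LV on the 4TB mirror
  lvcreate --type cache-pool -n cpool -L 450G vg0 /dev/md/ssd
  lvconvert --type cache --cachepool vg0/cpool vg0/data

We went a different route, though: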

We're using bcache, which is working nicely for us, but it took a lot of work to get there (bug fixes mostly live on the corresponding mailing list rather than upstream, and there were some nasty bugs indeed).
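
The basic setup is simple enough; roughly, with made-up device names
(the cache set UUID is the one make-bcache prints):

  make-bcache -C /dev/md/ssd      # format the SSD RAID1 as a cache set
  make-bcache -B /dev/md/hdd      # format the HDD array as backing device
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach
  echo writeback > /sys/block/bcache0/bcache/cache_mode

The resulting /dev/bcache0 then behaves like any other block device.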

We're using both read and write caching, with really positive results: without caching, iowait on the machine easily exceeds 25%, but it drops to 4% with SSD caching. Even while dirty buffers are being flushed from SSD to HDD, the SSD cache still answers most read requests, so the user experience stays fairly good.
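
If you want to watch that yourself, the usual suspects give a decent
picture of how much of the load the SSDs actually absorb:

  iostat -x 5     # %iowait plus per-device utilization
  cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio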

We've set up RAID6 for the HDD backing store and a two-SSD RAID1 for the cache... and on top of each logical volume we have DRBD replication to a backup server (which was originally meant for running backups, but served nicely when the RAID6 went down).
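
Bottom to top, the stack looks roughly like this - hostnames, sizes
and device names are invented for the example, and I'm assuming here
that LVM sits directly on the bcache device:

  mdadm --create /dev/md/hdd --level=6 --raid-devices=6 /dev/sd[b-g]
  mdadm --create /dev/md/ssd --level=1 --raid-devices=2 /dev/sd[hi]
  make-bcache -C /dev/md/ssd -B /dev/md/hdd   # one-shot format + attach
  pvcreate /dev/bcache0
  vgcreate vg0 /dev/bcache0
  lvcreate -n data -L 1T vg0

and per logical volume a DRBD resource pointing at the backup box:

  resource r0 {
      device    /dev/drbd0;
      disk      /dev/vg0/data;
      meta-disk internal;
      on fileserver { address 192.168.0.1:7788; }
      on backupbox  { address 192.168.0.2:7788; }
  }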

The SSD cache is 128GB, with typically less than 4GB of dirty cache lines - so there's plenty of read cache, too.
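
bcache reports that directly, by the way:

  cat /sys/block/bcache0/bcache/dirty_data         # amount of dirty data
  cat /sys/block/bcache0/bcache/writeback_percent  # writeback throttle

so keeping an eye on the dirty share is cheap.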

Regards,
Jens
