Re: high throughput storage server?

On Tue, Feb 15, 2011 at 7:39 AM, David Brown <david@xxxxxxxxxxxxxxx> wrote:
> This brings up an important point - no matter what sort of system you get
> (home made, mdadm raid, or whatever) you will want to do some tests and
> drills at replacing failed drives.  Also make sure everything is well
> documented, and well labelled.  When mdadm sends you an email telling you
> drive sdx has failed, you want to be /very/ sure you know which drive is sdx
> before you take it out!

Agreed!  This will be a learn-as-I-go project.
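
For the drill above, one way to be certain which physical drive is sdx is to match serial numbers: the persistent names under /dev/disk/by-id/ embed the model and serial, which can be checked against the sticker on the drive itself.  (The device name below is just an example.)

```shell
# The by-id symlinks embed model and serial number, which can be
# matched against the label on the physical drive.
ls -l /dev/disk/by-id/ | grep sdx
# smartmontools can also report the serial directly:
smartctl -i /dev/sdx | grep -i 'serial number'
```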

> You also want to consider your raid setup carefully.  RAID 10 has been
> mentioned here several times - it is often a good choice, but not
> necessarily.  RAID 10 gives you fast recovery, and can at best survive a
> loss of half your disks - but at worst a loss of two disks will bring down
> the whole set.  It is also very inefficient in space.  If you use SSDs, it
> may not be worth double the price to have RAID 10.  If you use hard disks,
> it may not be sufficient safety.
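
To put the quoted worst case in numbers: with n disks arranged as mirrored pairs and one disk already down, a second random failure only kills the array if it hits the survivor of the same pair, i.e. with probability 1/(n-1).  For example, with 8 disks:

```shell
# Odds that a second random disk failure is fatal to an 8-disk
# RAID 10: it must hit the one surviving mirror of the failed pair.
awk 'BEGIN { n = 8; printf "1/%d = %.1f%%\n", n - 1, 100 / (n - 1) }'
# 1/7 = 14.3%
```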

And that's what has me thinking about cluster filesystems.
Ultimately, I'd like a pool of storage "nodes".  These could live on
the same physical machine, or be spread across multiple machines.  To
the clients, this pool of nodes would look like one single collection
of storage.  The benefit of this, in my opinion, is flexibility
(mainly that it's easy to grow by adding new nodes), but also a bit
more safety.  If one node dies, it doesn't take down the whole pool;
only the files on that node become unavailable.

Even better would be a "smart" pool that, when a new node is added,
automatically redistributes the files so that the new node ends up
with roughly the same space utilization as all the others.
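
A toy sketch of the placement idea, assuming files are assigned to nodes by hashing their names (roughly what GlusterFS's distribute layout does): adding a node changes each file's target bucket, which is exactly why such a pool needs a rebalance pass.

```shell
# Toy placement: hash the file name and take it modulo the node
# count.  Adding a node changes the modulus, so existing files must
# be re-distributed to land on their new targets.
nodes=3
for f in photos.tar music.tar video.tar; do
  h=$(printf '%s' "$f" | cksum | cut -d ' ' -f 1)
  echo "$f -> node$(( h % nodes ))"
done
```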

> It is probably worth having a small array of SSDs (RAID1 or RAID10) to hold
> the write intent bitmap, the journal for your main file system, and of
> course your OS.  Maybe one of these absurdly fast PCI Express flash disks
> would be a good choice.
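
For reference, mdadm can keep the write-intent bitmap either inside the array metadata or in an external file on another device; the device and paths below are hypothetical examples.

```shell
# Hypothetical names: /dev/md0 is the main array, /ssd is a
# filesystem on the fast SSD mirror.  An external bitmap file must
# not live on the array it tracks.
mdadm --grow /dev/md0 --bitmap=/ssd/md0.bitmap
# Or keep the bitmap in the array's own metadata:
mdadm --grow /dev/md0 --bitmap=internal
```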

Is that really necessary, though, when writes probably account for
<5% of total IO operations?  And when (relatively speaking) write
performance is unimportant?
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

