Re: Building a Backblaze-style pod

On Sun, May 8, 2011 at 8:03 PM, Jason <slackmoehrle.lists@xxxxxxxxx> wrote:
Hi All,

I am about to embark on a project that deals with archiving information over time and seeing how it changes over time. I can explain it a lot better, but I would certainly talk your ear off. I really don't have a lot of money to throw at the initial concept, but I have some. This device will host all of the operations for the first few months, until I can afford to build a duplicate. I already have a few parts of the idea done and ready to go live.

I am contemplating building a Backblaze-style pod. The goal of the device is to act as a place where the crawlers store information, massage it, load it into databases, and then notify the user that the task is done so they can start looking at the results.

For reference here are a few links:

http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

and

http://cleanenergy.harvard.edu/index.php?ira=Jabba&tipoContenido=sidebar&sidebar=science

There is room for 45 drives in the case (technically a few more).

45 x 1TB 7200RPM drives are really cheap, about $60 each.

45 x 1.5TB 7200RPM drives are about $70 each.

45 x 2TB 7200RPM drives are about $120 each.

45 x 3TB 7200RPM drives are about $180-$230 each (or more; some are almost $400).
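
(At those prices that works out to about $60/TB for the 1TB and 2TB drives, roughly $47/TB for the 1.5TB drives, and $60-$77/TB for the 3TB drives, so the 1.5TB drives are the cheapest per terabyte: about $3,150 for all 45, versus $2,700 for 45TB raw with the 1TB drives and $5,400 for 90TB raw with the 2TB drives.)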

I have some questions before I commit to building one, and I was hoping to get advice.

1. Can anyone recommend a mobo/processor setup that can hold lots of RAM, like 24GB, 64GB, or more?

Any brand of server motherboard will do. I prefer Supermicro, but you can use Dell, HP, Intel, etc.
 

2. Hardware RAID or Software RAID for this?

Hardware RAID will be expensive on 45 drives. If you can, split the 45 drives into a few smaller RAID arrays. Rebuilding one large 45-drive RAID array, whether hardware or software, would probably take a week or more, depending on which RAID level you use - i.e. RAID 5, 6, or 10. I prefer RAID 10 since it's best for speed and its rebuilds are the quickest, but you lose half the space: 45x 1TB drives will give you about 22TB of usable space, while 45x 2TB drives would give you about 44TB.
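
If you go the software route, each of those smaller arrays is a one-liner with mdadm. A rough sketch, assuming ten hypothetical drives /dev/sdb through /dev/sdk per array:

    # One 10-drive RAID 10 array; repeat with the next group of
    # drives for /dev/md1, /dev/md2, and so on.
    mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]

    # Watch the initial sync (and any later rebuild) here:
    cat /proc/mdstat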
 

3. Would CentOS be a good choice? I have never used CentOS on a device this massive, just ordinary servers, so to speak. I assume that it could handle so many drives and a large, expanding file system.

Yes, it would be fine.
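
For the "large, expanding file system" part, one common approach on CentOS is to put the arrays under LVM and grow the file system as you add arrays. A rough sketch, with hypothetical array, volume group, and mount names:

    # Fold a newly built array into the volume group, then grow:
    pvcreate /dev/md1
    vgextend datavg /dev/md1
    lvextend -l +100%FREE /dev/datavg/datalv
    xfs_growfs /data     # XFS can be grown while mounted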

 

4. Someone recommended ZFS, but I don't recall that being available on CentOS; it is on FreeBSD, which I have little experience with.

I would also prefer ZFS for this type of setup. Use one 128GB SLC-type SSD as a cache (L2ARC) device to speed things up, and two drives as log (ZIL) devices to help with recovery. With ZFS and the log devices you could get away with one large pool, since it recovers from drive failure much better than other file systems. You can run ZFS on Linux as user-land tools (zfs-fuse), but that is slower than running it in the kernel, so it would be better to use Solaris or FreeBSD for this - look at Nexenta / FreeNAS / OpenIndiana.
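
A rough sketch of what that pool could look like on FreeBSD (pool and device names are hypothetical, and only the first few mirror pairs are shown - continue the pattern across all 45 data drives):

    # Mirrored pairs (RAID 10 style), plus mirrored log devices
    # and one SSD cache device:
    zpool create tank \
        mirror da0 da1 mirror da2 da3 mirror da4 da5 \
        log mirror ada0 ada1 \
        cache ada2

    zpool status tank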
 

5. How would someone realistically back something like this up?

To another one just as large :)

Or, more realistically: if you already have some backup servers and the full 45TB isn't full of data yet, then simply back up what you have. By the sound of it your project is still new, so your data won't be that much. I would build a Gluster / CLVM cluster of smaller, cheaper servers, which basically allows you to add, say, 4TB or 8TB at a time to the backup cluster (depending on what chassis you use and how many drives it can take). That will be cheaper than buying a second pod identical to this one right now.
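
Growing a Gluster backup cluster like that is straightforward. A rough sketch, with hypothetical server names and brick paths:

    # Replicated volume across two backup servers:
    gluster volume create backupvol replica 2 \
        backup1:/export/brick1 backup2:/export/brick1
    gluster volume start backupvol

    # Later, grow it a pair of servers at a time:
    gluster volume add-brick backupvol \
        backup3:/export/brick1 backup4:/export/brick1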
 

Ultimately I know that over time I need to distribute my architecture and have a number of web servers, load balancing, etc., but to get started I think this device with good backups might fit the bill.

If this device will be used for web + mail + SQL, then you should probably look at 4 quad-core CPUs + 128GB RAM. With this many drives (or rather, this much data) you'll probably run out of RAM / CPU / network resources before you run out of HDD space.



With a device this big (in terms of storage) I would rather have two separate "processing" servers which just mount LUNs from this pod (exported as NFS / iSCSI / FCoE / etc.) and give them a few faster SAS / SSD drives for SQL / log processing.
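
The NFS option is just an export on the pod and a mount on each processing server. A sketch with hypothetical hostnames and paths:

    # /etc/exports on the pod:
    /data  proc1(rw,async,no_root_squash)  proc2(rw,async,no_root_squash)

    # Reload the export table on the pod:
    exportfs -ra

    # On each processing server:
    mount -t nfs pod:/data /mnt/pod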
 

I can be way more detailed if it helps; I just didn't want to clutter things with information that might not be relevant.
--
Jason




--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
