Re: 4.4/64-bit Supermicro/ Nvidia RAID [thanks]

John R Pierce wrote:
Feizhou wrote:
Not so. I used to run boxes that handled 600 to 1000 SMTP connections each. Creating and deleting thousands of small files was the environment I worked in.

And if the power failed in the middle of this, how many messages were lost?

Heh, which boxes? The 3ware ones (750x series: no battery-backed cache, in fact no cache at all!), the IDE-disk-only ones, the SCSI-only ones, or the Compaq hardware RAID SCSI ones? Answer: only on two occasions did I get corrupted queue files, and that was because I used the XFS filesystem, which is a disaster in the event of power loss. That was on a 3ware box. I had no problems with the rest.



I must say that the 3ware cards in those boxes that had them were not 955x series and therefore had no cache. Perhaps things would have been different with 955x cards in there, but at that time the 9xxx 3ware cards were not even out yet.

We typically use 15,000 RPM SCSI or Fibre Channel storage for our databases, not SATA.

Ooh, nice hardware you have for your uber databases. The outfit I worked for did well with MySQL + software RAID/3ware + IDE disks. No need for FC.


Since you have clearly pointed out that the performance benefit really comes from the cache on the board (if you have enough of it), I do not see why software RAID plus a battery-backed RAM card like the Umem or even the Gigabyte i-RAM holding the filesystem journal would be any slower, if it is slower at all.

It's not the filesystem journal I'm talking about; it's an application-specific journal file, which contains the indices and state of the queue files, of which a very large number are constantly being written. We need to flush the queue files AND the journal files for it to be safe. These run around 10GB total as I understand it (not each flush, but the aggregate queues can be this big).

Now you are telling me that somehow you have code that makes your database stuff its journal into your RAID controller's cache. Cool, mind sharing it with the rest of us?

Let me just say that I know the kernel code for RAID controllers that have cache will, as you say, give the OK once the data that needs to be written hits the cache.
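
To put a number on what that acknowledgement hides, here is a rough timing sketch (the path and file count are made up) of the kind of per-message commit an MTA makes; each conv=fsync has to reach stable storage, so a write-back cache answers from RAM while bare disks wait on the platters:

    # Hypothetical test: 1000 small synchronous writes, the same
    # pattern a queue-file commit makes. Path is illustrative.
    mkdir -p /var/tmp/qtest
    time sh -c 'i=0; while [ $i -lt 1000 ]; do
        dd if=/dev/zero of=/var/tmp/qtest/$i bs=4k count=1 conv=fsync 2>/dev/null
        i=$((i+1))
    done'

Run that once against the hardware RAID and once against a plain disk and the difference you see is exactly the cache at work.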

In the case of a RAM card, I am pointing out that the same effect can be achieved by putting the journal of a journaling filesystem like ext3 on the RAM card, especially since ext3 supports data journaling too.
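
A sketch of the setup I mean, assuming the RAM card shows up as a block device (the device names here are illustrative; check what your card's driver actually registers):

    # Format the RAM card as an external ext3 journal
    mke2fs -O journal_dev /dev/umema
    # Create ext3 on the software RAID array, pointing at that journal
    mke2fs -j -J device=/dev/umema /dev/md0
    # Mount with full data journaling so file data also hits the RAM card first
    mount -t ext3 -o data=journal /dev/md0 /var/spool/queue

With data=journal, a write is committed once it lands in the journal on the RAM card, which is the same "OK once it hits the cache" behaviour, just without the controller.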

If the aggregate queues run up to 10GB, I really wonder how much faster your hardware RAID makes things, unless of course your cache is much larger than 2GB. Just on the basis of the inadequate size of your cache, I would give software RAID + RAM card the benefit of the doubt.


If the server has a write-back-enabled controller like an HP Smart Array 5i/532, it all works great. If it doesn't, it all grinds to a halt. Quite simple, really. We have absolutely no desire to start architecting around third-party RAM/battery disks; they won't be supported by our production system vendors, and they would make what is currently a fairly simple and robust system a lot more convoluted.

Yada yada. The Compaqs that had hardware RAID with SCSI disks were the slowest performers of all the boxes I managed, not to mention the lack of any monitoring tools under the 2.6 kernel. I am not telling you what hardware to use. What I am doing is contesting your claim that hardware RAID with battery-backed cache is hugely faster than software RAID. I will concede that there are cases where it will indeed be hugely faster, but not always.
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
