Re: Preventing clvmd timeouts

On 01/26/2012 11:27 AM, Alan Brown wrote:
> On 26/01/12 16:05, Digimer wrote:
> 
>>> Is anyone actually using DRBD for serious cluster implementations (ie,
>>> production systems) or is it just being used for hobby/test rigs?
>>
>> I use it rather extensively in production. I use it to back clustered
>> LVM-backed virtual machines and GFS2 partitions. I stick with 8.3.x
>> (8.3.12 now) and have no problems with it.
> 
> What kind of load are you throwing at this setup?
> 
> Right now we're using a SAN but I've been asked to look into
> geographical redundancy and there's only ethernet available between the
> locations.
> 
> AB

Geographic DRBD will be tricky if you intend to maintain synchronous
replication. In a synchronous setup, each write is not confirmed until
the data has been committed to both nodes, which effectively drops your
disk speed (both throughput and latency) to that of the network link.
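For reference, here is a minimal sketch of what I mean, with made-up
host names, devices and addresses, using protocol C (fully synchronous)
as in DRBD 8.3:

    resource r0 {
        protocol C;   # write returns only after both nodes have it on disk
        on an-node01 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on an-node02 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

With protocol C, every write waits for at least one round trip over that
link, which is exactly why disk latency degrades to network latency.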

Linbit does have a "proxy" asynchronous configuration which is designed
for WAN/stretch clusters, but I have not played with it. Linbit should
be able to help you decide if it would suit your needs.
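As an aside, and separate from Linbit's proxy product (which, again, I
have not used), stock DRBD does have an asynchronous mode of its own.
Switching the sketch above to

    protocol A;   # write completes once on local disk and in the local TCP send buffer

decouples write latency from the link, at the cost of possibly losing
the most recent writes if the primary dies. Whether that trade-off is
acceptable is the real question for geographic replication.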

In my case, I build a good number of 2-node clusters backed by DRBD,
each hosting 4~5 VMs. The main issue I run into is high seek latency:
the VMs generate highly random I/O against physically different parts
of the platters, which keeps the read/write heads of spinning drives
constantly in motion. That problem sits below DRBD, though, and DRBD
itself has never been a bottleneck for me.
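If you want to confirm where the latency lives, something along these
lines (device names are just placeholders) compares the backing disk
against the DRBD device:

    # Extended stats every 2 seconds; high await / %util on sdb while
    # drbd0 just mirrors it points at seek-bound platters, not DRBD.
    iostat -x sdb drbd0 2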

Back to the seek I/O issue: I work around it with varying combinations
of high-RPM drives, write-caching HBAs and splitting arrays up to keep
high disk I/O VMs on separate spindles from one another. A common setup
is 4 or 6 disks per node, split into two RAID 1 or RAID 5 arrays backing
two separate DRBD resources. I then put clustered LVM on each resource
and mix high and low disk I/O VMs across the arrays. Decent write
caching helps a good deal.
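As a rough sketch of that layout (VG and LV names and sizes are just
examples), the two DRBD resources end up as two clustered volume groups:

    # One PV per DRBD resource
    pvcreate /dev/drbd0 /dev/drbd1

    # Clustered VGs so clvmd coordinates metadata across both nodes
    vgcreate -c y an-vg0 /dev/drbd0
    vgcreate -c y an-vg1 /dev/drbd1

    # Carve out per-VM LVs; mix busy and quiet guests across the two VGs
    lvcreate -L 50G -n vm01_disk an-vg0
    lvcreate -L 50G -n vm02_disk an-vg1

High-I/O guests go on one array and low-I/O guests on the other, so
their seeks don't fight over the same spindles.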

One thing I plan to test soon is LSI's CacheCade v2, which lets you use
standard SSDs as large read/write caches.

-- 
Digimer
E-Mail:              digimer@xxxxxxxxxxx
Papers and Projects: https://alteeve.com

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

