Re: DRBD very slow....




On Fri, 2009-07-24 at 10:21 +0400, Roman Savelyev wrote:
> 1. You are hit by the Nagle algorithm (slow TCP response). You can build DRBD
> 8.3. In 8.3, "TCP_NODELAY" and "QUICK_RESPONSE" are implemented in place.
> 2. You are hit by the DRBD protocol. In most cases, "B" is enough.
> 3. You are hit by triple barriers. In most cases you need only one of
> "barrier, flush, drain" - see the documentation; it depends on the type of
> storage hardware.
> 

I have googled the triple barriers thing but can't find much
information.
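
From what I can piece together, the three mechanisms are a write barrier, a
disk flush, and a drain of the request queue, and the idea is to disable the
ones your storage stack makes unnecessary. In DRBD 8.3 the disk section
options apparently look something like this (option names taken from the
docs; I have not tried this exact combination):

disk {
    no-disk-barrier;    # skip barriers, e.g. if the I/O stack does not pass them through
    no-disk-flushes;    # skip explicit flushes, e.g. with a battery-backed write cache
                        # leaving drain as the one remaining ordering method
}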

Would it help if I used IPv6 instead of IPv4?
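
Also, on the protocol suggestion: if I read the docs right, trying protocol B
would just be a one-line change per resource, something like this (untested
here, "r0" is just an example resource name):

resource r0 {
    protocol B;    # memory synchronous: a write is acked once it reaches the peer's RAM
    ...
}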

Ross, here are the results of those tests you suggested:
________________________________________________________________________________________
For completeness here is my current setup:

host1: 10.99.99.2
Xeon Quad-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks in software raid level 1
LVM on top of the RAID for the dom0 root FS and for all domU root filesystems

host2: 10.99.99.1
Xeon Dual-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks in software raid level 1
LVM on top of the RAID for the dom0 root FS and for all domU root filesystems

common:
hosts are connected to local LAN
and directly to each other with a CAT6 gigabit crossover.

I have 6 DRBD devices running for 5 domUs over the back-to-back link.
DRBD version drbd82-8.2.6-1.el5.centos
_______________________________________________________________________




Ok, here is what I have done:

_______________________________________________________________________
I have added the following to the drbd config:
disk {
    no-disk-flushes;
    no-md-flushes;
}

That made the resync go up to 50MB/sec after I issued a
drbdsetup /dev/drbdX syncer -r 110M

It used to hover around 11MB/sec.

As far as I can tell it has improved the domUs' disk access as well.

I do see that there are a lot of warnings to be heeded about disabling disk
and metadata flushes, though.
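
If the 110M rate holds up, I will probably make it persistent in drbd.conf
instead of setting it with drbdsetup each time - something like this, if I am
reading the man page right:

syncer {
    rate 110M;    # cap the resync rate for the back-to-back gigabit link
}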
_______________________________________________________________________

iperf results:

on host 1:
# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  5] local 10.99.99.1 port 5001 connected with 10.99.99.2 port 58183
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.16 GBytes    990 Mbits/sec


on host 2:
# iperf -c 10.99.99.1
------------------------------------------------------------
Client connecting to 10.99.99.1, TCP port 5001
TCP window size: 73.8 KByte (default)
------------------------------------------------------------
[  3] local 10.99.99.2 port 58183 connected with 10.99.99.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.16 GBytes    992 Mbits/sec


I am assuming those results are about what you would expect from a
back-to-back gigabit link, i.e. roughly line rate.
_______________________________________________________________________

The dd test:
I think I did this completely wrong - how is it supposed to be done?

This is what I did:

host 1:
nc -l 8123 | dd of=/mnt/data/1gig.file oflag=direct
(/mnt/data is an ext3 FS on an LVM volume mounted in dom0 - not DRBD; I first
wanted to try it locally.)

host 2:
date; dd if=/dev/zero bs=1M count=1000 | nc 10.99.99.2 8123 ; date


I did not wait for it to finish... according to ifstat the average speed
I got during this transfer was 1.6MB/sec
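
Looking at it again, I suspect part of the problem is that the receiving dd
had no bs= set, so with oflag=direct it was doing synchronous 512-byte writes.
Something like this is probably a fairer test (untested sketch):

host 1:
nc -l 8123 | dd of=/mnt/data/1gig.file bs=1M iflag=fullblock oflag=direct

host 2:
dd if=/dev/zero bs=1M count=1000 | nc 10.99.99.2 8123

(iflag=fullblock makes dd accumulate full 1M blocks from the pipe so the
direct writes stay large and aligned.)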

_______________________________________________________________________

Any tips would be greatly appreciated.

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
